Total publications in this section: 124
Article
Петренко А. К., Кулямин В. В., Хорошилов А. В. Proceedings of the Institute for System Programming of the RAS. 2015. Vol. 27. No. 5. P. 175-190.
Added: Aug 28, 2017
Article
Казаков К., Семенов В. А. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 5. P. 185-238.
Added: Dec 12, 2018
Article
Аничкин А., Семенов В. А. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 3. P. 247-296.
Added: Dec 12, 2018
Article
С.Д. Кузнецов. Proceedings of the Institute for System Programming of the RAS. 2015. Vol. 27. No. 1. P. 173-192.

In 2005, I wrote an article discussing the most important features of the ODMG 3.0 standard (the ODMG object model) and of SQL:2003 (the SQL data model), in which I argued, convincingly as it seemed to me, that the similarity between the object model and the object extensions of SQL is purely superficial, and that syntactic constructions close in spirit hide deep differences at the model level. In the many years since, I have come to understand much of what I then understood wrongly or insufficiently, and I have gradually reached the following conclusions: 1) the differences that seemed deep to me are not, and indeed are not differences at the model level; 2) the object extensions of SQL provide no fewer (and rather more) capabilities than the ODMG object model; 3) if a DBMS based on the ODMG data model is used reasonably (from the standpoint of the database community), one will create databases and tools for manipulating them similar to those prescribed by the SQL data model.

Added: Jan 23, 2018
Article
Дворянский Л. В. Proceedings of the Institute for System Programming of the RAS. 2011. No. 20. P. 71-94.
Added: Sep 20, 2012
Article
Кудряшов Е. А., Мельник Д. М., Монаков А. В. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 1. P. 63-80.

The paper discusses an optimization approach for external calls in position-independent code that is based on loading the callee address at the call site directly from the Global Offset Table (GOT), avoiding the Procedure Linkage Table (PLT). Normally the Linux toolchain creates the PLT both in the main executable (which comprises position-dependent code and has to rely on the PLT mechanism to make external calls) and in shared libraries, where the PLT serves to implement lazy binding of dynamic symbols but is not otherwise required. However, calls via the PLT have some overhead due to an extra jump instruction and poorer instruction cache locality, and on some architectures the binary interface of PLT calls constrains compiler optimization at the call site. The overhead of PLT calls can be avoided by loading the callee address from the GOT at the call site and performing an indirect call, although this prevents lazy symbol resolution and may increase code size. We implement this code generation variant in the GCC compiler for the x86 and ARM architectures. On ARM, loading the callee address from the GOT at the call site normally requires a complex sequence with three load instructions. To improve this, we propose new relocation types that allow building a PC-relative address of a given GOT slot with a pair of movt/movw instructions, and implement these relocation types in GCC and Binutils (assembler and linker) for both ARM and Thumb-2 modes. Our evaluation shows that the proposed optimization yields performance improvements on both x86 (up to 12% with Clang/LLVM built as multiple shared libraries, on big translation units) and ARM (up to 7% with SQLite, averaged over several tests), even though code size on ARM grows by 13-15%.

Added: Nov 5, 2018
Article
Аветисян А. И., Мельник Д., Курмангалеев Ш. Ф. et al. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 1. P. 343-356.

The paper describes a workflow for optimizing programs for performance on a fixed hardware architecture with static compilation, using the GCC and LLVM compilers as examples. The workflow has gradually grown within the ISP RAS Compiler Technology Team during its work on GCC and LLVM compiler optimization. We start by preparing a benchmark from the given application, and then proceed in parallel with manual analysis of the generated assembly code and an automatic compiler tuning tool. In general, a compiler optimization improvement produced by manual analysis gives a 1-5% speedup, while automatic tuning may give up to a 10-20% speedup. However, the manual analysis results are usually valid for a broad class of applications and are contributed to the open-source compilers, while the automatic tuning results make sense only for the given application. We present some of the optimizations performed, e.g. improved NEON and Thumb-2 support in GCC, vectorization improvements in LLVM, and register allocation improvements in LLVM, together with the corresponding evaluation results. We also describe TACT, the aforementioned tool for automatic compiler tuning for a given application, and example use cases for both an application developer and a compiler engineer. We give a sample of TACT optimization results.

Added: Mar 22, 2017
Article
Емеленко А., Маллачиев К., Пакулин Н. В. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 4. P. 295-302.
Added: Feb 12, 2018
Article
С.Д. Кузнецов, Мендкович Н. А. Proceedings of the Institute for System Programming of the RAS. 2013. Vol. 25. P. 113-130.

This paper describes enhanced algorithms for lexical query optimization. These algorithms detect and remove redundant conditions from a query restriction in order to simplify it. The paper also presents the results of implementing these optimization techniques and their effect on query processing speed. The paper includes four sections. The first section (Introduction) provides the general context of the paper. The second section describes three proposed algorithms for lexical query optimization. The first is the absorption algorithm, which finds and removes a wide set of conditions that are redundant but not textually equal, even after standardization of the whole restriction expression. The second algorithm is an adaptation of the well-known Quine-McCluskey algorithm, originally designed for the minimization of Boolean functions. The last algorithm is based on techniques for the optimization of systems of linear inequalities. The third section discusses the results of an efficiency evaluation of the proposed algorithms. The fourth section concludes the paper.
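
As a toy illustration (in Python, not the authors' implementation), the absorption rule from the second section can be sketched over a restriction in disjunctive normal form, where a clause is redundant whenever another clause is a subset of it (A OR (A AND B) == A):

```python
def absorb(dnf):
    """Absorption over a DNF restriction: drop clauses that contain
    another clause as a subset, since A OR (A AND B) == A.

    dnf: iterable of clauses, each clause a set of atomic conditions
    (represented here as opaque strings).
    """
    # Deduplicate, then scan shortest-first so absorbers come before
    # the clauses they absorb.
    clauses = sorted({frozenset(c) for c in dnf}, key=len)
    kept = []
    for c in clauses:
        if not any(k <= c for k in kept):  # c absorbed by a kept clause
            kept.append(c)
    return kept
```

For example, `absorb([{"x > 5"}, {"x > 5", "y = 1"}, {"y = 1", "z < 0"}])` keeps only `{"x > 5"}` and `{"y = 1", "z < 0"}`.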

Added: Jan 30, 2018
Article
Дробышевский М., Коршунов А., Turdakov D. Y. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 6. P. 153-170.
Added: Aug 28, 2017
Article
Трофимович Ю., Козлов И., Турдаков Д. Ю. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 6. P. 185-196.
Added: Aug 28, 2017
Article
Гетьман А. И., Маркин Ю. В., Обыденков Д. О. et al. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 6. P. 103-110.
Added: Sep 5, 2019
Article
Подымов В. В., Захаров В. А. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 3. P. 145-166.
Added: Sep 29, 2015
Article
Захаров В. А., Новикова Т. А. Proceedings of the Institute for System Programming of the RAS. 2012. Vol. 22. P. 435-455.
Strong (logic&term) equivalence of programs is the weakest decidable equivalence relation that approximates the functional equivalence of programs. In this paper we develop a new variant of an algorithm for checking strong equivalence of programs. A distinguishing feature of our algorithm is that it relies entirely on the algebra of finite substitutions, which includes the operations of composition and antiunification. The paper begins with a short introduction to the theory of first-order substitutions. In the next section the formal first-order model of sequential programs and strong equivalence are defined, and a preliminary variant of the equivalence checking algorithm is described. This algorithm deals with the composition and antiunification operations on finite substitutions. Since the complexity of these operations depends on the data structures that represent substitutions, in Section 4 we consider graph-theoretic representations of substitutions, as well as upper and lower bounds on the complexity of the basic operations on them. To reduce the complexity of the equivalence checking algorithm for sequential programs, in Section 5 we introduce a new class of truncated substitutions and study the basic algebraic properties of the composition and antiunification operations on them. In Section 6 we show that both operations on truncated substitutions can be performed in time O(n^2). Finally, in Section 7, using the properties of truncated substitutions, we introduce an improved version of the equivalence checking algorithm and prove that its time complexity is O(n^6) in the size of the programs to be checked.
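
For intuition about the antiunification operation the algorithm relies on, here is a Plotkin-style least general generalization over plain first-order terms. This Python sketch is purely illustrative and is not the paper's truncated-substitution version:

```python
def lgg(s, t, table=None):
    """Least general generalization (antiunification) of two terms.

    Terms: a constant is a string; a compound term is a tuple
    (functor, arg1, ..., argn). Each distinct mismatched pair of
    subterms maps to its own fresh variable '?0', '?1', ...
    """
    if table is None:
        table = {}
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: generalize argument-wise.
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    if s == t:
        return s
    if (s, t) not in table:  # reuse the variable for a repeated mismatch
        table[(s, t)] = '?%d' % len(table)
    return table[(s, t)]
```

For example, `lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b")))` yields `("f", "?0", ("g", "?0"))`: the repeated mismatch a/b is generalized by a single shared variable.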
Added: Sep 30, 2015
Article
Федотов А., Каушан В., Гайсарян С. С. et al. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 6. P. 151-162.

Approaches to code execution using program vulnerabilities are considered in this paper. In particular, ways of achieving code execution using buffer overflows on the stack and on the heap, use-after-free vulnerabilities, and format string vulnerabilities are examined in Section 2. Methods for the automatic generation of input data leading to code execution are described in Section 3. These methods are based on dynamic symbolic execution, which makes it possible to obtain input data that drives the program along the path that triggers a vulnerability. The security predicate is an extra set of symbolic formulas describing a program state in which code execution is possible. To obtain input data leading to code execution, the path and security predicates are united, and the whole system is then solved. Security predicates for pointer overwrite, function pointer overwrite, and a format string vulnerability leading to a stack buffer overflow are presented in the paper. The presented security predicates were used in a method for estimating the severity of software defects. The method was applied to several binaries from the DARPA Cyber Grand Challenge. The security predicate for a format string vulnerability leading to a buffer overflow was tested on a vulnerable version of OllyDbg. As a result of the testing, it was possible to obtain input data that leads to code execution.
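
The idea of uniting a path predicate with a security predicate can be shown with a deliberately tiny model (illustrative only, not the authors' tooling; in practice the united symbolic formulas are handed to an SMT solver, for which brute-force search stands in here):

```python
from itertools import product

# Toy model: a 2-byte input reaches a vulnerable printf-like call
# only if it starts with '%' (path predicate), and code execution is
# possible only if the conversion specifier is 'n' (security
# predicate). Both names and the alphabet are illustrative.

def path_predicate(inp):
    return inp[0] == "%"        # input drives execution to the sink

def security_predicate(inp):
    return inp[1] == "n"        # state in which code execution is possible

def solve(alphabet="%dnsx"):
    # Unite both predicates and search for a satisfying input.
    for cand in map("".join, product(alphabet, repeat=2)):
        if path_predicate(cand) and security_predicate(cand):
            return cand
    return None
```

Here `solve()` finds the input `"%n"`, the classic write-primitive format specifier.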

Added: Feb 12, 2018
Article
Гуськова М. С., Бараш Л. Ю., Щур Л. Н. Proceedings of the Institute for System Programming of the RAS. 2018. Vol. 30. No. 1. P. 115-126.

The generation of uniformly distributed random numbers is necessary for computer simulation by Monte Carlo and molecular dynamics methods. Pseudorandom number generators (PRNGs) are used for this purpose: they compute numbers by deterministic algorithms, but the resulting sequence has the properties of a random one. For a number of problems using Monte Carlo methods, random number generation takes up a significant share of the computation time, so increasing the generation speed is an important task. This paper describes the application of SIMD (Single Instruction, Multiple Data) instructions to parallelize the generation of pseudorandom numbers. We review SIMD instruction set extensions such as MMX, SSE, AVX2, and AVX512. An AVX512 implementation is given for the LFSR113 pseudorandom number generator, and performance is compared across the different implementations of the algorithm.
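
As a scalar reference point, the lfsr113 recurrence (shift and mask constants as in L'Ecuyer's published generator) can be written in Python; the SIMD variants discussed in the paper evaluate the same per-stream recurrence in each vector lane. This sketch is illustrative and is not the paper's AVX512 code:

```python
MASK = 0xFFFFFFFF  # emulate 32-bit unsigned wrap-around

class LFSR113:
    """Scalar reference implementation of L'Ecuyer's lfsr113.

    Combines four linear feedback shift register components by XOR.
    Seeds must satisfy z1 > 1, z2 > 7, z3 > 15, z4 > 127.
    """
    def __init__(self, z1=12345, z2=12345, z3=12345, z4=12345):
        self.z = [z1, z2, z3, z4]

    def next_u32(self):
        z1, z2, z3, z4 = self.z
        b  = (((z1 << 6) & MASK) ^ z1) >> 13
        z1 = (((z1 & 4294967294) << 18) & MASK) ^ b
        b  = (((z2 << 2) & MASK) ^ z2) >> 27
        z2 = (((z2 & 4294967288) << 2) & MASK) ^ b
        b  = (((z3 << 13) & MASK) ^ z3) >> 21
        z3 = (((z3 & 4294967280) << 7) & MASK) ^ b
        b  = (((z4 << 3) & MASK) ^ z4) >> 12
        z4 = (((z4 & 4294967168) << 13) & MASK) ^ b
        self.z = [z1, z2, z3, z4]
        return z1 ^ z2 ^ z3 ^ z4
```

An AVX512 version packs sixteen such 32-bit streams into one 512-bit register, so the four shift/mask/xor steps update all streams with the same instruction sequence.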

Added: Mar 26, 2018
Article
Захаров В. А., Новикова Т. А. Proceedings of the Institute for System Programming of the RAS. 2011. Vol. 21. P. 141-166.
Many problems in software engineering, such as program refactoring, deobfuscation, and vulnerability detection, require an efficient toolset for detecting pieces of code that have similar behavior. The current state of the art in software clone detection makes it possible to find only those pieces of code which have the same syntactic structure. A more profound analysis of the functional properties of programs runs into the undecidability of the equivalence checking problem for Turing-complete models of computation. To overcome this principal deficiency of modern clone detection techniques, we suggest using equivalence relations on programs that are stronger than functional equivalence. One such equivalence is the so-called logic&term program equivalence introduced by V.E. Itkin in 1972. In this paper we build a new algorithm for checking the logic&term equivalence of programs. This algorithm is based on the operations of computing the least upper bound and the greatest lower bound in the lattice of finite substitutions. Using this algorithm we also develop a procedure for solving the unification problem for sequential programs, i.e. the problem of computing the most general common instance (specialization) of a given pair of programs (pieces of code).
Added: Sep 30, 2015
Article
Захаров В. А., Подымов В. В. Proceedings of the Institute for System Programming of the RAS. 2015. Vol. 27. No. 4.

Equivalence checking algorithms have found vast application in system programming; they are used in software refactoring, security checking, malware detection, program integration, regression verification, and compiler verification and validation. In this paper we show that equivalence checking procedures can be utilized for the development of global optimizing transformations of programs. We consider the minimization problem for two formal models of programs: deterministic finite state transducers over finitely generated decidable groups, which are used as a model of sequential reactive programs, and deterministic program schemata, which are used as a model of sequential imperative programs. The minimization problem for both models can be solved following the same approach that is used for the minimization of finite state automata, by means of two basic optimizing transformations: removing useless states and joining equivalent states. The main results of the paper are Theorems 1 and 2. Theorem 1: if G is a finitely generated group and the word problem in G is decidable in polynomial time, then the minimization problem for finite state deterministic transducers over G is decidable in polynomial time as well. Theorem 2: if S is a decidable left-contracted ordered semigroup of basic program statements and the word problem in S is decidable in polynomial time, then the minimization problem for program schemata operating on the interpretation over S is decidable in polynomial time as well.
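
The two optimizing transformations named above can be illustrated on plain DFAs (a sketch only, not the paper's transducer or schemata algorithms): remove unreachable states, then refine the accepting/non-accepting partition until no block splits, which joins the equivalent states:

```python
def minimize_dfa(alphabet, delta, start, accepting):
    """Minimize a complete DFA with the two classic transformations.

    delta: dict mapping (state, symbol) -> state.
    Returns the number of states after removing unreachable states
    and joining equivalent ones (Moore-style partition refinement).
    """
    # Transformation 1: remove useless (unreachable) states.
    reach, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for a in alphabet:
            t = delta[(s, a)]
            if t not in reach:
                reach.add(t)
                stack.append(t)
    # Transformation 2: join equivalent states. Start from the
    # accepting/non-accepting split and refine until stable.
    label = {s: int(s in accepting) for s in reach}
    while True:
        sig = {s: (label[s],)
                  + tuple(label[delta[(s, a)]] for a in sorted(alphabet))
               for s in reach}
        ranks = {v: i for i, v in enumerate(sorted(set(sig.values())))}
        new = {s: ranks[sig[s]] for s in reach}
        # Class count can only grow; equality means the partition is stable.
        if len(set(new.values())) == len(set(label.values())):
            return len(set(new.values()))
        label = new
```

For instance, in a four-state automaton over {"a"} where state 3 is unreachable and accepting states 1 and 2 map into each other, the result has two states.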

Added: Oct 13, 2015
Article
С.С. Гайсарян, Нурмухаметов А., Курмангалеев Ш. et al. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 3. P. 113-126.
Added: Sep 13, 2016
Article
С.С. Гайсарян, Курмангалеев Ш., Долгорукова К. et al. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 1. P. 315-326.
Added: Sep 13, 2016