Total publications in this section: 114
Article
Kuliamin V., Lavrischeva E. M., Mutilin V. S. et al. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 3. P. 189-208.

This paper addresses problems of analysis and verification of complex modern operating systems, which must take into account the variability and configurability of those systems. The main problems of current interest are related to conditional compilation, the variability mechanism most widely used in the system software domain. It makes fruitful analysis of the separate pieces of code combined into system variants impossible, because most of these pieces have no interface or behavior of their own. On the other hand, analysis of all the separate variants is also impossible due to their enormous number. The paper provides an overview of analysis methods able to cope with these problems, distinguishing two classes of approaches: analysis of a sample of variants selected according to some variant coverage criterion, and variation-aware analysis, which processes many variants simultaneously and exploits similarities between them to minimize the resources required. For future development we choose the most scalable techniques: sampling analysis based on code coverage and on coverage of feature combinations, and variation-aware analysis using the counterexample-guided abstraction refinement approach.
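Sampling by coverage of feature combinations can be illustrated with a toy pairwise-sampling sketch (the feature names and the greedy strategy below are illustrative assumptions, not taken from the paper): configurations are greedily selected until every pair of feature settings occurs in at least one sampled variant.

```python
from itertools import combinations, product

# Hypothetical feature model: three boolean configuration options.
features = ["SMP", "DEBUG", "NUMA"]

def pairs(config):
    """All (feature, value) pairs covered by one configuration."""
    return set(combinations(sorted(config.items()), 2))

# Every variant of the hypothetical system.
all_configs = [dict(zip(features, vals)) for vals in product([False, True], repeat=3)]
# Every 2-way feature combination that must be covered.
required = set().union(*(pairs(c) for c in all_configs))

# Greedily pick the variant covering the most still-uncovered pairs.
sample, covered = [], set()
while covered != required:
    best = max(all_configs, key=lambda c: len(pairs(c) - covered))
    sample.append(best)
    covered |= pairs(best)

print(len(sample), "of", len(all_configs), "variants cover all pairs")  # 4 of 8
```

Even in this tiny model, half the variants suffice for full pairwise coverage; the gap widens rapidly as the number of features grows, which is what makes sampling analysis scalable.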

Added: Aug 11, 2018
Article
Silakov D. V. Proceedings of the Institute for System Programming of the RAS. 2008. Vol. 14. No. 2. P. 159-178.
Added: Sep 22, 2015
Article
Silakov D. V., Zelenov S. V. Proceedings of the Institute for System Programming of the RAS. 2006. Vol. 9. P. 129-142.
Added: Sep 22, 2015
Article
Kuznetsov S. D., Turdakov D. Yu., Borisenko O. D. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 4. P. 33-44.

This article is dedicated to automating cluster creation and management for the Apache Spark MapReduce implementation in OpenStack environments. As a result of this project, an open-source (Apache 2.0 license) toolchain for on-demand creation of virtual clusters in OpenStack environments was presented. The article contains an overview of existing solutions for clustering automation in cloud environments as of the beginning of 2014. It also briefly reviews issues in the OpenStack Heat project, which provides a compatibility layer for the Amazon EC2 API. The final implementation is an almost straightforward port, with some improvements, of an existing toolchain for automated Apache Spark cluster creation in the Amazon EC2 environment. A prepared base-system virtual machine image for OpenStack is also provided. Plans for further work are connected with the Ansible project: using Ansible for this problem would make it possible to implement a versatile, environment-agnostic solution able to work with any cloud computing provider, a set of Docker containers, or bare-metal clusters, without any dependency on a prepared operating system image. Ansible was not used in the current work because it lacked key features when the project started. The solution described here has already been tested in a production environment for a graph theory research article.

Added: Nov 26, 2017
Article
Vyalyi M. N. Proceedings of the Institute for System Programming of the RAS. 2004. Vol. 6. P. 51-64.
Added: Oct 17, 2014
Article
Kuznetsov S. D., Prokhorov A. A. Proceedings of the Institute for System Programming of the RAS. 2012. Vol. 23. P. 173-194.

One of the most important ways of increasing the speed of modern databases is to cache frequently used data in RAM. Classical replacement policies are intended to minimize the number of buffer pool faults. This optimization implicitly relies on the fact that the speeds of reading from and writing to a hard disk are equal. Gradual technology improvement and cost reduction of flash memory have led to the creation of solid-state drives (SSD), which are now increasingly used in personal computers and storage systems. Flash drives have advantages over traditional hard drives; the most important of them are high read and write speeds and significantly smaller random data access time. However, the most popular flash memory types read data faster than they write it. Because of this, classical replacement algorithms for disk data caching are ineffective. This paper reviews recently developed buffer pool management algorithms designed to work with flash memory drives: CFDC (Clean First – Dirty Clustered), CASA (Cost-Aware Self-Adaptive), SAWC (Self Adaptive with Write Clustering), and FD-Buffer. Some of these algorithms demonstrate significant advantages over the classical LRU algorithm.
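The clean-first idea behind algorithms like CFDC can be sketched in a few lines (an illustrative toy, not the paper's or CFDC's actual algorithm, which additionally clusters dirty pages): when a victim must be chosen, prefer a clean page, because flushing a dirty page costs a flash write while re-reading a clean one costs only a cheaper flash read.

```python
from collections import OrderedDict

class CleanFirstLRU:
    """Toy flash-aware buffer pool: evict the least-recently-used
    *clean* page first; fall back to the LRU dirty page only when
    every buffered page is dirty."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()      # page_id -> dirty flag, in LRU order

    def access(self, page_id, write=False):
        if page_id in self.pages:
            dirty = self.pages.pop(page_id) or write
        else:
            if len(self.pages) >= self.capacity:
                self.evict()
            dirty = write
        self.pages[page_id] = dirty     # re-insert at most-recently-used end

    def evict(self):
        for pid, dirty in self.pages.items():   # iterates oldest first
            if not dirty:
                del self.pages[pid]             # cheapest victim: a clean page
                return
        self.pages.popitem(last=False)          # all dirty: plain LRU eviction

cache = CleanFirstLRU(2)
cache.access(1, write=True)   # page 1 is dirty
cache.access(2)               # page 2 is clean
cache.access(3)               # evicts clean page 2, although page 1 is older
```

Plain LRU would have evicted page 1 here, forcing a costly flash write; the clean-first variant trades recency for write avoidance.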

Added: Nov 1, 2017
Article
Lavrischeva E. M., Pakulin N. V., Ryzhov A. G. et al. Proceedings of the Institute for System Programming of the RAS. 2018. Vol. 30. No. 3. P. 99-120.
Added: Aug 11, 2018
Article
Kuznetsov S., Turdakov D., Korshunov A. V. et al. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 1. P. 439-456.
Added: Nov 25, 2017
Article
Gaissaryan S. S., Samovarov O. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 1. P. 403-420.
Added: Sep 13, 2016
Article
Klemenkov P. A., Kuznetsov S. D. Proceedings of the Institute for System Programming of the RAS. 2012. Vol. 23. P. 143-158.

Big data has challenged traditional storage and analysis systems in several new ways. In this paper we try to figure out how to overcome these challenges, explain why this cannot be done efficiently with traditional means, and describe three modern approaches to big data handling: NoSQL, MapReduce, and real-time stream processing. The first section of the paper is the introduction. The second section discusses the main issues of Big Data: volume, diversity, velocity, and value. The third section describes different approaches to solving the Big Data problem. Traditionally, one might use a relational DBMS. The paper proposes some steps that allow continued use of an RDBMS when its capacity becomes insufficient. Another way is the NoSQL approach, whose basic ideas are simplification, high throughput, and unlimited scaling out. Different kinds of NoSQL stores make such systems suitable for different Big Data applications. MapReduce and its free implementation Hadoop may be used to scale out Big Data analytics. Finally, several data management products support real-time stream processing of Big Data. The paper briefly overviews these products. The final section of the paper is the conclusion.
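The MapReduce programming model mentioned above can be illustrated with a minimal in-process word count (a sketch of the model only, not of Hadoop's actual machinery):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values of each key."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big deal", "data is big"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])   # 3
```

In a real cluster the map and reduce phases run in parallel on many machines and the shuffle moves data over the network; the scaling-out comes from the fact that both user-written phases are embarrassingly parallel.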

Added: Oct 31, 2017
Article
Kuliamin V. V., Lavrischeva E., Mutilin V. et al. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 3. P. 189-208.
Added: Aug 28, 2017
Article
Koshkarev A., Medvedev A., Vishnyakov Yu. et al. Proceedings of the Institute for System Programming of the RAS. 2012. Vol. 23. P. 245-256.
Added: Jan 26, 2016
Article
Tatarnikov A. D., Kamkin A. S., Protsenko A. S. et al. Proceedings of the Institute for System Programming of the RAS. 2016. Vol. 28. No. 6. P. 87-102.

ARM is a family of microprocessor instruction set architectures developed by the company of the same name. The newest architecture of this family, ARMv8, contains a large number of instructions of various types and is notable for its complex organization of virtual memory, which includes hardware support for multilevel address translation and virtualization. All of this makes functional verification of microprocessors with this architecture an extremely difficult technical task. An integral part of microprocessor verification is generation of test programs, i.e. programs in assembly language that cause various situations (exceptions, pipeline stalls, branch mispredictions, data evictions in caches, etc.). The article describes the requirements for industrial test program generators and presents a generator for microprocessors with the ARMv8 architecture developed with the help of MicroTESK (Microprocessor TEsting and Specification Kit). The generator supports an instruction subset typical for mobile applications (about 400 instructions) and consists of two main parts: (1) an architecture-independent core and (2) formal specifications of ARMv8 or, more precisely, a model automatically constructed on the basis of the formal specifications. With such a structure, developing a test program generator consists mainly of creating formal specifications, which saves effort by reusing the architecture-independent components. An architecture is described using the nML and mmuSL languages. The first describes the microprocessor registers and the syntax and semantics of the instructions. The second specifies the memory subsystem organization (address spaces, various buffers and tables, address translation algorithms, etc.). The article describes the characteristics of the developed generator and compares it with existing analogs.

Added: Nov 24, 2017
Article
Kamkin A. S., Chupilko M., Lebedev M. et al. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 4. P. 247-256.

Hardware testing is a process aimed at detecting manufacturing faults in integrated circuits. Two main metrics measure test quality: fault detection ability (fault coverage) and test application time (test length). Many algorithms have been suggested for test generation; however, no scalable solution exists. In this paper, we analyze the applicability of functional tests generated from high-level models to low-level manufacturing testing. A particular test generation method is considered. The input information is an HDL description. The key steps of the method are system model construction and coverage model construction. Both models are automatically extracted from the given description. The system model is a representation of the design in the form of high-level decision diagrams. The coverage model is a set of LTL formulae defining reachability conditions for the transitions of the extended finite state machine. The models are translated into the input format of a model checker. For each coverage model formula the model checker generates a counterexample, i.e. an execution that violates the formula (makes the corresponding transition fire). The approach is intended to cover all possible execution paths of the input HDL description and to detect dead code. Experimental comparison with existing analogs has shown that the approach produces shorter tests, but they achieve lower stuck-at fault coverage compared with the dedicated approach. An improvement has been proposed to overcome this issue.

Added: Nov 6, 2017
Article
Zakharov V. A., Novikova T. A. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 2. P. 245-268.
It is generally accepted that to unify a pair of substitutions θ_1 and θ_2 means to find a pair of substitutions η' and η'' such that the compositions θ_1 η' and θ_2 η'' are the same. In effect, unification is the problem of solving linear equations of the form θ_1 X = θ_2 Y in the semigroup of substitutions. Some other linear equations on substitutions may also be viewed as less common variants of the unification problem. In this paper we introduce two-sided unification as the process of bringing a given substitution θ_1 to another given substitution θ_2 from both sides, by solving an equation X θ_1 Y = θ_2. Two-sided unification finds applications in software refactoring as a means of extracting instances of library subroutines from arbitrary pieces of program code. In this paper we study the complexity of two-sided unification for substitutions and programs. In Section 1 we discuss the concept of unification on substitutions and programs, outline recent results on program equivalence checking that involve unification and anti-unification, and show that the problem of library subroutine extraction may be viewed as the problem of two-sided unification for programs. In Section 2 we formally define two-sided unification on substitutions and show that the two-sided unifiability problem is NP-complete (in contrast to the P-completeness of the one-sided unifiability problem). NP-hardness of the two-sided unifiability problem is established in Section 4 by reducing to it the bounded tiling problem, which is known to be NP-complete; the latter problem is formally defined in Section 3. In Section 5 we define two-sided unification of sequential programs in the first-order model of programs supplied with strong (logic & term) equivalence, and prove that this problem is NP-complete. This is the main result of the paper.
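For readers unfamiliar with the baseline notion the paper generalizes, here is a compact sketch of classical (one-sided) syntactic unification of first-order terms, Robinson-style (an illustration of the standard concept only, not the paper's two-sided algorithm; the occurs check is omitted for brevity):

```python
def walk(t, subst):
    """Follow variable bindings to the variable's current value."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Unify two terms; variables are strings starting with '?',
    compound terms are tuples (functor, arg1, ...).
    Returns a substitution dict, or None if unification fails."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(?x, g(a)) unifies with f(b, g(?y)) under {?x: b, ?y: a}
s = unify(("f", "?x", ("g", "a")), ("f", "b", ("g", "?y")))
print(s)   # {'?x': 'b', '?y': 'a'}
```

In the paper's terms, this computes a most general unifier for a pair of terms; two-sided unification instead seeks substitutions applied on both sides of a fixed θ_1, which changes the complexity from P to NP.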
Added: Sep 30, 2015
Article
Monakov A., Platonov V., Avetisyan A. I. Proceedings of the Institute for System Programming of the RAS. 2014. Vol. 26. No. 1. P. 357-374.

The article proposes methods for supporting the development of efficient programs for modern parallel architectures, including hybrid systems. First, specialized profiling methods designed for programmers tasked with parallelizing existing code are proposed. The first method is loop-based profiling via source-level instrumentation done with the Coccinelle tool. The second is memory reuse distance estimation via the virtual memory protection mechanism and manual instrumentation. The third is cache miss and false sharing estimation, performed by collecting a partial trace of memory accesses using compiler instrumentation and estimating cache behavior in postprocessing based on the trace and a cache model. Second, the problem of automatic parallel code generation for hybrid architectures is discussed. Our approach is to generate OpenCL code from parallel loop nests based on the GRAPHITE infrastructure in the GCC compiler. Finally, in cases where achieving high efficiency on hybrid systems requires significant rework of data structures or algorithms, one can employ auto-tuning to specialize for specific input data and hardware at run time. This is demonstrated on the problem of optimizing sparse matrix-vector multiplication for GPUs and its use for accelerating linear system solving in the OpenFOAM CFD package. We propose a variant of the “sliced ELLPACK” sparse matrix storage format with special treatment of small horizontal or diagonal blocks, where the exact parameters of the matrix structure and GPU kernel launch are automatically tuned at run time for the specific matrix and GPU hardware.
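The plain ELLPACK idea underlying the format above can be sketched as follows (a CPU-side illustration in Python; the paper's sliced variant additionally partitions rows into slices and tunes parameters per GPU, which is not shown here):

```python
import numpy as np

def to_ellpack(dense):
    """Pack a sparse matrix into ELLPACK form: every row is padded to
    the width of the longest row, giving regular, GPU-friendly arrays."""
    rows, _ = dense.shape
    width = max(int((row != 0).sum()) for row in dense)
    vals = np.zeros((rows, width))            # nonzero values, zero-padded
    cols = np.zeros((rows, width), dtype=int) # matching column indices
    for i, row in enumerate(dense):
        nz = np.flatnonzero(row)
        vals[i, :len(nz)] = row[nz]
        cols[i, :len(nz)] = nz
    return vals, cols

def ellpack_spmv(vals, cols, x):
    """y = A @ x on the packed form: padded zeros in vals contribute
    nothing, so the extra gathered x entries are harmless."""
    return (vals * x[cols]).sum(axis=1)

A = np.array([[4.0, 0, 1], [0, 2, 0], [0, 0, 3]])
x = np.array([1.0, 2.0, 3.0])
vals, cols = to_ellpack(A)
print(ellpack_spmv(vals, cols, x))   # [7. 4. 9.]
```

The regular row width is what maps well onto GPU threads; its downside, padding waste when row lengths vary, is exactly what slicing and per-matrix auto-tuning mitigate.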

Added: Mar 22, 2017
Article
Silakov D. V. Proceedings of the Institute for System Programming of the RAS. 2011. Vol. 20. P. 25-36.
Added: Sep 22, 2015
Article
Mallachiev K., Pakulin N. V., Khoroshilov A. V. et al. Proceedings of the Institute for System Programming of the RAS. 2017. Vol. 29. No. 4. P. 283-294.
Added: Nov 9, 2017
Article
Nedumov Ya. R., Kuznetsov S. D. Proceedings of the Institute for System Programming of the RAS. 2018. Vol. 30. No. 6. P. 171-198.

It is intuitively clear that the search for scientific publications often has many characteristics of exploratory search. The purpose of this paper is to formalize this intuitive understanding, explore which research tasks of scientists can be attributed to exploratory search, what approaches exist to solve the exploratory search problem in general, and how they are implemented in specialized search engines for scientists. We surveyed existing work on the information seeking behavior of scientists and on the special variant of search called exploratory search. There are several types of search typical for scientists, and we show that most of them are exploratory. Exploratory search differs from information retrieval and demands special support from search systems. We analyzed seventeen current search systems for academics (from Google Scholar, Scopus and Web of Science to ResearchGate) from the perspective of exploratory search support. We found that most of them do not go far beyond simple information retrieval, and there is room for further improvement, especially in collaborative search support.

Added: Mar 3, 2019
Article
Loukachevitch N. V., Alekseev A. Proceedings of the Institute for System Programming of the RAS. 2012. Vol. 23. P. 257-276.
Added: Jan 30, 2013
Article
Kuliamin V. V., Petrenko A. K., Khoroshilov A. V. Proceedings of the Institute for System Programming of the RAS. 2018. Vol. 30. No. 6. P. 367-382.
Added: Feb 14, 2019