Total publications in this section: 135
Article
Гайсарян С. С., Самоваров О. Труды Института системного программирования РАН. 2014. Т. 26. № 1. С. 403-420.
Added: Sep 13, 2016
Article
Клеменков П. А., Кузнецов С. Д. Труды Института системного программирования РАН. 2012. Т. 23. С. 143-158.

Big data challenges traditional storage and analysis systems in several new ways. In this paper we try to figure out how to overcome these challenges, explain why this cannot be done efficiently with traditional tools, and describe three modern approaches to big data handling: NoSQL, MapReduce and real-time stream processing. The first section of the paper is the introduction. The second section discusses the main issues of Big Data: volume, diversity, velocity, and value. The third section describes different approaches to solving the problem of Big Data. Traditionally one might use a relational DBMS. The paper proposes some steps that allow an RDBMS to remain in use when its capacity becomes insufficient. Another way is to use a NoSQL approach. The basic ideas of the NoSQL approach are simplification, high throughput, and unlimited scaling out. Different kinds of NoSQL stores make such systems suitable for different Big Data applications. MapReduce and its free implementation Hadoop may be used to scale out Big Data analytics. Finally, several data management products support real-time stream processing of Big Data. The paper briefly overviews these products. The final section of the paper is the conclusion.
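To make the MapReduce idea mentioned in this abstract concrete, here is a minimal single-machine Python sketch of the map and reduce phases on a word-count task; the function names and sample data are illustrative assumptions, not anything from the paper or from Hadoop's API.

```python
from collections import defaultdict

def map_phase(documents):
    """Mapper: emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reducer: sum the counts that share the same key."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    docs = ["big data needs new tools", "new data tools scale out"]
    print(reduce_phase(map_phase(docs)))
```

In a real Hadoop deployment the map and reduce phases run in parallel on many nodes and the framework shuffles the intermediate pairs between them; the sketch only shows the data flow.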

Added: Oct 31, 2017
Article
Кулямин В. В., Лаврищева Е., Мутилин В. и др. Труды Института системного программирования РАН. 2016. Т. 28. № 3. С. 189-208.
Added: Aug 28, 2017
Article
Кошкарев А., Медведев А., Вишняков Ю. и др. Труды Института системного программирования РАН. 2012. Т. 23. С. 245-256.
Added: Jan 26, 2016
Article
Татарников А. Д., Камкин А. С., Проценко А. С. и др. Труды Института системного программирования РАН. 2016. Т. 28. № 6. С. 87-102.

ARM is a family of microprocessor instruction set architectures developed by the company of the same name. The newest architecture of this family, ARMv8, contains a large number of instructions of various types and is notable for its complex organization of virtual memory, which includes hardware support for multilevel address translation and virtualization. All of this makes functional verification of microprocessors with this architecture an extremely difficult technical task. An integral part of microprocessor verification is generation of test programs, i.e. programs in the assembly language that cause various situations (exceptions, pipeline stalls, branch mispredictions, data evictions in caches, etc.). The article describes the requirements for industrial test program generators and presents a generator for microprocessors with the ARMv8 architecture, which has been developed with the help of MicroTESK (Microprocessor TEsting and Specification Kit). The generator supports an instruction subset typical for mobile applications (about 400 instructions) and consists of two main parts: (1) an architecture-independent core and (2) formal specifications of ARMv8 or, more precisely, a model automatically constructed on the basis of the formal specifications. With such a structure, the development of a test program generator consists mainly in creating formal specifications, which saves effort by reusing the architecture-independent components. An architecture is described using the nML and mmuSL languages. The first one allows describing the microprocessor registers and the syntax and semantics of the instructions. The second one is used to specify the memory subsystem organization (address spaces, various buffers and tables, address translation algorithms, etc.). The article describes the characteristics of the developed generator and gives a comparison with existing analogs.
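To illustrate what a generated test program looks like in principle, the toy Python sketch below emits a random assembly-like sequence from a tiny, made-up instruction pool. It is not MicroTESK code, does not use nML or mmuSL, and the instruction templates and register ranges are assumptions chosen only for illustration.

```python
import random

# Hypothetical instruction templates; the real generator covers about 400 ARMv8 instructions.
TEMPLATES = [
    "ADD X{d}, X{n}, X{m}",
    "SUB X{d}, X{n}, X{m}",
    "LDR X{d}, [X{n}]",
    "STR X{d}, [X{n}]",
]

def generate_test(length=8, seed=0):
    """Return a random assembly-like test program with `length` instructions."""
    rng = random.Random(seed)
    lines = []
    for _ in range(length):
        template = rng.choice(TEMPLATES)
        lines.append(template.format(d=rng.randrange(31),
                                     n=rng.randrange(31),
                                     m=rng.randrange(31)))
    return "\n".join(lines)

if __name__ == "__main__":
    print(generate_test())
```

A real generator additionally biases the choice of instructions and operands so that the program provokes specific situations such as exceptions, cache evictions or address-translation corner cases.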

Added: Nov 24, 2017
Article
Камкин А. С., Чупилко М., Лебедев М. и др. Труды Института системного программирования РАН. 2017. Т. 29. № 4. С. 247-256.

Hardware testing is a process aimed at detecting manufacturing faults in integrated circuits. To measure test quality, two main metrics are in use: fault detection ability (fault coverage) and test application time (test length). Many algorithms have been suggested for test generation; however, no scalable solution exists. In this paper, we analyze the applicability of functional tests generated from high-level models to low-level manufacturing testing. A particular test generation method is considered. The input information is an HDL description. The key steps of the method are system model construction and coverage model construction. Both models are automatically extracted from the given description. The system model is a representation of the design in the form of high-level decision diagrams. The coverage model is a set of LTL formulae defining reachability conditions for the transitions of the extended finite state machine. The models are translated into the input format of a model checker. For each coverage model formula the model checker generates a counterexample, i.e. an execution that violates the formula (makes the corresponding transition fire). The approach is intended to cover all possible execution paths of the input HDL description and to detect dead code. Experimental comparison with existing analogues has shown that it produces shorter tests, but they achieve lower stuck-at fault coverage compared with the dedicated approach. An improvement has been proposed to overcome this issue.
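The sketch below loosely illustrates the coverage-model construction step: for each transition of a (hypothetical) extended finite state machine it prints an LTL property stating that the transition is never enabled, so a model checker's counterexample to that property is exactly a trace that fires the transition. The transition encoding and the formula syntax are assumptions for illustration, not the tool's actual input format.

```python
# Each transition is (source_state, guard). The property claims the guard never
# holds in that state; a counterexample therefore reaches and fires the transition.
TRANSITIONS = [
    ("IDLE", "start == 1"),
    ("BUSY", "count == limit"),
]

def coverage_formulas(transitions):
    """Emit one LTL reachability property per EFSM transition."""
    return [f"G !((state == {src}) & ({guard}))" for src, guard in transitions]

if __name__ == "__main__":
    for formula in coverage_formulas(TRANSITIONS):
        print(formula)
```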

Added: Nov 6, 2017
Article
Захаров В. А., Новикова Т. А. Труды Института системного программирования РАН. 2014. Т. 26. № 2. С. 245-268.
It is generally accepted that to unify a pair of substitutions θ_1 and θ_2 means to find a pair of substitutions η' and η'' such that the compositions θ_1 η' and θ_2 η'' are the same. Actually, unification is the problem of solving linear equations of the form θ_1 X=θ_2 Y in the semigroup of substitutions. But some other linear equations on substitutions may also be viewed as less common variants of the unification problem. In this paper we introduce two-sided unification as the process of bringing a given substitution θ_1 to another given substitution θ_2 from both sides by giving a solution to an equation Xθ_1 Y=θ_2. Two-sided unification finds some applications in software refactoring as a means for extracting instances of library subroutines in arbitrary pieces of program code. In this paper we study the complexity of two-sided unification for substitutions and programs. In Section 1 we discuss the concept of unification on substitutions and programs, outline recent results on program equivalence checking that involve the concepts of unification and anti-unification, and show that the problem of library subroutine extraction may be viewed as the problem of two-sided unification for programs. In Section 2 we formally define the concept of two-sided unification on substitutions and show that the two-sided unifiability problem is NP-complete (in contrast to the P-completeness of the one-sided unifiability problem). NP-hardness of the two-sided unifiability problem is established in Section 4 by reducing to it the bounded tiling problem, which is known to be NP-complete; the latter problem is formally defined in Section 3. In Section 5 we define two-sided unification of sequential programs in the first-order model of programs supplied with strong equivalence (logic&term equivalence) of programs and prove that this problem is NP-complete. This is the main result of the paper.
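A small worked instance of two-sided unification may help; the example below is ours, not the paper's, and assumes the usual convention that composition applies substitutions from left to right.

```latex
% Take \theta_1 = \{x \mapsto g(z)\} and \theta_2 = \{x \mapsto g(a),\ y \mapsto g(a),\ z \mapsto a\}.
% Then X = \{y \mapsto x\} and Y = \{z \mapsto a\} solve X\theta_1 Y = \theta_2:
\begin{align*}
x &\xrightarrow{X} x \xrightarrow{\theta_1} g(z) \xrightarrow{Y} g(a),\\
y &\xrightarrow{X} x \xrightarrow{\theta_1} g(z) \xrightarrow{Y} g(a),\\
z &\xrightarrow{X} z \xrightarrow{\theta_1} z \xrightarrow{Y} a,
\end{align*}
% which agrees with \theta_2 on every variable, while all other variables stay fixed.
```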
Added: Sep 30, 2015
Article
Монаков А., Платонов В., Аветисян А. И. Труды Института системного программирования РАН. 2014. Т. 26. № 1. С. 357-374.

The article proposes methods for supporting the development of efficient programs for modern parallel architectures, including hybrid systems. First, specialized profiling methods designed for programmers tasked with parallelizing existing code are proposed. The first method is loop-based profiling via source-level instrumentation done with the Coccinelle tool. The second method is memory reuse distance estimation via the virtual memory protection mechanism and manual instrumentation. The third method is cache miss and false sharing estimation by collecting a partial trace of memory accesses using compiler instrumentation and estimating cache behavior in postprocessing based on the trace and a cache model. Second, the problem of automatic parallel code generation for hybrid architectures is discussed. Our approach is to generate OpenCL code from parallel loop nests based on the GRAPHITE infrastructure in the GCC compiler. Finally, in cases where achieving high efficiency on hybrid systems requires significant rework of data structures or algorithms, one can employ auto-tuning to specialize for specific input data and hardware at run time. This is demonstrated on the problem of optimizing sparse matrix-vector multiplication for GPUs and its use for accelerating linear system solving in the OpenFOAM CFD package. We propose a variant of the “sliced ELLPACK” sparse matrix storage format with special treatment for small horizontal or diagonal blocks, where the exact parameters of the matrix structure and GPU kernel launch should be automatically tuned at run time for the specific matrix and GPU hardware.
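To give a flavour of the ELLPACK family of sparse formats mentioned in the last part of the abstract, here is a minimal NumPy sketch of plain (unsliced) ELLPACK storage and the corresponding matrix-vector product. The sliced variant with special block handling and run-time tuning described in the paper is considerably more elaborate; the code below is only an illustrative assumption of how the basic format works.

```python
import numpy as np

def to_ellpack(dense):
    """Pack a dense matrix into ELLPACK arrays: padded per-row values and column indices."""
    rows, _ = dense.shape
    width = max(int(np.count_nonzero(dense[r])) for r in range(rows))
    values = np.zeros((rows, width))
    cols = np.zeros((rows, width), dtype=int)
    for r in range(rows):
        nz = np.nonzero(dense[r])[0]
        values[r, :len(nz)] = dense[r, nz]
        cols[r, :len(nz)] = nz
    return values, cols

def ellpack_spmv(values, cols, x):
    """Compute y = A @ x from the padded ELLPACK representation."""
    # Padding entries hold the value 0, so they do not contribute to the sums.
    return (values * x[cols]).sum(axis=1)

if __name__ == "__main__":
    A = np.array([[4.0, 0.0, 1.0],
                  [0.0, 2.0, 0.0],
                  [3.0, 0.0, 5.0]])
    x = np.array([1.0, 2.0, 3.0])
    values, cols = to_ellpack(A)
    print(ellpack_spmv(values, cols, x))   # matches A @ x
```

On a GPU each row (or slice of rows) is handled by its own thread or warp, and the format's regular, padded layout is what enables coalesced memory access.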

Added: Mar 22, 2017
Article
Силаков Д. В. Труды Института системного программирования РАН. 2011. Т. 20. С. 25-36.
Added: Sep 22, 2015
Article
Маллачиев К., Пакулин Н. В., Хорошилов А. В. и др. Труды Института системного программирования РАН. 2017. Т. 29. № 4. С. 283-294.
Added: Nov 9, 2017
Article
Недумов Я. Р., Кузнецов С. Д. Труды Института системного программирования РАН. 2018. Т. 30. № 6. С. 171-198.

It is intuitively clear that the search for scientific publications often has many characteristics of a research search. The purpose of this paper is to formalize this intuitive understanding, to explore which research tasks of scientists can be attributed to research search, what approaches exist to solving the research search problem in general, and how they are implemented in specialized search engines for scientists. We reviewed existing work on the information-seeking behavior of scientists and on the special variant of search called exploratory search. There are several types of search typical for scientists, and we show that most of them are exploratory. Exploratory search differs from ordinary information retrieval and demands special support from search systems. We analyzed seventeen existing search systems for academics (from Google Scholar, Scopus and Web of Science to ResearchGate) from the point of view of exploratory search support. We found that most of them do not go far beyond simple information retrieval and that there is room for further improvement, especially in collaborative search support.

Added: Mar 3, 2019
Article
Лукашевич Н. В., Алексеев А. Труды Института системного программирования РАН. 2012. Т. 23. С. 257-276.
Added: Jan 30, 2013
Article
Кулямин В. В., Петренко А. К., Хорошилов А. В. Труды Института системного программирования РАН. 2018. Т. 30. № 6. С. 367-382.
Added: Feb 14, 2019
Article
Зеленова С., Зеленов С. В. Труды Института системного программирования РАН. 2017. Т. 29. № 6. С. 183-202.
Added: Feb 12, 2018
Article
Евтушенко Н. В., Ермаков А. Труды Института системного программирования РАН. 2016. Т. 28. № 3. С. 123-144.
Added: Aug 10, 2018
Article
Tvardovskii A., Евтушенко Н. В. Труды Института системного программирования РАН. 2018. Т. 30. № 4. С. 139-154.
Added: Oct 9, 2019
Article
Аничкин А., Семенов В. А. Труды Института системного программирования РАН. 2017. Т. 29. № 2. С. 231-256.
Added: Dec 12, 2018
Article
Нурмухаметов А., Жаботинский Е., Курмангалеев Ш. и др. Труды Института системного программирования РАН. 2017. Т. 29. № 6. С. 163-182.
Added: Feb 12, 2018
Article
Гомзин А. Г., Кузнецов С. Д. Труды Института системного программирования РАН. 2016. Т. 6. С. 171-184.

The work is devoted to methods of detecting the age of social network users. Social networks allow users to fill in their profiles, which may contain an age. Profiles are not always fully filled in, so the task of detecting unknown attributes arises. Explicit and predicted values are used in recommender and marketing systems. Moreover, the predicted values can be used for determining the demographic profiles of online communities and for inferring the target audience of marketing campaigns on the Internet. In this paper a method for predicting unfilled age values is proposed. The method uses the following information available from the social network: explicitly specified users' ages and the social graph. The graph contains nodes representing users and communities. A community is a special page that unites users by interests. Friendship relations between users and subscriptions of users to communities are represented as edges of the social graph. The method is based on label propagation in the friendship and subscription graphs. Ages of the users are represented by labels that are propagated through the graph. The scheme of the algorithm is as follows: initialize user labels according to explicit profile values; build a vector model that contains distributions of the neighbours' ages grouped by user age; compute weights of users and communities and propagate labels to communities; build a vector model considering the calculated weights; propagate labels to users that have not filled in their age in the profile. The paper describes the algorithm and contains experimental results showing that friendship relations are more useful for age prediction in the social network than communities.
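A heavily simplified Python sketch of the label-propagation idea from this abstract: ages known from filled profiles are repeatedly averaged over friendship edges to estimate the missing ones. The actual method also uses community subscriptions, per-node weights and age-distribution vectors; the graph and numbers below are made up for illustration.

```python
def propagate_ages(friends, known_ages, iterations=10):
    """Estimate missing ages by repeatedly averaging the ages of neighbours."""
    ages = dict(known_ages)
    for _ in range(iterations):
        updated = dict(ages)
        for user, neighbours in friends.items():
            if user in known_ages:
                continue  # explicitly filled profile values stay fixed
            neighbour_ages = [ages[n] for n in neighbours if n in ages]
            if neighbour_ages:
                updated[user] = sum(neighbour_ages) / len(neighbour_ages)
        ages = updated
    return ages

if __name__ == "__main__":
    friends = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
    known = {"a": 20, "b": 30}
    print(propagate_ages(friends, known))  # estimates ages for "c" and "d"
```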

Added: Jan 25, 2018
Article
Аветисян А. И., Богомолов И., Алексиянц А. В. и др. Труды Института системного программирования РАН. 2015. Т. 27. № 5. С. 49-58.

Nowadays the OpenStack platform is a leading solution in the cloud computing field. Keystone, the OpenStack Identity Service, is one of its major components. This service is responsible for authentication and authorization of users and services of the system. Keystone is a high-load service since all interactions between services happen through it. This leads to the following problem: increasing the number of cloud service users results in significant load growth. In this paper we demonstrate the problem of Keystone performance degradation under constant load. In order to find the source of the problem we have tested Keystone with different backends (PostgreSQL, MariaDB), frontends (Apache2, nginx) and with the database kept on different hardware (HDD, SSD and tmpfs in RAM). Using different configuration sets is necessary because it helps us narrow down the possible causes of the degradation. Tests were conducted with Rally. As a result, in all test cases we observed unreasonably quick degradation under relatively light load. We have also implemented a mock service that performs the simplest Keystone tasks. Our service turned out to be much faster than Keystone. The problem with Keystone might be related either to its internal logic implementation or to incorrect interaction with other components; this is the subject of further research.
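As a rough sketch of the kind of constant-load measurement described here (the paper's actual experiments were driven by Rally against Keystone), the hypothetical Python snippet below sends authentication requests at a fixed pace and compares response times between the first and second half of the run; the URL and payload are placeholders, not a real Keystone API call.

```python
import statistics
import time

import requests

AUTH_URL = "http://example.test/auth"           # placeholder endpoint, not real Keystone
PAYLOAD = {"user": "demo", "password": "demo"}  # placeholder credentials

def constant_load(duration_s=60, delay_s=0.1):
    """Issue requests at a fixed pace and record per-request latencies."""
    latencies = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        start = time.time()
        requests.post(AUTH_URL, json=PAYLOAD, timeout=10)
        latencies.append(time.time() - start)
        time.sleep(delay_s)
    return latencies

if __name__ == "__main__":
    lat = constant_load()
    half = len(lat) // 2
    print("first half mean latency:", statistics.mean(lat[:half]))
    print("second half mean latency:", statistics.mean(lat[half:]))
```

If the second-half mean is consistently higher than the first-half mean while the request rate stays constant, the service is degrading under load, which is the effect the paper reports for Keystone.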

Added: Mar 22, 2017
Article
Лаврищева Е. М., Зеленов С. В., Пакулин Н. В. Труды Института системного программирования РАН. 2019. Т. 31. № 5. С. 95-108.
Added: Mar 11, 2020