Total publications in this section: 51
Article
Zakharov V. A., Volkanov D. Y., Zorin D. et al. Programming and Computer Software. 2015. Vol. 41. No. 6. P. 325-335.

Checking the correctness of distributed systems is one of the most difficult and pressing problems in software engineering. A combined toolset for the verification of real-time distributed systems (RTDS) is described. RTDSs are specified as statecharts in the Unified Modeling Language (UML). The semantics of statecharts is defined by means of hierarchical timed automata. The combined toolset consists of a UML statechart editor, a verification tool for model checking networks of real-time automata in UPPAAL, and a translator of UML statecharts into networks of timed automata. The focus is on the algorithm for translating UML statecharts into networks of hierarchical timed automata. To illustrate the proposed approach to the verification of RTDSs, a toy example of a real-time crossroad traffic control system is analyzed.
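
The translation's target formalism can be pictured with a few data structures. A minimal sketch, assuming hypothetical type names rather than the toolset's actual API:

    #include <string>
    #include <vector>

    // A (flattened) timed automaton of the kind model-checked by UPPAAL.
    // All names here are illustrative.
    struct Transition {
        int         source, target;  // location indices
        std::string guard;           // clock constraint, e.g. "x >= 5"
        std::string sync;            // channel label, e.g. "go!" or "go?"
        std::string reset;           // clock resets, e.g. "x := 0"
    };

    struct TimedAutomaton {
        std::vector<std::string> locations;   // one per statechart state
        std::vector<std::string> clocks;
        std::vector<Transition>  transitions;
        int                      initial = 0;
    };

    using Network = std::vector<TimedAutomaton>;  // a network of automata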

Added: Oct 13, 2015
Article
Turlapov V. E., Yusov E. Programming and Computer Software. 2008. Vol. 34. No. 5. P. 245-256.
Added: Nov 1, 2008
Article
Kalenkova A.A. Programming and Computer Software. 2012. Vol. 38. No. 1. P. 43-56.

This paper considers an algorithm for automatic workflow optimization that, unlike well-known workflow redesign algorithms [1, 2], can analyze arbitrary structures containing conditional branches and cycles. The algorithm operates on workflows without structural conflicts and uses execution conditions obtained by applying the Boolean verification algorithm (BVA) proposed earlier in [3]. A modified BVA is proposed, and its computational complexity is estimated.

Added: Jul 13, 2013
Article
Gaissaryan S. S., Nurmukhametov A., Kurmangaleev S. et al. Programming and Computer Software. 2015. Vol. 41. No. 4. P. 231-236.

Software vulnerabilities are a serious threat to the security of information systems. Any software written in C/C++ contains a considerable number of vulnerabilities, some of which can be used by attackers to seize control of the system. In this paper, to counteract such vulnerabilities, we propose to use compiler transformations: reordering of functions by permutation within a module, insertion of additional local variables into the function's stack, and hashing of local variables on the stack. By means of these transformations, it is suggested to generate a diversified population of executable files of the application being compiled. Such an approach complicates, for example, the planning of ROP attacks against the entire population: having obtained a single executable file, the attacker can create a ROP exploit that works only for this version of the application, while the other executable files of the population remain insensitive to this attack.
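
The first of the listed transformations is easy to picture in isolation. A minimal sketch, assuming a hypothetical diversify_layout helper; real implementations of course work inside the compiler:

    #include <algorithm>
    #include <random>
    #include <string>
    #include <vector>

    // Permute the emission order of functions within a module under a
    // per-build seed, so that every produced executable gets a different
    // code layout and a ROP chain built for one variant misses on others.
    std::vector<std::string> diversify_layout(std::vector<std::string> functions,
                                              unsigned seed) {
        std::mt19937 rng(seed);  // a fresh seed per build yields a new variant
        std::shuffle(functions.begin(), functions.end(), rng);
        return functions;        // emit machine code in this order
    }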

Added: Sep 8, 2016
Article
Kalenkova A. Programming and Computer Software. 2010. Vol. 36. No. 5. P. 276-288.
Added: Jul 13, 2013
Article
Petrovskiy M., Korolev V., Korchagin A. et al. Programming and Computer Software. 2018. Vol. 44. No. 5. P. 353-362.
Added: Dec 5, 2018
Article
Dworzanski L. W., Lomazova I. A. Programming and Computer Software. 2016. Vol. 42. No. 5. P. 292-306.

Multi-level multi-agent systems (MASs) with dynamic structure are widely used in solving important applied problems in telecommunication, transportation, social, and other systems. Therefore, ensuring the correct behavior of such systems is a relevant and important task. One of the most error-prone stages of system development within the model-oriented approach is the implementation stage, in which program code is constructed from the developed model. This paper presents an algorithm for the automated translation of MAS models represented as nested Petri nets into systems of distributed components. Nested Petri nets are an extension of Petri nets within the nets-within-nets approach, in which tokens of a Petri net may themselves be Petri nets, possess autonomous behavior, and interact with other tokens of the net; this makes it possible to model MASs with dynamic structure in a natural way. The translation presented in this paper preserves the distribution level and important behavioral properties (safety, liveness, and conditional liveness) of the original model and ensures fairness of the target system's execution. Such a translation makes it possible to automate the construction of distributed MASs from nested Petri net models. As a test example, the translation of nested Petri nets into systems of distributed components was implemented on the basis of the EJB component technology.
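
The nets-within-nets structure itself is compact. A minimal sketch of the data model, with hypothetical type names (transitions and synchronization omitted):

    #include <memory>
    #include <string>
    #include <vector>

    struct Net;  // forward declaration: tokens may carry nets

    struct Token {
        std::unique_ptr<Net> inner;  // null for an ordinary black token
    };

    struct Place {
        std::string        name;
        std::vector<Token> marking;  // tokens currently in this place
    };

    struct Net {
        std::vector<Place> places;
        // transitions, arcs, and synchronization labels omitted for brevity
    };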

Added: Sep 27, 2016
Article
Devyanin P., Kuliamin V., Petrenko A. K. et al. Programming and Computer Software. 2016. Vol. 42. No. 4. P. 198-205.

Decomposition is an important phase in the design of medium- and large-scale systems. Various architectures of software systems and decomposition methods have been studied in numerous publications. At present, formal specifications of software systems are mainly used for experimental purposes; for this reason, their size and complexity are relatively low. As a result, in the development of a nontrivial specification, different approaches to decomposition should be compared and the most suitable one chosen. In this paper, we analyze the experience gained in the deductive verification of the formal specification of the mandatory entity-role model of access and information flow control in Linux (MROSL DP-model) using the formal Event-B method and the stepwise refinement technique. Two approaches to the refinement-based decomposition of specifications are compared, and the sources and features of the complexity of the model's architecture are investigated.

Added: Sep 12, 2016
Article
Khoroshilov A. V., Mandrykin M. U., Mutilin V. S. et al. Programming and Computer Software. 2015. Vol. 41. No. 1. P. 49-64.

An operating system (OS) kernel is critical software with respect to reliability and efficiency. The quality of modern OS kernels is already rather high. However, this is not the case for kernel modules such as device drivers, which, for various reasons, have a significantly lower level of quality. Among the most critical and widespread bugs in kernel modules are violations of the rules for correct usage of the kernel API. One can find all such violations in modules, or prove their correctness, using static verification tools that need contract specifications describing the mutual obligations of the kernel and the modules. This paper reviews existing methods and toolsets for the static verification of kernel modules for different OSs. A new method for the static verification of Linux kernel modules is proposed that allows one to configure the verification process at all of its stages. It is shown how the method can be adapted for checking kernel components of other OSs. The architecture of a configurable toolset for the static verification of Linux kernel modules that implements the proposed method is described, and the results of its practical application are presented. Directions for further development of the proposed method are discussed in the conclusion.
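
A correct-usage rule of this kind can be expressed as a small monitor automaton over API events. A toy sketch (a static verifier checks such a rule on all paths; this merely replays one trace):

    #include <cstdio>
    #include <vector>

    enum class Event { Lock, Unlock };

    // Rule: the lock must not be acquired twice and must be released
    // before the module exits.
    bool check_trace(const std::vector<Event>& trace) {
        bool held = false;
        for (Event e : trace) {
            if (e == Event::Lock) {
                if (held) return false;   // double acquisition: violation
                held = true;
            } else {
                if (!held) return false;  // release without acquisition
                held = false;
            }
        }
        return !held;                     // still held at exit: violation
    }

    int main() {
        std::printf("%d\n", check_trace({Event::Lock, Event::Lock}));  // 0
    }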

Added: Nov 1, 2015
Article
Get’man I., Ivannikov V., Markin Y. et al. Programming and Computer Software. 2016. Vol. 42. No. 5. P. 316-322.

This paper proposes a new object model of data for the in-depth analysis of network traffic. In contrast to the model used by most modern network analyzers (for example, Wireshark and Snort), the proposed model supports data stream reassembly with subsequent parsing. The model also provides a convenient universal mechanism for binding parsers, thus making it possible to develop completely independent parsers. Moreover, the proposed model allows processing of modified (compressed or encrypted) data. This model forms the basis of the infrastructure for the in-depth analysis of network traffic.
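
The parser-binding mechanism can be sketched as follows, with hypothetical interfaces (the real model is richer, e.g. it handles decompression and decryption layers):

    #include <memory>
    #include <string>
    #include <vector>

    struct DataObject {
        std::string payload;               // reassembled stream bytes
        std::vector<DataObject> children;  // results of nested parsing
    };

    struct Parser {
        virtual ~Parser() = default;
        virtual bool accepts(const DataObject& d) const = 0;
        virtual DataObject parse(const DataObject& d) const = 0;
    };

    // Independently developed parsers are registered in one list; each
    // data object is offered to every parser until one accepts it, and
    // the analysis recurses into whatever that parser extracted.
    void analyze(DataObject& d, const std::vector<std::unique_ptr<Parser>>& parsers) {
        for (const auto& p : parsers)
            if (p->accepts(d)) {
                d.children.push_back(p->parse(d));
                analyze(d.children.back(), parsers);
                break;
            }
    }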

Added: Nov 13, 2016
Article
Luciv D., Koznov D., Terekhov A. et al. Programming and Computer Software. 2018. Vol. 44. No. 5. P. 335-343.

Contemporary software documentation is as complicated as the software itself. During its lifecycle, the documentation accumulates many "near-duplicate" fragments, i.e., chunks of text that were copied from a single source and later modified in different ways. Such near duplicates decrease documentation quality and thus hamper its further use. At the same time, they are hard to detect manually because of their fuzzy nature. In this paper, we give a formal definition of near duplicates and present an algorithm for their detection in software documents. The algorithm builds on exact software clone detection: the clone detection tool Clone Miner was adapted to detect exact duplicates in documents, and our algorithm then uses these exact duplicates to construct near ones. We evaluate the proposed algorithm on the documentation of 19 open source and commercial projects. Our evaluation is comprehensive, covering various documentation types: design and requirements specifications, programming guides, API documentation, and user manuals. Overall, the evaluation shows that all kinds of software documentation contain a significant number of both exact and near duplicates. We then report on a manual analysis of the near duplicates detected in the Linux Kernel Documentation. We present both quantitative and qualitative results of this analysis, demonstrate the algorithm's strengths and weaknesses, and discuss the benefits of duplicate management in software documents.
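
One plausible combining step can be sketched as follows; the assumption here (merge exact-clone occurrences separated by a small gap of differing text) is a simplification of the paper's actual rules:

    #include <algorithm>
    #include <vector>

    struct Span { int begin, end; };  // character range of a clone occurrence

    // Merge exact duplicates whose occurrences are separated by at most
    // max_gap characters into a single near-duplicate candidate.
    std::vector<Span> to_near_duplicates(std::vector<Span> exact, int max_gap) {
        std::sort(exact.begin(), exact.end(),
                  [](Span a, Span b) { return a.begin < b.begin; });
        std::vector<Span> near;
        for (Span s : exact) {
            if (!near.empty() && s.begin - near.back().end <= max_gap)
                near.back().end = std::max(near.back().end, s.end);
            else
                near.push_back(s);
        }
        return near;
    }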

Added: Mar 1, 2019
Article
Barash L., Guskova M., Shchur L. Programming and Computer Software. 2017. Vol. 43. No. 3. P. 145-160.

Using the RNGAVXLIB random number generator library as an example, this paper considers approaches to employing AVX vectorization for computational speedup. The RNGAVXLIB library contains AVX implementations of modern generators and routines allowing one to initialize up to 10^19 independent random number streams. The AVX implementations yield exactly the same pseudorandom sequences as the original algorithms do, while being up to 40 times faster than the ANSI C implementations.
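
The vectorization pattern is: run the same recurrence in all SIMD lanes at once. A minimal sketch with a toy LCG, which is not one of RNGAVXLIB's generators (compile with -mavx2):

    #include <cstdio>
    #include <immintrin.h>  // AVX2 intrinsics

    int main() {
        // Eight independent 32-bit LCG streams, one per AVX2 lane.
        __m256i state = _mm256_setr_epi32(1, 2, 3, 4, 5, 6, 7, 8);  // seeds
        const __m256i a = _mm256_set1_epi32(1664525);
        const __m256i c = _mm256_set1_epi32(1013904223);
        unsigned out[8];
        for (int i = 0; i < 4; ++i) {
            // x := a*x + c in every lane, with one multiply and one add
            state = _mm256_add_epi32(_mm256_mullo_epi32(state, a), c);
            _mm256_storeu_si256(reinterpret_cast<__m256i*>(out), state);
            std::printf("%u %u\n", out[0], out[7]);  // two of the lanes
        }
    }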

Added: Mar 24, 2017
Article
Lebedev P. A. Programming and Computer Software. 2016. Vol. 42. No. 3. P. 142-154.

In this work, we discuss the use of expression templates in light of the features of the modern C++ language standard. We define a corresponding evaluation model that reflects the properties of the implementation functions. We present an algorithm that optimally evaluates given expression templates and prove its optimality. In conclusion, we list the techniques needed to implement this algorithm on modern compilers.
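
The basic technique is easy to show in a few lines; the paper's evaluation model is far more general. A minimal sketch:

    #include <cstddef>
    #include <cstdio>
    #include <initializer_list>
    #include <vector>

    // a + b builds a lightweight expression object; the loop runs only
    // when the result is materialized, so no temporaries are allocated.
    struct Vec {
        std::vector<double> data;
        double operator[](std::size_t i) const { return data[i]; }
        std::size_t size() const { return data.size(); }
        Vec(std::initializer_list<double> l) : data(l) {}
        template <class Expr>
        Vec(const Expr& e) : data(e.size()) {        // single fused loop
            for (std::size_t i = 0; i < e.size(); ++i) data[i] = e[i];
        }
    };

    template <class L, class R>
    struct Add {
        const L& l; const R& r;
        double operator[](std::size_t i) const { return l[i] + r[i]; }
        std::size_t size() const { return l.size(); }
    };

    template <class L, class R>
    Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

    int main() {
        Vec a{1, 2}, b{3, 4};
        Vec c = a + (a + b);                 // evaluated lazily, one pass
        std::printf("%g %g\n", c[0], c[1]);  // 5 8
    }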

Added: Nov 25, 2014
Article
Efimov V. Yu., Batuzov K. A., Padaryan V. A. et al. Programming and Computer Software. 2016. Vol. 42. No. 3. P. 174-186.

Deterministic replay of the execution of virtual machines can be used for debugging, improving reliability and robustness, software development, and incident investigation (including reverse engineering of malware). The paper describes an implementation of deterministic replay for IA-32 guest machines in the QEMU emulator that minimizes the list of replayed devices. The organization of QEMU is discussed in detail, and the techniques used in the implementation are thoroughly explained. The key performance characteristics, such as the size of the log of nondeterministic events and the slowdown, are measured experimentally.
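
The core record/replay mechanism fits in a few lines. A toy sketch, where a single "device read" stands in for all of QEMU's nondeterministic inputs (interrupts, timers, DMA, and so on):

    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct Replayer {
        bool recording;
        std::vector<int> log;   // the log of nondeterministic events
        std::size_t pos = 0;

        int device_read() {
            if (recording) {
                int v = std::rand();  // stands in for real device input
                log.push_back(v);
                return v;
            }
            return log[pos++];        // replay: same values, same order
        }
    };

    int main() {
        Replayer rec{true, {}};
        int a = rec.device_read(), b = rec.device_read();
        Replayer rep{false, rec.log};  // replay run sees identical inputs
        std::printf("%d\n", a == rep.device_read() && b == rep.device_read());
    }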

Added: Mar 27, 2017
Article
Zobnin A. I. Programming and Computer Software. 2010. Vol. 36. No. 2. P. 75-82.

This survey paper presents a general approach to the well-known F5 algorithm for computing Gröbner bases, which was created by Faugère in 2002.
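
For orientation, a standard textbook example of the object being computed (not drawn from the paper): for the ideal I = (x^2 - y, x*y - 1) under lex order with x > y, the S-polynomial y*(x^2 - y) - x*(x*y - 1) = x - y^2 yields a new basis element; reducing the two generators by it gives y^4 - y and y^3 - 1, the first of which reduces to 0 by the second, so the reduced Gröbner basis is {x - y^2, y^3 - 1}. The key idea of F5 is to attach a signature to every polynomial produced during the computation and to use these signatures to discard S-polynomials that are guaranteed to reduce to zero, thereby avoiding most useless reductions.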

Added: Oct 1, 2014
Article
Khoroshilov A. V., Mandrykin M. U. Programming and Computer Software. 2015. Vol. 41. No. 4. P. 197-207.

The paper presents a target analyzable language used for the verification of real-world production GNU C programs (Linux kernel modules). The language is an extension of the existing intermediate language used by the Jessie plugin for the Frama-C static analysis framework. Compared to the original Jessie, the extension is fully compatible with the C semantics of arrays, supports discriminated unions and prefix (hierarchical) structure pointer casts out of the box, and provides limited but reasonable support for low-level pointer casts (reinterpretations of the underlying bytes of memory). The paper describes the approaches to translating the original C code into the analyzable intermediate language and the intermediate language into Why3ML, the input language of the Why3 deductive verification platform.
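
"Prefix (hierarchical) structure pointer cast" refers to a common C idiom, shown below as C-compatible C++; the names are illustrative:

    #include <cstdio>

    struct device     { int id; };
    struct net_device { device base; int mtu; };  // 'device' is a prefix

    void print_id(const device* d) { std::printf("id=%d\n", d->id); }

    int main() {
        net_device eth{{42}, 1500};
        // A pointer to the whole structure is viewed through a pointer to
        // its leading (prefix) substructure; the extended intermediate
        // language models exactly such casts.
        print_id(reinterpret_cast<const device*>(&eth));
    }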

Added: Oct 30, 2015
Article
Turdakov D. Yu., Pastukhov R., Korshunov A. et al. Programming and Computer Software. 2015. Vol. 41. No. 5. P. 302-306.

Graph partitioning is required for solving tasks on graphs that need to be distributed over disks or computers. This problem is well studied, but the majority of results on the subject are not suitable for processing graphs with billions of nodes on commodity clusters, since they require shared memory or low-latency messaging. One approach suitable for cluster computing is balanced label propagation, which is based on the label propagation algorithm. In this work, we show how multi-level optimization can be used to improve the quality of the partitioning obtained by means of the balanced label propagation algorithm.
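
One sweep of label propagation with a naive balance constraint can be sketched as follows; the paper's balanced variant and its multi-level scheme are more sophisticated:

    #include <algorithm>
    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    // Each node adopts its neighbors' most frequent partition label,
    // unless the target partition is already at capacity.
    void sweep(const std::vector<std::vector<int>>& adj,  // adjacency lists
               std::vector<int>& label,                   // partition per node
               std::vector<int>& load,                    // nodes per partition
               int capacity) {
        for (std::size_t v = 0; v < adj.size(); ++v) {
            std::unordered_map<int, int> votes;
            for (int u : adj[v]) ++votes[label[u]];
            auto best = std::max_element(votes.begin(), votes.end(),
                [](auto& a, auto& b) { return a.second < b.second; });
            if (best != votes.end() && best->first != label[v] &&
                load[best->first] < capacity) {
                --load[label[v]];
                ++load[best->first];
                label[v] = best->first;
            }
        }
    }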

Added: Sep 13, 2016
Article
Kuplyakov D., Shalnov E., Konushin A. Programming and Computer Software. 2017. Vol. 43. No. 4. P. 224-229.

The paper considers the problem of multiple-person tracking. We present an algorithm for the automatic tracking of people in surveillance videos recorded by static cameras. The proposed algorithm extends an approach based on tracking-by-detection of people's heads and data association using Markov chain Monte Carlo (MCMC). Short track fragments (tracklets) are built by local tracking of people's heads. Tracklet postprocessing and accurate interpolation of results are shown to reduce the number of false positives. We use the position deviations of tracklets and a revised entry/exit-point factor to separate pedestrians from false positives. The paper presents a new method for estimating body position that increases the precision of the tracker. Finally, we switched from a HOG-based detector to a cascade one. Our evaluation shows that the proposed modifications significantly increase tracking quality.
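
As a much-simplified stand-in for the data-association step, one can score a single link hypothesis between two tracklets; the paper samples over such hypotheses with MCMC rather than scoring them greedily:

    #include <cmath>

    struct TrackletEnd { double x, y; int frame; };  // head position + time

    // Cost of linking the tail of one tracklet to the head of another:
    // spatial distance normalized by the time gap.
    double link_cost(const TrackletEnd& tail, const TrackletEnd& head) {
        double dx = head.x - tail.x, dy = head.y - tail.y;
        int    dt = head.frame - tail.frame;
        if (dt <= 0) return 1e9;         // links cannot go back in time
        return std::hypot(dx, dy) / dt;  // velocity-like penalty
    }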

Added: Mar 14, 2018
Article
Turdakov D., Astrakhantsev N., Fedorenko D. Programming and Computer Software. 2015. Vol. 41. No. 6. P. 336-349.

Applications related to domain-specific text processing often use glossaries and ontologies, and the main step in constructing such resources is term recognition. This paper presents a survey of existing definitions of the term and its linguistic features, formulates the task definition for term recognition, and analyzes currently available methods for automatic term recognition, such as methods for candidate collection, methods based on statistics and contexts of term occurrences, methods using topic models, and methods based on external resources (such as text collections from other domains, ontologies, and Wikipedia). The paper also provides an overview of standard methodologies and datasets for experimental research.
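
The simplest candidate-collection step mentioned in the survey can be sketched as follows; real methods add linguistic filters (part-of-speech patterns) and statistical measures such as C-value:

    #include <map>
    #include <sstream>
    #include <string>

    // Collect two-word candidates and count their occurrences; frequent
    // candidates are then scored and filtered by the later stages.
    std::map<std::string, int> bigram_counts(const std::string& text) {
        std::istringstream in(text);
        std::map<std::string, int> counts;
        std::string prev, word;
        while (in >> word) {
            if (!prev.empty()) ++counts[prev + " " + word];
            prev = word;
        }
        return counts;
    }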

Added: Aug 26, 2016