This paper describes RPM5, a promising package management toolset aimed at managing the installation, update, and removal of software on the Linux operating system. The toolset uses the capabilities of modern libraries and hardware to make software handling easier for both developers and users.
The article gives a brief analysis of the main provisions of the SEMAT (Software Engineering Method and Theory) initiative on the reconstruction of a unified theory of software engineering. The initiative developed the OMG standard called Essence, which defines a kernel and a language for software engineering. The kernel represents the minimal set of entities that are involved in the process of creating a software system or are its results, and it introduces the relationships between these entities. The main reasons for the effectiveness of the proposed theory are explained.
The article describes the results of implementing test data generation tools that partition the values of the system's input parameters into equivalence classes in order to automate the complex functional testing of XML-message-driven information systems.
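The partitioning technique mentioned in the abstract can be illustrated with a toy example. This is a hypothetical sketch, not the paper's actual tool: it splits an integer parameter's valid range into standard equivalence classes (below range, boundaries, nominal, above range) and picks one representative test value per class.

```python
# Hypothetical illustration of equivalence-class-based test data generation.
# The paper's real tool and its XML-message handling are not reproduced here.
def equivalence_classes(low, high):
    """Representative test values for an integer parameter valid on [low, high]."""
    return {
        "below_range": low - 1,        # invalid class: just under the range
        "lower_bound": low,            # boundary value
        "nominal": (low + high) // 2,  # typical valid value
        "upper_bound": high,           # boundary value
        "above_range": high + 1,       # invalid class: just over the range
    }

# One test value per class for a parameter valid on [1, 100]:
test_values = equivalence_classes(1, 100)
print(test_values)
```

Each class stands in for all inputs the system is expected to treat identically, so one value per class suffices to cover the partition.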
This article presents comparative performance test results for basic operations on documents in a repository (IBM FileNet, Alfresco) performed through a CMIS implementation versus the native API. Some tests show a significant performance reduction when CMIS is used. Practical approaches to determining the limits of CMIS applicability are presented.
The article describes the object-attribute approach (OAA) to organizing computations in a dataflow computing system (CS) for creating language systems (compilation, interpretation, semantic analysis), using as an example the creation of a programming language for a program model of a dataflow supercomputer system. It is shown that the proposed approach has advantages over text analysis by means of a finite state machine: wider functionality; the possibility of parallelizing computations and implementing them on distributed CS; and the absence of the semantic gap between the programming language and the machine language that is inherent in modern computing systems with control-flow architecture. A possible application of the OA-approach to natural-language semantic analysis is also described.
The paper considers the problem of efficient use of the resources of multiprocessor computer systems (MCS) with MPP (Massively Parallel Processing) architecture for executing specific parallel algorithms. An integrated indicator of the efficiency of parallel program execution on a specific MCS is proposed, which takes into account both the hardware performance of the system and the quality of program parallelism.
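The abstract does not specify the proposed integrated indicator, but any such metric relates to the classical speedup/efficiency pair, which measures the quality of program parallelism alone:

```latex
% Classical definitions (a baseline, not necessarily the paper's exact metric).
% T_1 --- runtime on one processor, T_p --- runtime on p processors.
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p} \in (0, 1]
```

As the abstract suggests, an integrated indicator would additionally combine such a measure with the hardware performance characteristics of the specific MCS.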
Computer systems with a high level of security require a formal proof of security within the framework of some mathematical model. There exists a sufficiently large number of such models; most of them have either a graph nature or an automata nature. In some models security is decidable in all cases; however, there exist examples in which security is undecidable, so a need for additional constraints emerges. Another problem concerns the mutual expressibility of different security models (e.g., when two systems are merged into one). A possible way of such unification is embedding one system into another. An embedding is a mapping that satisfies three properties: injectivity, preservation of security/insecurity, and preservation of functionality. Our research is focused on the Concept-Based Access Control (CBAC) model introduced by Afonin and Bonushkina in 2019. This is a graph model with undecidable security. We constructively show that two classical security models, namely the take-grant and noninterference models, can be embedded in CBAC, and that the complexity of security validation in the original systems and in their CBAC images has the same order. Thus, CBAC is rich enough to naturally reflect the properties of both graph-based and automata-based models. Since security is decidable in take-grant and noninterference, the embeddings produce two new subclasses of CBAC systems with decidable security.
In this article we consider the problem of automated detection of malicious mobile applications and propose a method based on dynamic analysis of application code. The proposed method builds a dynamic model of the application represented by a graph of a special kind, whose vertices represent application states and whose edges represent transitions between states, marked with “input”-“reaction” pairs. An input can be a user action or a system event, and a reaction is the execution of some API call or sequence of actions. The built models are compared with basic malware models obtained in advance from a malware collection using hierarchical clustering. The results of model comparison, together with other characteristics including API-call-related information, form feature vectors which are then used for classification with machine learning algorithms. The best classification results were obtained with a gradient boosting algorithm: 85% of the malicious applications from the test set were classified correctly, while the false alarm rate on real applications from Google Play was 0.5%. The proposed method is suitable for use as one of the automated checks on the application marketplace side; it can also be used in corporate systems of the Mobile Device Management class. The built models are also valuable on their own and can be used as auxiliary structures for manual analysis of suspicious applications.
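The final classification step described in the abstract can be sketched as follows. This is a minimal illustration, assuming scikit-learn's `GradientBoostingClassifier` and synthetic stand-in feature vectors; the paper's actual features (model-comparison scores and API-call statistics) and its dataset are not reproduced here.

```python
# Hypothetical sketch: classifying apps by feature vectors with gradient
# boosting. Synthetic vectors stand in for the real model-comparison features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# 100 "malicious" and 100 "benign" samples with 5 synthetic features each,
# e.g. similarity scores to basic malware models plus API-call counts.
X_malware = rng.normal(loc=0.8, scale=0.1, size=(100, 5))
X_benign = rng.normal(loc=0.2, scale=0.1, size=(100, 5))
X = np.vstack([X_malware, X_benign])
y = np.array([1] * 100 + [0] * 100)  # 1 = malicious, 0 = benign

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the classifier would be trained and evaluated on disjoint sets, with the false-alarm rate measured on real marketplace applications, as the paper does.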
The article discusses the key problematic issues of requirements engineering in relation to projects implemented by industrial enterprises. An enterprise that leads the development, implementation, operation, and maintenance processes throughout the life cycle of complex technical systems is considered. The vital need to establish requirements engineering processes at a modern level within the framework of production cooperation is shown in the context of ensuring the integrity and consistency of project activities. The main problems caused by the absence of mature requirements engineering throughout the life cycle of a complex technical system are discussed, and the scope of such problems is estimated using the aviation industry as an example. It is shown that the key success factor in solving these problems is the creation of subdivisions responsible for forming various types of requirements and maintaining their integrity and traceability throughout the life cycle of a complex technical system, as well as comprehensive automation of these activities. The objectives, results, and content of the basic processes of requirements engineering, as well as the modern standards and best practices in this area, are discussed. In this context, a method for the eliciting, managing, and specification of requirements during the life cycle of the technical system is proposed. It is shown that this method makes it possible to ensure two-way traceability between the most important entities: the hierarchies of functional requirements and system requirements, the system architecture, and the identified configuration objects.
Integration issues of two software products, Metasonic Suite and Alfresco, are considered using the example of a subject-oriented approach to the development of a service registry and the attendant business process.
The article considers the various documentation types accompanying software and the documentation life cycle. Documentation tool requirements are described for the different document types. Documentation systems of two state-of-the-art types, CMS and Wiki, are considered, and the usage cost estimates for standalone and hosted versions of these systems are compared.
At the beginning of the paper, it is demonstrated that the technology of the most widely used SQL-oriented DBMSs is inextricably linked with HDD technology. Features of HDDs affect the data structures and the algorithms for performing operations, the methods of managing the DBMS buffer pool, transaction management, query optimization, etc. An alternative to a disk DBMS is an in-memory DBMS, which stores databases entirely in main memory. Although in-memory DBMSs have a number of advantages over disk DBMSs, at present they offer them practically no competition. This is due, first of all, to the natural limitations on database size inherent in in-memory DBMSs. New types of data storage hardware have now appeared: SSDs (block solid-state drives) and SCM (storage-class memory, i.e., non-volatile main memory). The characteristics of SSDs made it expedient to develop DBMSs designed for their exclusive use; however, no such DBMS has been created so far, and SSDs are simply used instead of HDDs in DBMSs that do not take their features into account. The availability of SCM allows one to radically simplify the architecture of a DBMS and significantly improve its performance. To do this, many of the ideas used in disk-based DBMSs need to be revisited.
We report an approach to the generation of parallel uncorrelated streams of pseudorandom numbers. We apply our method to a number of modern and reliable pseudorandom number generators and develop particular algorithms for the initialization of parallel streams. In particular, each of our GPGPU realizations can produce exactly the same output sequence as the original algorithm.
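The general idea of initializing independent parallel streams can be illustrated with a standard library mechanism. This sketch uses NumPy's `SeedSequence` spawning with the PCG64 generator as an assumed stand-in; the paper's own generators, initialization algorithms, and GPGPU code are not shown here.

```python
# Hypothetical sketch: statistically independent parallel PRNG streams via
# seed-sequence spawning (NumPy's recommended mechanism, used here only as
# an illustration of the parallel-streams idea).
import numpy as np
from numpy.random import Generator, PCG64, SeedSequence

root = SeedSequence(12345)
# spawn() derives child seeds that yield non-overlapping, uncorrelated
# streams, so each worker/GPU thread block can own one stream.
streams = [Generator(PCG64(child)) for child in root.spawn(4)]

samples = [g.random(3) for g in streams]
for i, s in enumerate(samples):
    print(f"stream {i}: {s}")
```

Reproducibility works the same way as in the paper's setting: re-spawning from the same root seed regenerates exactly the same per-stream sequences.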