A new hierarchical model of decision making under multiple criteria that form a multilevel system is proposed. With this model, one can use the approaches and methods of the criteria importance theory to collect information about the importance of criteria and apply it to the correct analysis of practical multicriteria decision-making problems.
This paper addresses the problem of designing an attribute-based search subsystem when integrating an identity management (IDM) system with modern complex systems (e.g., enterprise resource planning (ERP) systems) that have granular access control. In integration solutions with a large number of roles and users, the context search built into IDM systems proves inadequate. We propose and implement a solution that changes how roles are described and searched for and present an approach to determining the optimal number of attributes required for an efficient search.
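As an illustration of the attribute-based approach, here is a minimal sketch in Python; the Role structure, the attribute names, and the matching rule are our own assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of attribute-based role search (illustrative only; the
# Role structure and attribute names are hypothetical, not from the paper).
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"system": "ERP"}

def search_roles(roles, **criteria):
    """Return roles whose attributes match every given key=value criterion."""
    return [r for r in roles
            if all(r.attributes.get(k) == v for k, v in criteria.items())]

roles = [
    Role("erp_fin_read",  {"system": "ERP", "module": "finance", "level": "read"}),
    Role("erp_fin_write", {"system": "ERP", "module": "finance", "level": "write"}),
    Role("erp_hr_read",   {"system": "ERP", "module": "hr",      "level": "read"}),
]

# Two attributes already single out one role here; adding attributes beyond
# the point where matches become unique only increases maintenance cost.
print([r.name for r in search_roles(roles, module="finance", level="write")])
# -> ['erp_fin_write']
```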
This article provides a comparative analysis of the chains of alternatives that emerge when multicriteria choice problems are solved by the methods of the criteria importance theory and by the even swap method, and it brings to light the essential difference between these chains.
A new approach to the multidimensional assessment of research organizations is introduced. Based on quantitative data, the approach considers various types of research and development results.
The problems of developing computer systems that perform intelligent analysis of empirical data in fields with weakly formalized knowledge are described. The JSM system for the analysis of nonquantitative sociological data is presented as an example of the implementation of such a system.
The need to transform existing algorithms in Big Data systems is considered. The transformation must allow separate fragments of data to be processed independently and in parallel. The characteristic aspects of a well-organized intermediate compact form of information and its natural algebraic properties are studied, and an illustrative example is provided.
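The following sketch illustrates the general idea for the simplest case of the sample mean: each fragment is reduced to a compact intermediate form whose combine operation is associative and commutative, so fragments can be processed independently, in any order, and in parallel. The (count, sum) form and the mean example are our choices for illustration; the paper's construction is more general.

```python
# Each fragment is mapped to a compact form (count, sum); the combine
# operation is associative and commutative, so any grouping or ordering of
# fragments yields the same result as processing all the data at once.
from functools import reduce

def summarize(fragment):          # map: fragment -> compact intermediate form
    return (len(fragment), sum(fragment))

def combine(a, b):                # associative, commutative merge
    return (a[0] + b[0], a[1] + b[1])

fragments = [[1.0, 2.0], [3.0], [4.0, 5.0, 6.0]]
n, s = reduce(combine, map(summarize, fragments))   # order does not matter
print(s / n)  # 3.5, identical to the mean of the pooled data
```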
This paper addresses the problem of transforming the optimal linear estimation procedure in such a way that separate fragments of the initial data are processed individually and concurrently. A representation of intermediate information is proposed that allows an algorithm to extract this information from each initial data set concurrently, combine it, and use it for estimation. It is shown that an ordering reflecting the concept of information quality is induced on the constructed information space.
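One standard realization of such a representation, shown below as a sketch, is the information form of least-squares estimation under the assumed model y = X theta + noise: each fragment (X_i, y_i) is compressed into the pair (X_i^T X_i, X_i^T y_i), and these pairs simply add up across fragments. The model and notation are our assumptions about one natural realization, not necessarily the paper's exact construction.

```python
# Least squares in information form: per-fragment pairs (X^T X, X^T y) are
# extracted concurrently and combined by summation, giving the same estimate
# as pooling all rows into a single regression.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])

def make_fragment(n):
    X = rng.normal(size=(n, 2))
    y = X @ theta_true + 0.1 * rng.normal(size=n)
    return X, y

fragments = [make_fragment(n) for n in (40, 60, 50)]

# Extract the compact form from each fragment (parallelizable step).
infos = [(X.T @ X, X.T @ y) for X, y in fragments]

# Combine by summation: the operation is associative and commutative.
G = sum(g for g, _ in infos)
b = sum(h for _, h in infos)

theta_hat = np.linalg.solve(G, b)   # matches the pooled-data estimate
print(theta_hat)                     # approximately [ 2.0, -1.0 ]
```

In this realization, the ordering mentioned in the abstract admits a natural reading as the Loewner (positive-semidefinite) order on the matrices X^T X: the larger matrix in this order corresponds to the more informative data set.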
The major approaches to solving the classical problems of simulation and the models of time are considered, viz., discrete-event and continuous modeling, as well as Monte Carlo modeling. Their main principles, advantages, shortcomings, and concrete realizations are discussed. On the basis of the conducted research, the place of the original software tool GIPS Ultimate among other software products for solving applied simulation problems is shown.
The procedure of transition from a priori to a posteriori information for a linear experiment in the context of Big Data systems is considered. At first glance, this process is fundamentally sequential: as a result of an observation, a priori information is transformed into a posteriori information, which is then treated as a priori information for the next observation, and so on. It is shown that this procedure can be parallelized and unified by transforming both the measurement results and the original a priori information into a special form. The properties of various forms of information representation are studied and compared. This approach makes it possible to scale the Bayesian estimation procedure effectively and thus adapt it to the processing of large amounts of distributed data.
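The sketch below shows why the sequential prior-to-posterior chain parallelizes once everything is rewritten in information form (precision matrix and precision-weighted mean). It assumes a Gaussian prior and a linear Gaussian observation model with known noise variance; the notation and this specific form are our assumptions about one natural realization, not necessarily the paper's.

```python
# Bayesian linear estimation in information form: the prior and every batch
# of measurements become additive information increments, so batches can be
# processed in parallel and merged in any order, with the same posterior as
# the strictly sequential update chain.
import numpy as np

rng = np.random.default_rng(1)
d = 2
theta_true = np.array([1.0, 3.0])
sigma2 = 0.01                       # known observation noise variance

# Prior N(m0, P0) rewritten as the information pair (Lam0, eta0).
P0, m0 = np.eye(d), np.zeros(d)
Lam0 = np.linalg.inv(P0)
eta0 = Lam0 @ m0

def make_batch(n):
    X = rng.normal(size=(n, d))
    y = X @ theta_true + np.sqrt(sigma2) * rng.normal(size=n)
    return X, y

batches = [make_batch(n) for n in (30, 30, 40)]

# Each batch contributes an additive information increment (parallel step).
increments = [(X.T @ X / sigma2, X.T @ y / sigma2) for X, y in batches]

Lam = Lam0 + sum(L for L, _ in increments)
eta = eta0 + sum(e for _, e in increments)
m_post = np.linalg.solve(Lam, eta)  # equals the result of sequential updates
print(m_post)                       # approximately [ 1.0, 3.0 ]
```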