The paper focuses on estimating the maturity of performance management systems, which are considered here, under one possible treatment, as systems for information support of corporate governance and strategic management. Such systems address the tasks of gathering, storing, analytically processing and presenting information that is critical for organizations’ information transparency and for strategic decision making by external and internal stakeholders. The purpose of this paper is to advance an approach to evaluating the maturity of such systems. To this end, we have reviewed existing approaches to maturity evaluation for management and information systems, formulated general principles of maturity evaluation, and developed methodological recommendations for evaluating the maturity of performance management systems. The proposed approach relies on a hierarchical conceptual model of the system that includes such elements as functional blocks, functional modules and analytical functions. A ‘bottom-up’ principle is applied: the maturity of higher-level elements (up to the system as a whole) is evaluated on the basis of estimates for the subordinate lower-level elements. In addition, every element is assessed from the viewpoints of data processing methods and processes, information systems, personnel, integration with complementary elements, data quality, effectiveness and governance. The advisability of evaluating maturity over time, as well as of comparison with certain target levels, is also justified.
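The ‘bottom-up’ aggregation described above can be sketched in a few lines. This is a hypothetical illustration only: the element names, the seven assessment dimensions and the simple averaging rule are assumptions for demonstration, not the authors’ actual scoring scheme.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Assumed assessment viewpoints (taken from the abstract's enumeration).
DIMENSIONS = ["methods", "systems", "personnel", "integration",
              "data_quality", "effectiveness", "governance"]

@dataclass
class Element:
    """A node in the hierarchical model: function, module, block, or system."""
    name: str
    scores: Dict[str, float] = field(default_factory=dict)  # leaf-level only
    children: List["Element"] = field(default_factory=list)

    def maturity(self) -> float:
        # 'Bottom-up': a higher-level element's maturity is derived from
        # its subordinate elements; leaves average their dimension scores.
        if self.children:
            return sum(c.maturity() for c in self.children) / len(self.children)
        return sum(self.scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

planning = Element("planning", {d: 3.0 for d in DIMENSIONS})
reporting = Element("reporting", {d: 4.0 for d in DIMENSIONS})
block = Element("functional block", children=[planning, reporting])
print(block.maturity())  # 3.5
```

A weighted mean or a minimum rule could equally be substituted at the aggregation step; the paper’s recommendations would determine the actual rule.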
The study identifies operational risks within the service-oriented architecture (SOA) of information systems. As part of this work, a new error classification scheme is proposed for SOA applications. It is based on errors of the information systems that act as service providers for applications with a service-oriented architecture. The proposed classification approach was used to classify system errors from two different enterprises (one in the oil and gas industry, one in metals and mining). In addition, we conducted a study to identify possible losses from operational risks and estimated the losses for each error group per day.
In this paper, we propose and implement a method for detecting intersecting and nested communities in graphs of interacting objects of different natures. Two classical algorithms are taken as a basis: a hierarchical agglomerative algorithm and one based on the search for k-cliques. The combined algorithm presented here is based on their sequential application. In addition, parametric options are developed that determine how to handle communities whose sizes are smaller than the given k, as well as single vertices. Varying these parameters makes it possible to take into account differences in the topology of the original graph and thus to tune the algorithm. Testing was carried out on real data, including a group of social network graphs, and the qualitative content of the resulting partition was investigated. To assess the differences between the integrated method and the classical community detection algorithms, a common similarity measure was used. The results clearly show that the resulting partitions differ significantly. We found that for the proposed approach the numerical measure of partitioning accuracy, modularity, can be lower than the corresponding value for other approaches. At the same time, the result of the integrated method is often more informative due to intersections and the nested community structure. A visualization of the partition obtained by the integrated method for one of the examples at the first and last steps is presented. Along with a successfully found set of parameters of the integrated method for small communities and cut-off vertices in the case of social networks, some shortcomings of the proposed model are noted. Proposals are made to develop this approach by using a set of parametric algorithms.
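The modularity score mentioned above is a standard quality measure for non-overlapping partitions (which is one reason it can penalize the intersecting communities the integrated method produces). A minimal sketch of Newman modularity for a disjoint partition of an undirected graph, written independently of the paper’s implementation:

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman modularity Q of a disjoint node partition on an undirected graph.

    Q = sum over communities c of (e_c / m) - (deg_c / (2m))^2,
    where e_c is the number of edges inside c and deg_c its total degree.
    """
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    node_comm = {n: i for i, c in enumerate(communities) for n in c}
    internal = defaultdict(int)  # edges fully inside one community
    for u, v in edges:
        if node_comm[u] == node_comm[v]:
            internal[node_comm[u]] += 1
    q = 0.0
    for i, c in enumerate(communities):
        deg_c = sum(degree[n] for n in c)
        q += internal[i] / m - (deg_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: the natural 2-community
# partition gets Q = 6/7 - 1/2 ≈ 0.357.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))
```

For overlapping or nested communities this formula does not apply directly, which is consistent with the abstract’s observation that modularity can understate the quality of the integrated method’s output.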
For a comprehensive evaluation of the individual work of software developers, this paper suggests a method of assigning priorities. It also describes approaches to the factors that affect the assessment of software developers’ work, the ranking of these factors, and the ranking of software developers by these factors. The suggested methodology can be used for drawing up management accounting regulations in order to improve the moral and material incentives of personnel, subject to the strategic and tactical goals of an organization.
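One common way to combine ranked factors into a single developer ranking is a weighted sum. The factors, weights and scores below are purely illustrative assumptions, not those defined in the paper:

```python
def rank_developers(scores, weights):
    """Rank developers by a weighted sum of factor scores (higher is better)."""
    totals = {
        dev: sum(weights[f] * s for f, s in factors.items())
        for dev, factors in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical factor weights reflecting their assigned priorities.
weights = {"code_quality": 0.5, "timeliness": 0.3, "complexity": 0.2}
scores = {
    "dev_a": {"code_quality": 4, "timeliness": 5, "complexity": 3},  # 4.1
    "dev_b": {"code_quality": 5, "timeliness": 3, "complexity": 4},  # 4.2
}
print(rank_developers(scores, weights))  # ['dev_b', 'dev_a']
```

The paper’s method of assigning priorities would supply the weights; the aggregation rule itself could also differ (e.g., lexicographic ordering by factor rank).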
The concept of innovative development of a regional mass-media complex (RMMC) in the conditions of the information society is proposed and substantiated. Conditions for the formation and development of a regional information-communication mass-media space (ICMMS) are studied. The task of creating an information-analytical web portal of the RMMC as a key system-forming element of the innovative infrastructure of the ICMMS is formulated. A methodological framework for constructing a portal-based intelligent system for managing the innovative development of the RMMC is considered.
An architecture and components are offered for designing an intelligent system of situational awareness and security for transportation and railroad infrastructure. The concept provides an event-based stream processing approach to process the primary data and to build a grid application for collecting and analysing railroad data from various sources. The proposed architecture supports real-time predictive analytics and decision support on very large data volumes, as well as an approach to dynamically optimizing timetables on railway infrastructure networks.
Many semantic text analysis problems employ string-to-text relevance measures. The research paper annotation problem is no exception. In general, research papers are annotated according to a system of topics organized as a taxonomy, a hierarchy of topics (or concepts). For example, papers published in journals of the Association for Computing Machinery (ACM), the most influential organization in the Computer Science world, are annotated according to the Computing Classification System taxonomy (ACM CCS). String-to-text relevance measures can be used to automate the research paper annotation procedure, since taxonomy topics are strings and research papers or any of their constituents are texts. A relevance measure maps a string–text pair to a real number. The meaning of the mapping depends on the relevance model under consideration. Under any model, the higher the relevance value, the stronger the association between the string and the text. This paper explores the use of phrase-to-text relevance measures to annotate research papers in Computer Science with key phrases taken from the ACM Computing Classification System. Three phrase-to-text relevance measures are experimentally compared in this setting: (a) the cosine relevance score between conventional vector space representations of the texts coded with tf-idf weighting; (b) BM25, a popular characteristic of the probability of “elite” term generation; and (c) CPAMF, a characteristic of the symbol conditional probability averaged over matching fragments in suffix trees representing texts and phrases, introduced by the authors. Our experiment is conducted over a set of texts published in journals of the ACM and manually annotated by their authors using topics from the ACM CCS. Applying any of the relevance measures to an article results in a list of taxonomy topics sorted in descending order of their relevance values.
The results are evaluated by comparing these sorted lists with the lists of topics assigned to the articles manually. The higher a manually assigned topic is placed in a relevance-based sorted list of topics, the more accurate the sorted list is. The accuracy of the computational annotations is scored using three different scoring functions: (a) MAP, (b) nDCG and (c) Intersection at k, where (a) and (b) are taken from the literature and (c) is introduced by the authors. It appears that CPAMF outperforms both the cosine measure and BM25 by a wide margin over all three scoring functions.
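The baseline measure (a) can be illustrated compactly. The abstract names only “tf-idf weighting”, so this sketch assumes one common variant (raw term frequency times log inverse document frequency); the paper’s exact weighting and tokenization may differ:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vectors for a small corpus of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    return [
        {t: tf * math.log(n / df[t]) for t, tf in Counter(d).items()}
        for d in docs
    ]

def cosine(a, b):
    """Cosine relevance score between two sparse tf-idf vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: a taxonomy phrase scored against two candidate texts.
docs = [["neural", "networks"],            # phrase
        ["neural", "databases"],           # text sharing one term
        ["graph", "databases"]]            # text sharing no terms
v = tfidf_vectors(docs)
print(cosine(v[0], v[1]), cosine(v[0], v[2]))
```

Sorting taxonomy topics by this score for a given article yields the relevance-based list that the MAP, nDCG and Intersection-at-k functions then compare against the manual annotation.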
The article analyzes existing methods and provides a solution to the problem of detecting logical contradictions in the business process models of a health care company. The practical purpose of solving this problem is to increase the efficiency of data management for municipal agencies as stakeholders of the company. The methodology is based on the formal tools of relational logic, with business processes described using the DEMO paradigm. The MIT Alloy Analyzer is essentially used as the simulator. The business processes of a specific organization are analyzed, and guidance on eliminating the contradictions is provided.
A method for solving the digraph distinction problem is offered. The method is based on a matrix model of complexity, which takes into account the quantitative and qualitative characteristics of digraph fragments. For the first time, the model makes it possible to calculate the contribution of each fragment of a digraph to its total complexity. Results of solving the problems of distinction and of determining similarity for digraphs are given.
The paper describes a new method of constructing semantic expansions of search requests (of a generalized character) for improving the results of Web search. This method is based on the theory of K-representations, a new theory of designing semantic-syntactic analyzers of natural language texts with broad use of formal means for representing input, intermediate, and output data. The stated approach is implemented in the Java programming language: an experimental search system AOS (Aspect Oriented Search) has been developed.