The paper proposes a new method for facilitating knowledge exchange by finding relevant university experts to comment on current information events arising in the open environment of a modern economic cluster. The method is based on a new mathematical model for matching ontology concepts. At the formal core of the method we propose a new modification of Latent Dirichlet Allocation. The method and the mathematical model of ontology matching were validated in the form of a software solution: the newly designed decision support system EXPERTIZE. The system regularly monitors various text sources on the Internet, performs document analysis, and provides university employees with critical information about relevant events according to the developed matching algorithm. The proposed solution makes several contributions to the advancement of knowledge processing, including new modifications of the topic modeling method suitable for expert finding tasks and the integration of new algorithms with existing ontology services to demonstrate the feasibility of the solution.
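A minimal sketch of the matching step described above, assuming topic distributions have already been inferred (the LDA inference itself and all names and vectors here are invented for illustration; this is not the EXPERTIZE implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic-probability vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_experts(event_topics, expert_profiles):
    """Rank experts by similarity of their topic profile to the event's topics."""
    scored = [(name, cosine(event_topics, prof))
              for name, prof in expert_profiles.items()]
    return sorted(scored, key=lambda t: -t[1])

# Hypothetical topic distributions over 4 latent topics.
experts = {
    "expert_a": [0.7, 0.1, 0.1, 0.1],
    "expert_b": [0.1, 0.1, 0.7, 0.1],
}
event = [0.6, 0.2, 0.1, 0.1]
ranking = rank_experts(event, experts)
```

The expert whose topic profile is closest to the event's topic distribution is ranked first; any similarity measure over topic vectors could be substituted for cosine similarity.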
The term “pattern” refers to a combination of values of certain features such that objects with these feature values differ significantly from other objects. This concept is a useful tool for analyzing the behavior of objects both in statics and in dynamics. If panel data describing the functioning of objects over time is available, we can analyze the pattern-changing behavior of the objects and identify both objects well adapted to their environment and objects with unusual, alarming behavior. In this paper we apply static and dynamic pattern analysis to the innovative development of Russian regions in the long run and obtain a classification of regions by the similarity of the internal structure of their innovation indicators, as well as groups of regions pursuing similar strategies.
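The idea of a pattern as a combination of feature values, and of tracking pattern changes through panel data, can be sketched as follows (the discretization thresholds and region data are invented for the example, not the paper's actual indicators):

```python
def discretize(value, thresholds=(0.33, 0.66)):
    """Map a normalized indicator in [0, 1] to 'low' / 'mid' / 'high'."""
    if value < thresholds[0]:
        return "low"
    if value < thresholds[1]:
        return "mid"
    return "high"

def pattern_of(features):
    """A pattern is the tuple of discretized feature values of an object."""
    return tuple(discretize(v) for v in features)

def pattern_dynamics(panel):
    """panel: {region: [features_year1, features_year2, ...]}.
    Returns the sequence of patterns each region moves through over time."""
    return {region: [pattern_of(f) for f in years]
            for region, years in panel.items()}

panel = {
    "region_a": [[0.8, 0.7], [0.9, 0.8]],   # stable pattern across years
    "region_b": [[0.2, 0.8], [0.7, 0.2]],   # pattern switches between years
}
dyn = pattern_dynamics(panel)
```

A region with a stable pattern sequence would be classified as following a consistent strategy, while a region whose pattern changes may signal unusual or alarming behavior.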
In this paper we describe a methodology that allows researchers to measure empirically, in the form of well-defined indicators, the extent to which economic analysis and evidence is being applied in the enforcement of competition law, using data collected from the decisions of competition authorities. By mapping the values of these indicators to different legal standards, our methodology also allows one to identify the legal standards adopted in the assessment of the different conducts investigated by the authorities. The policy implications of empirical work in this area are potentially very important, since the extent to which economic analysis is applied in the assessment of anti-competitive conduct by competition authorities may well influence the quality of this assessment (i.e. the quality of enforcing competition law, measured by the extent to which decision errors and deterrence effects are minimised). Empirical analysis using the indicators can be used to undertake comparative analysis across different countries, to examine the extent to which authorities favour specific legal standards in the assessment of specific conducts, and to study the way in which the judicial review process treats decisions depending on the legal standard used.
Creating a process model (PM) is a convenient means of depicting the behavior of a particular information system. However, user behavior is not static and tends to change over time. For PMs to remain relevant, they have to be adjusted to this ever-changing behavior. Sometimes the existing PM is of high value (e.g. it is well-structured or has been continuously developed by experts), which makes building a brand-new model with discovery algorithms less preferable. In this case, a different and better-suited approach for adjusting a PM to new behavior is to work with the existing model, repairing only those PM fragments that do not fit the actual behavior recorded in the sub-log. This article presents a method for efficient decomposition of PMs for their subsequent repair, aimed at improving the accuracy of model repair. Unlike algorithms introduced earlier, ours finds the minimum spanning tree over a subset of the vertices of an undirected graph. This reduces the size of the fragment to be repaired and enhances the quality of the repaired model according to various conformance metrics.
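The core graph step, a minimum spanning tree over a vertex subset, can be sketched with Prim's algorithm on the induced subgraph (the graph, weights, and the choice of subset below are invented for illustration; how the paper selects the ill-fitting fragment is not shown here):

```python
import heapq

def mst_of_subset(edges, subset):
    """Prim's MST on the subgraph induced by `subset` of an undirected graph.
    edges: {(u, v): weight}. Assumes the induced subgraph is connected."""
    adj = {n: [] for n in subset}
    for (u, v), w in edges.items():
        if u in subset and v in subset:
            adj[u].append((w, u, v))
            adj[v].append((w, v, u))
    start = next(iter(subset))
    visited = {start}
    frontier = list(adj[start])
    heapq.heapify(frontier)
    tree = []
    while frontier and len(visited) < len(subset):
        w, u, v = heapq.heappop(frontier)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, w))
        for e in adj[v]:
            heapq.heappush(frontier, e)
    return tree

edges = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 4,
         ("c", "d"): 1, ("b", "d"): 5, ("d", "e"): 3}
subset = {"a", "b", "c", "d"}      # e.g. vertices of the ill-fitting fragment
tree = mst_of_subset(edges, subset)
```

Restricting the spanning tree to the fragment's vertices keeps the region marked for repair small, which is the intuition behind the reduced fragment size mentioned above.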
The proposed model is intended for assessing a company's operational effectiveness, an important factor in investment decision making. We compare indicators of growth rate, profitability, and risk for shares placed on various stock exchanges with an assessment of the intrinsic value and management efficiency of the company. The resulting information is useful for investors and company managers operating on stock markets.
We construct a mathematical model of anti-virus protection of local area networks. The model belongs to the class of regenerative processes. To protect the network from external virus attacks and from the spread of viruses within the network, we apply two methods: updating antivirus signatures and reinstalling operating systems (OS). Operating systems are reinstalled either upon failure of any of the computers (non-scheduled, emergency reinstallation) or at scheduled time moments. We consider the problem of maximizing the average unit income. The cumulative distribution function (CDF) of the scheduled intervals between complete OS reinstallations serves as the control. We prove that the optimal CDF must be degenerate, i.e., concentrated at a single point τ.
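A sketch of the standard renewal-reward argument behind such degeneracy results (the symbols below are ours, not necessarily the paper's): for a regenerative process, the long-run average unit income under a control CDF $F$ is a ratio of cycle expectations,

```latex
% c(t): expected net income per regeneration cycle when the scheduled
% interval equals t; \ell(t): expected cycle length.
\[
  w(F) \;=\; \frac{\int_0^\infty c(t)\, dF(t)}{\int_0^\infty \ell(t)\, dF(t)},
\]
% a linear-fractional functional of F. Such functionals attain their maximum
% at an extreme point of the set of CDFs, i.e. at a degenerate CDF
% concentrated at a single point \tau.
```

This is only the generic shape of the argument; the paper's actual proof may differ in detail.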
One of the approaches to the nearest neighbor search problem is to build a network whose nodes correspond to the given set of indexed objects. In this case, the search for the closest object can be thought of as the search for a node in the network. A procedure in a network is called decentralized if it uses only local information about the visited nodes and their neighbors. Networks whose structure allows a decentralized search procedure, started from any node, to perform nearest neighbor search efficiently are of particular interest, especially for purely distributed systems. Several algorithms that construct such networks have been proposed in the literature. However, the following questions arise: “Are there network models in which decentralized search can be performed faster?”; “What are the optimal networks for decentralized search?”; “What are their properties?”. In this paper we give partial answers to these questions. We propose a mathematical programming model for the problem of determining an optimal network structure for decentralized nearest neighbor search. We have found exact solutions for a regular lattice of size 4x4 and heuristic solutions for sizes from 5x5 to 7x7. As distance functions we use the L_1, L_2, and L_inf metrics. We hope that our results and the proposed model will initiate the study of optimal network structures for decentralized nearest neighbor search.
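For concreteness, decentralized (greedy) search on a plain 4x4 lattice with the L_1 metric looks like this (a baseline sketch only; the paper's optimized network structures would add or rewire links beyond the lattice edges shown here):

```python
def l1(p, q):
    """L_1 (Manhattan) distance between two lattice points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def neighbors(node, n):
    """Lattice neighbors of a node on an n x n grid."""
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < n and 0 <= ny < n:
            yield (nx, ny)

def decentralized_search(start, target, n, dist=l1):
    """Greedy routing: repeatedly move to the neighbor closest to the
    target, using only local information. Returns the visited path."""
    path = [start]
    current = start
    while current != target:
        nxt = min(neighbors(current, n), key=lambda v: dist(v, target))
        if dist(nxt, target) >= dist(current, target):
            break  # stuck in a local minimum of the distance
        path.append(nxt)
        current = nxt
    return path

path = decentralized_search((0, 0), (3, 3), 4)
```

On the bare lattice with L_1, greedy routing always reaches the target but needs a number of hops linear in the distance; the point of the optimization in the paper is to find structures on which such local procedures finish faster.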
We examine an equilibrium concept for 2-person non-cooperative games with boundedly rational agents, which we call Nash-2 equilibrium. It is weaker than Nash equilibrium and than equilibrium in secure strategies: a player takes into account not only the current strategies but also all profitable next-stage responses of the partner to her deviation from the current profile, which reduces her relevant choice set. We provide a condition for the existence of Nash-2 equilibrium in finite games and a complete characterization of Nash-2 equilibria in strictly competitive games. Nash-2 equilibria in the Hotelling price-setting game are found and interpreted in terms of tacit collusion.
It is commonly the case in multi-modal pattern recognition that certain modality-specific object features are missing in the training set. We address here the missing data problem for kernel-based Support Vector Machines, in which each modality is represented by the respective kernel matrix over the set of training objects, such that the omission of a modality for some object manifests itself as a blank in the modality-specific kernel matrix at the relevant position. We propose to fill the blank positions in the collection of training kernel matrices via a variant of the Neutral Point Substitution (NPS) method, where the term “neutral point” stands for the locus of points defined by the “neutral hyperplane” in the hypothetical linear space produced by the respective kernel. The current method crucially differs from the previously developed neutral point approach in that it is capable of treating missing data in the training set on the same basis as missing data in the test set. It is therefore of potentially much wider applicability. We evaluate the method on the Biosecure DS2 data set.
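To make the setting concrete, the sketch below shows a kernel matrix with blanks and fills them by naive mean imputation. This is deliberately a crude stand-in, NOT the NPS substitution the paper proposes, which places missing objects on the neutral hyperplane instead:

```python
def fill_kernel_blanks(K):
    """Replace None entries of a symmetric kernel matrix by the mean of
    the observed entries -- a naive baseline, not the NPS method."""
    n = len(K)
    observed = [K[i][j] for i in range(n) for j in range(n)
                if K[i][j] is not None]
    mean = sum(observed) / len(observed)
    return [[mean if K[i][j] is None else K[i][j] for j in range(n)]
            for i in range(n)]

# A 3-object training kernel for one modality; object 0 lacks this
# modality with respect to object 2, hence the symmetric blanks.
K = [
    [1.0, 0.5, None],
    [0.5, 1.0, 0.2],
    [None, 0.2, 1.0],
]
K_filled = fill_kernel_blanks(K)
```

Whatever the substitution rule, the filled matrix must stay symmetric so that it remains a valid kernel candidate for the SVM.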
The paper presents CrossMorphy, a recently developed open-source morphological processor for Russian texts. The processor performs lemmatization, morphological tagging of both dictionary and non-dictionary words, contextual and non-contextual morphological disambiguation, generation of word forms, and morphemic parsing of words. Besides the extended functionality, emphasis is put on the linguistic quality of word processing and easy integration into programming projects. CrossMorphy is fully implemented in C++ on the basis of the OpenCorpora vocabulary data.
The article is devoted to the analysis of Amos 7:14 in the context of prophetic rhetoric. What does it mean when a character in ancient Hebrew literature behaves like a prophet and at the same time denies being a prophet?
In this paper a new multi-agent genetic algorithm for multi-objective optimization (MAGAMO) is presented. The algorithm is based on the dynamic interaction of synchronized agents, which are interdependent genetic algorithms (GAs), each with its own separately evolving population. This approach has some similarities with the well-known “island model” of GAs: both use migration of individuals from the agents (“islands”) to the main process (the “continent”). In contrast to the standard island model, which merely distributes the initial population, the intelligent agents in MAGAMO decompose the dimension space to form the evolutions of their subpopulations. At the same time, the main (central) process is responsible only for coordinating the agents and for selection according to Pareto rules, without evolution of its own. The intelligent agents seek local suboptimal solutions for the global optimization, which is completed as a result of the interaction of all agents. Consequently, the number of required fitness-function recalculations can be significantly reduced, which is especially important for multi-objective optimization of large-scale problems. Moreover, the proposed approximating approach allows solving complex optimization problems for real large systems (such as an oil company, plants, or corporations).
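The island-model baseline that MAGAMO is contrasted with can be sketched in a few lines: islands evolve independently, and the central process periodically collects and redistributes the best individual (a single-objective toy with mutation only, invented for the example; MAGAMO's dimension-space decomposition and Pareto selection are not modeled here):

```python
import random

def evolve_island(pop, fitness, rng, steps=30):
    """One island: elitist mutation-only evolution of a fixed-size population."""
    for _ in range(steps):
        child = [g + rng.gauss(0, 0.1) for g in rng.choice(pop)]
        pop.append(child)
        pop.sort(key=fitness, reverse=True)
        pop.pop()                       # drop the worst individual
    return pop

def island_model(fitness, n_islands=2, pop_size=8, rounds=5, seed=0):
    """Islands evolve independently; after each round the central process
    broadcasts the globally best individual back to every island."""
    rng = random.Random(seed)
    islands = [[[rng.uniform(-1, 1)] for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(rounds):
        for pop in islands:
            evolve_island(pop, fitness, rng)
        best = max((pop[0] for pop in islands), key=fitness)
        for pop in islands:             # migration step
            pop[-1] = list(best)
    return max((pop[0] for pop in islands), key=fitness)

# Toy problem: maximize f(x) = -(x - 0.5)^2, optimum at x = 0.5.
best = island_model(lambda ind: -(ind[0] - 0.5) ** 2)
```

In MAGAMO, by contrast, the central process performs no evolution at all and each agent works on its own decomposition of the search dimensions.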
The paper presents a supervised machine learning experiment with multiple features for identification of sentences containing verbal metaphors in raw Russian text. We introduce the custom-created training dataset, describe the feature engineering techniques, and discuss the results. The following set of features is applied: distributional semantic features, lexical and morphosyntactic co-occurrence frequencies, flag words, quotation marks, and sentence length. We combine these features into models of varying complexity; the results of the experiment demonstrate that fairly simple models based on lexical, morphosyntactic and semantic features are able to produce competitive results.
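The surface features named above (flag words, quotation marks, sentence length) can be extracted straightforwardly; the flag-word list below is hypothetical, and the distributional and morphosyntactic features would be appended to the same feature dictionary:

```python
# Hypothetical flag words signaling figurative comparison (not the
# paper's actual list).
FLAG_WORDS = {"словно", "будто", "точно"}

def sentence_features(sentence, flag_words=FLAG_WORDS):
    """Extract simple surface features for metaphor-sentence classification."""
    tokens = sentence.split()
    return {
        "length": len(tokens),
        "has_quotes": int('"' in sentence or "«" in sentence),
        "flag_words": sum(1 for t in tokens
                          if t.lower().strip(",.") in flag_words),
    }

f = sentence_features("Время «течёт» словно река.")
```

Such dictionaries can then be vectorized and fed to any standard classifier; the paper's finding is that even models built from features this simple are competitive.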
Natural language texts (NL-texts) from newspapers, e-mail lists, various blogs, etc. are important sources of information capable of stimulating the elaboration of new plans of actions. The paper describes a new formal approach to developing multilingual algorithms of semantic-syntactic analysis of NL-texts. It is a part of the theory of K-representations, a new theory of designing semantic-syntactic analyzers of NL-texts with broad use of formal means for representing input, intermediary, and output data. The current version of the theory is set forth in a monograph published by Springer in 2010. One of the principal constituents of this theory is a complex, strongly structured algorithm SemSynt1 carrying out semantic-syntactic analysis of texts from some practically interesting sublanguages of English, German, and Russian. An important feature of this algorithm is that it does not construct any syntactic representation of the input NL-text but directly finds semantic relations between text units. Another distinguishing feature is that the algorithm is completely described by formal means; hence it is problem-independent and does not depend on any particular programming system. The peculiarities and some central procedures of the algorithm SemSynt1 are analyzed.
Using digital images as an example, we investigate properties of the discrete Fourier transform (DFT) used to embed information into the phase spectrum. The investigation led to a new steganographic algorithm that can be applied to uncompressed images. A peculiarity of the algorithm is the variable amount of information embedded into the blocks of the stego-image. The characteristics of the suggested algorithm are comparable with those of analogous ones and allow the embedded information to be extracted without errors.
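The general idea of phase-spectrum embedding can be illustrated on a one-dimensional block of pixel values (a generic sketch of phase embedding, not the paper's specific algorithm; the block values and coefficient choice are invented):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def embed_bit(block, bit, k=1):
    """Embed one bit into the phase of DFT coefficient k: phase +pi/2
    encodes 1, -pi/2 encodes 0. The magnitude is preserved, and the
    conjugate-symmetric coefficient keeps the signal real-valued."""
    X = dft(block)
    phase = cmath.pi / 2 if bit else -cmath.pi / 2
    X[k] = cmath.rect(abs(X[k]), phase)
    X[-k] = X[k].conjugate()
    return idft(X)

def extract_bit(block, k=1):
    """Recover the bit from the sign of the phase of coefficient k."""
    return 1 if cmath.phase(dft(block)[k]) > 0 else 0

block = [10.0, 12.0, 9.0, 11.0, 13.0, 8.0, 10.0, 12.0]
stego = embed_bit(block, 1)
```

Because the magnitude spectrum is untouched, the distortion of the cover block stays small, and the bit is recovered exactly from the phase sign.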
We present an efficient equivalence-checking algorithm for a propositional model of programs with semantics based on (what we call) progressive monoids over a finite set of statements, generated by relations of a specific form. We consider an arbitrary set of relations expressing commutativity (relations of the form ab = ba for statements a, b) and left absorption (relations of the form ab = b for statements a, b). The main results are polynomial-time decidability of the equivalence problem in the considered case and an explicit description of an equivalence-checking algorithm that terminates in time polynomial in the size of the programs.
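To illustrate how the two relation types act on statement sequences, the sketch below applies left-absorption deletions and commutativity-based sorting until a fixpoint. This is a naive heuristic normal form for intuition only, not the paper's polynomial-time decision procedure, and it is not guaranteed to be confluent for arbitrary relation sets:

```python
def normalize(word, commute, absorb):
    """Heuristic normal form: apply left absorption (a b -> b) and sort
    adjacent commuting letters lexicographically until no rule applies."""
    word = list(word)
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(word) - 1:
            a, b = word[i], word[i + 1]
            if (a, b) in absorb:
                del word[i]                      # a b -> b
                changed = True
                i = max(i - 1, 0)
            elif ((a, b) in commute or (b, a) in commute) and a > b:
                word[i], word[i + 1] = b, a      # a b -> b a
                changed = True
                i += 1
            else:
                i += 1
    return "".join(word)

commute = {("a", "b")}       # a and b commute
absorb = {("c", "a")}        # c is absorbed by a: c a -> a
# "bca" reduces to "ba" by absorption and then to "ab" by commutativity.
same = normalize("bca", commute, absorb) == normalize("ab", commute, absorb)
```

Each deletion shortens the word and each swap reduces the number of out-of-order commuting pairs, so the rewriting terminates; the hard part solved in the paper is doing the equivalence check correctly and in polynomial time for arbitrary relation sets of this form.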
The paper sets forth a new way of considering impressionism within the framework of cognitonics, a new scientific discipline aimed at compensating for the negative shifts in the cognitive-emotional development of personality and society caused by the rapid progress of information and communication technologies (ICT) and globalization processes. An original algorithm for transforming negative emotions (caused by messages received from social networks) into positive ones is proposed. The algorithm considers the possible reactions of a human (including the recommended reactions) to emotional attacks via social networks. A new look at impressionism underpins this algorithm. The algorithm is a part of an original interdisciplinary course “Foundations of Secure Living in the Information Society”.
Ramification in complete discrete valuation fields is studied. For the case of a perfect residue field, there is a well-developed theory of ramification groups. Hyodo introduced the concept of ramification depth associated with the different of an extension and obtained an inequality relating the ramification depth of a degree p^2 cyclotomic extension to the ramification depth of its degree p subextension. The paper gives a detailed consideration of the structure of degree p^2 extensions that can be obtained as a composite of two degree p extensions. In this case, the residue field is not required to be perfect. Using the concepts of wild and ferocious extensions and the defect of the principal unit, degree p^2 extensions are classified and more accurate estimates for the ramification depth are obtained. In a number of cases, exact formulas for the ramification depth are presented.