The paper presents the results of the cognitive modeling of the COMPUTER SCIENCE terminological system in the form of a thesaurus. The thesaurus comprises over 3000 units, which are drawn from explanatory monolingual and bilingual dictionaries of computer science terms representing the basic phenomena and processes in the professional context. Methodologically, the analysis is based on the frame model and focuses on semantic relations specific to the sphere of computer science in terms of ontological and epistemological features. The thesaurus facilitates the detailed description and effective arrangement of the terminological system characterized by a complicated hierarchical structure, and thus plays a crucial role in forming and developing professional competencies.
There exist a number of mathematical problems in the literature concerning warehouse storage optimization. However, to the best of our knowledge, the problem of finding optimal slot sizes in a pallet rack system that minimize the occupied storage space has not been studied. More precisely, the problem is to optimally choose k of m pallet racks with pallets in a warehouse, find optimal slot sizes in these pallet racks, and reassign the pallets to the new slots so as to minimize the occupied space. In this paper we suggest a dynamic programming heuristic that finds optimal slot sizes for the given number k of pallet racks in a warehouse. Our computational experiments are based on real-life data and demonstrate the efficiency of the suggested algorithm.
An integral financial stability index is constructed using Israeli macroeconomic data. Approaches relying on the use of a dependent variable, as well as the principal component method and its modifications, are examined. The obtained indices are compared in terms of their forecast quality. In the case of no dependent variable, a structural shift is analyzed.
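The principal-component approach mentioned above can be illustrated with a minimal sketch: standardize the indicator series and take the first principal component of their correlation matrix as the composite index. The data here is synthetic and the choice of five indicators is an assumption for illustration only, not the paper's actual series.

```python
import numpy as np

# Hypothetical stand-in for macroeconomic indicator series
# (rows: periods, cols: indicators); synthetic, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))

# Standardize each indicator, then use the first principal component
# of the correlation matrix as the composite stability index.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
w = eigvecs[:, -1]                        # loadings of the first PC
index = Z @ w                             # composite index, one value per period

# Share of total variance explained by the first PC
share = eigvals[-1] / eigvals.sum()
```

In practice the loadings would be estimated on the real indicator panel, and the sign of `w` is conventionally fixed so that higher index values mean higher stability.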
This article describes the development of a free/open-source morphological description of Maltese, originally created as the analysis component in a rule-based machine translation system for Maltese to Arabic and later applied to other tasks. The lexicon formalism we use is lttoolbox, part of the Apertium machine translation platform. An evaluation of the analyser shows that the coverage is adequate, at 84.90%, while precision is 92.5% on a large automatically annotated test set and 96.2% on a smaller hand-validated set.
This paper studies world education and patent activity data for the period 1979–2006 using the latest methods of pattern analysis: linear pattern classification and ordinal-invariant pattern clustering. An attempt is made to reflect the situation regarding primary, secondary and higher education in 37 countries.
Modern business requires agility in reflecting changes in a Service-oriented Enterprise Architecture (SoEA). For this purpose, Subject-oriented Business Process Management (S-BPM) is used to facilitate structured communication between process participants and process experts and the resulting generation of collaborative business processes. During such generation, all necessary requirements for supporting resources (such as information, know-how, intellectual and professional skills, inputs and outputs, quality and operational risk limitations, moderation, control and monitoring) are taken into consideration using the operational knowledge of experts and project participants acting inside the predefined business area. The work results in updating the business model by selecting business services from the virtual SOA torrent (which catches rated cloud services on the internet) and represents the basis for quickly adjustable “real-time” service-oriented enterprise architectures.
For a personnel selection problem, we define a new mathematical approach and develop a computer tool that finds an effective stable matching between a set of employers and a set of candidates. A characterization, the main components, and an application with its advantages are given.
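The abstract does not specify the matching algorithm; a standard baseline for the stable matching problem between employers and candidates is the employer-proposing Gale-Shapley procedure, sketched below. The preference dictionaries and the one-to-one setting are illustrative assumptions, not the authors' model.

```python
def stable_matching(emp_prefs, cand_prefs):
    """Employer-proposing Gale-Shapley; returns a dict employer -> candidate.

    emp_prefs / cand_prefs map each agent to an ordered preference list
    over the other side (most preferred first).
    """
    free = list(emp_prefs)                   # employers still unmatched
    next_choice = {e: 0 for e in emp_prefs}  # index of next candidate to propose to
    match = {}                               # candidate -> current employer
    # Precompute each candidate's ranking of employers for O(1) comparisons
    rank = {c: {e: i for i, e in enumerate(p)} for c, p in cand_prefs.items()}
    while free:
        e = free.pop()
        c = emp_prefs[e][next_choice[e]]
        next_choice[e] += 1
        if c not in match:
            match[c] = e                     # candidate accepts first proposal
        elif rank[c][e] < rank[c][match[c]]:
            free.append(match[c])            # candidate trades up; old employer freed
            match[c] = e
        else:
            free.append(e)                   # proposal rejected
    return {e: c for c, e in match.items()}
```

The resulting matching is stable: no employer-candidate pair would both prefer each other to their assigned partners.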
The possibility of using belief function theory for developing trading strategies is considered in this paper. An analysis of this approach is given on data from the Russian stock market. The belief and plausibility functions (and their corresponding bodies of evidence) for the system’s recommendations (buy, sell or hold) are calculated using fuzzy inference methods for technical indicators. These bodies of evidence are then aggregated using combining rules (Dempster’s rule, Yager’s rule and others). The discount coefficients of the bodies of evidence are calculated at the learning stage under the condition of maximizing the profitability of the trading strategy. Intervals for buying or selling assets are determined from the results of this combination, and the decision about the corresponding action is taken after comparing these intervals. The study shows that the proposed approach yields promising results.
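As an illustration of the aggregation step, the sketch below implements Dempster's rule of combination over the recommendation frame {buy, sell, hold}. The mass values in the usage example are invented for illustration; the paper's actual masses come from fuzzy inference over technical indicators.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule.

    Mass functions map frozenset focal elements to masses summing to 1.
    Conflicting mass (empty intersections) is redistributed by normalization.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

For example, combining two bodies of evidence that each put partial mass on {buy} and the rest on full ignorance (the whole frame) concentrates more mass on {buy} while shrinking the ignorance mass.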
We develop a model for estimating the default probabilities of banks in Russia, Belarus, Kazakhstan and Ukraine using banking statistics from 2005 to 2013. We find that a binary logit regression works best, and that macroeconomic and institutional factors significantly improve model accuracy. The results indicate that the sources of banking risk differ significantly across the considered banking systems. These results are useful for agents operating with CIS banks.
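A binary logit model of the kind described can be sketched as follows on synthetic data; the covariates, coefficients and sample are invented stand-ins for the paper's banking statistics, and the fit here uses plain gradient ascent on the log-likelihood rather than any particular estimation package.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical bank-level covariates (e.g. capital ratio, ROA, a macro factor);
# synthetic data, not the paper's dataset.
X = rng.normal(size=(n, 3))
true_beta = np.array([-1.5, -0.8, 0.9])            # assumed "true" effects
p_true = 1.0 / (1.0 + np.exp(-(X @ true_beta - 1.0)))
y = rng.binomial(1, p_true)                        # 1 = bank default

# Fit the binary logit by gradient ascent on the mean log-likelihood
Xb = np.hstack([np.ones((n, 1)), X])               # add intercept column
beta = np.zeros(4)
for _ in range(2000):
    mu = 1.0 / (1.0 + np.exp(-(Xb @ beta)))        # predicted default probability
    beta += 0.1 * Xb.T @ (y - mu) / n              # score-function step

pd_hat = 1.0 / (1.0 + np.exp(-(Xb @ beta)))        # estimated default probabilities
```

The estimated coefficients recover the signs of the assumed effects, and `pd_hat` gives a per-bank probability of default in (0, 1).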
We propose extensions of the classical JSM-method and the Naïve Bayesian classifier for the case of triadic relational data. We performed a series of experiments on various types of data (both real and synthetic) to estimate the quality of the classification techniques and compare them with other classification algorithms that generate hypotheses, e.g. ID3 and Random Forest. In addition to classification precision and recall, we also evaluated the time performance of the proposed methods.
In this paper we study the connectivity between leading universities and academic institutions in the area of computer science for Moscow and Saint Petersburg. The research is based on scientific publication data available from http://eLibrary.ru. The focus is restricted to scientific papers in IT-related areas, namely Informatics and Cybernetics, published during 2011–2013. We assume that two organizations are connected via co-authorship (a-linked) if there exists at least one paper which has employees from both organizations among its co-authors. In order to detect closely related yet non-collaborating organizations, the formal context organizations–publishers is studied. We say that two institutions are connected via publishers (p-linked) if the pair forms the extent of a formal concept.
In this paper we focus on two essential problems of maintenance decision support systems, namely, 1) detection of a potentially dangerous situation, and 2) classification of this situation in order to recommend an appropriate repair action. The former task is usually solved with known statistical process control techniques, while the latter can be reduced to the contextual multi-armed bandit problem. We propose a novel algorithm with Bayesian classification of abnormal situations and the softmax rule to explore the decision space. Dangerous situations are detected with Shewhart control charts for the distances between the current and the normal situations. It is experimentally shown that our algorithm is more accurate than known contextual multi-armed bandit methods with stochastic search strategies.
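The softmax (Boltzmann) exploration rule mentioned above can be sketched as follows; the action names and scores in the example are invented, and the temperature parameter is an illustrative choice rather than the paper's setting.

```python
import math
import random

def softmax_choice(scores, tau=1.0, rng=random):
    """Pick an action by softmax exploration over per-action scores.

    scores: dict action -> estimated value; tau: temperature (higher
    means more exploration). Returns (chosen action, action probabilities).
    """
    m = max(scores.values())                          # shift for numerical stability
    exp_s = {a: math.exp((s - m) / tau) for a, s in scores.items()}
    z = sum(exp_s.values())
    probs = {a: e / z for a, e in exp_s.items()}
    # Sample one action according to the softmax probabilities
    r, acc = rng.random(), 0.0
    for a, p in probs.items():
        acc += p
        if r <= acc:
            return a, probs
    return a, probs                                   # guard against rounding
```

As `tau` tends to 0 the rule becomes greedy; as `tau` grows it approaches uniform random exploration of the repair actions.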
Optimization modeling in science and industry requires the use of state-of-the-art software and high-performance computing resources. A common problem faced by researchers is how to integrate related software and leverage available computing resources in a distributed environment. The paper presents an approach for solving this problem based on unifying remote access to optimization software via RESTful web services and using AMPL (A Mathematical Programming Language) to describe scenarios of computing with optimization models. This approach is implemented in the AMPLX toolkit, which enables modifying any AMPL script to solve problems with a pool of distributed solvers. The toolkit is based on the Everest platform, which is used to expose optimization tools as web services and run them across distributed resources. The proposed approach and the AMPLX toolkit have been verified on a number of decomposition algorithms, including a branch-and-bound algorithm for a special nonlinear optimization problem.
This work is devoted to the development, improvement and application of double-layer interval weighted graphs (DLIG) for non-stationary time series forecasting. The model is intended as a universal and easy-to-use tool for modeling and forecasting non-stationary time series. We consider the double-layer version of the model because it is the most representative of the main idea, though several more layers can be added for different purposes. The first layer of the graph is based on empirical fluctuations of the system and displays the most likely fluctuations of the system at training time. The second layer, as a superstructure over the first, displays the degree of modeling error and is connected to the first-layer nodes by edges. The second layer implements supervised training with the aim of error minimization.
This paper is, to the best of the authors’ knowledge, the first to apply copula models to reconstructing the joint distribution of time charter rates for dry bulk ships. Based on the Clarksons dataset for the last 20 years, it is claimed that the Gumbel copula is sufficient for this purpose. To arrive at this conclusion, a dataset that is homogeneous in terms of the absence of copula structural shifts is used, and a system of criteria for copula selection based on goodness-of-forecast criteria is implemented. The evidence suggests that weekly returns of dry bulk time charter rates exhibit a symmetric distribution.
As an auxiliary output, the results of copula fits with and without accounting for time dynamics are compared. For conservative analysis (i.e. risk management), the approach disregarding time dynamics should be preferred, as it yields the fewest value-at-risk breaches. From the risk-budgeting perspective, the non-conservative approach (accounting for time dynamics) might be preferred, as it reflects the rapidly changing value-at-risk.
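For reference, the Gumbel copula used in the study has the closed form C(u, v) = exp(-((-ln u)^θ + (-ln v)^θ)^(1/θ)) with θ ≥ 1, which can be sketched directly; the parameter values below are illustrative, not those estimated from the Clarksons data.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF: C(u, v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta)).

    theta = 1 gives the independence copula; larger theta gives
    stronger upper-tail dependence.
    """
    if not (0 < u <= 1 and 0 < v <= 1) or theta < 1:
        raise ValueError("need u, v in (0, 1] and theta >= 1")
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def kendall_tau(theta):
    """Kendall's tau implied by a Gumbel copula: tau = 1 - 1/theta."""
    return 1.0 - 1.0 / theta
```

The Kendall's-tau relation gives a simple moment-style calibration: estimate tau from the rate-return pairs, then set θ = 1 / (1 - tau).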
Three situations associated with making decisions on investing in the development of a regional electrical grid are considered, and a game-theoretic approach to establishing the expediency of the investment in the grid and to finding optimal investment strategies for both the government and private investors in these situations is outlined. The implementation of this approach is associated with formulating and solving particular games on polyhedral sets of player strategies that are presented and discussed.
In this paper, indices for estimating conflict and the decrease of ignorance within the framework of Dempster-Shafer theory are introduced. These indices are analyzed on bodies of evidence of a special type. It is shown that a high correlation between the bodies of evidence is a sufficient condition for a decrease of ignorance after applying a combining rule.
This paper aims to present an alternative paradigm of financial risk to mitigate future financial crises. We argue that risk is not simply a feature of a financial product but a good in and of itself. Examining financial risk, we argue that it is most accurately typed as a common-pool good (particularly systemic risk), and so another approach to financial risk pricing is needed. We outline the basics of an ex-ante quasi-insurance fund to price financial risk. For more effective governance, risk-loving agents need to contribute to an ex-ante quasi-insurance fund. Insurance recipients would be risk-averse agents, who do not contribute, as they are forced to participate in systemic risk-taking against their preferences. Our approach to financial risk combines a microprudential regulatory framework with macroprudential supervision.