This paper presents an alternative paradigm of financial risk aimed at mitigating future financial crises. We argue that risk is not simply a feature of a financial product but a good in and of itself. Examining financial risk, we argue that it is most accurately typed as a common-pool good (particularly systemic risk), so another approach to pricing financial risk is needed. We outline the basics of an ex-ante quasi-insurance fund for pricing financial risk. For more effective governance, risk-loving agents would contribute to this ex-ante quasi-insurance fund. The insurance recipients would be risk-averse agents, who do not contribute, as they are forced to participate in systemic risk-taking against their preferences. Our approach to financial risk combines a microprudential regulatory framework with macroprudential supervision.
We consider a part of a country’s electrical grid in which an electricity generator (which may consist of several base-load power plants and several peaking power plants) supplies electricity to a set of large grid customers, where the customers can a) receive electricity from renewable energy sources, b) store electricity in certain volumes, and c) buy electricity in the markets. It is proposed to describe the interaction of the generator, the large grid customers, and the transmission company (under uncertainty of the customer demand for electricity) by a game with a finite number of players (more than three) on polyhedra of player strategies, some of which are connected and thus cannot be chosen by the players independently of each other. Sufficient conditions for the game equilibria, verifiable by solving three linear programming problems, are proposed; in particular, the equilibria determine optimal production and selling strategies for the generator.
Single-shot grid-based path finding is an important problem with applications in robotics, video games, etc. Typically, in the AI community, heuristic search methods (based on A* and its variations) are used to solve it. In this work we present the results of preliminary studies on how neural networks can be applied to path planning on square grids, i.e. how well they can cope with path-finding tasks on their own within the well-known reinforcement learning problem statement. The conducted experiments show that an agent using the neural Q-learning algorithm robustly learns to achieve the goal on small maps and demonstrates promising results on maps it has never seen before.
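As an illustration of the reinforcement learning problem statement used above, here is a minimal sketch of *tabular* (rather than neural) Q-learning on a tiny square grid; the grid size, reward shaping, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import random

# Tabular Q-learning on a tiny 4x4 grid: the agent starts at (0, 0) and
# must reach GOAL.  Rewards and hyperparameters below are illustrative.
SIZE, GOAL = 4, (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE):   # bumping a wall: stay put
        r, c = state
    reward = 1.0 if (r, c) == GOAL else -0.04   # small per-step penalty
    return (r, c), reward, (r, c) == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * 4 for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            # epsilon-greedy action selection
            a = rng.randrange(4) if rng.random() < eps \
                else max(range(4), key=lambda i: Q[s][i])
            s2, rew, done = step(s, ACTIONS[a])
            # standard Q-learning update
            Q[s][a] += alpha * (rew + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, start=(0, 0), limit=20):
    """Follow the learned greedy policy from `start` (at most `limit` states)."""
    path, s = [start], start
    while s != GOAL and len(path) < limit:
        s, _, _ = step(s, ACTIONS[max(range(4), key=lambda i: Q[s][i])])
        path.append(s)
    return path
```

After training, the greedy policy reliably reaches the goal on this toy map; a neural variant would replace the table `Q` with a function approximator over state features.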
In this paper we suggest a multi-start greedy heuristic for a real-life truck and trailer routing problem. The considered problem is a site-dependent heterogeneous fleet truck and trailer routing problem with soft and hard time windows and split deliveries. This problem arises in delivering goods from a warehouse to the stores of a big retail company: there are about 400 stores and 100 vehicles per warehouse. Our heuristic is based on the sequential greedy insertion of customers into routes, with further improvement of the solution. The computational experiments are performed on real-life data. We also provide a mixed integer linear programming formulation for a precise and clear description of the problem.
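The core of greedy insertion can be sketched as follows: each customer is placed at the cheapest feasible position over all current routes. This is a minimal single-depot, capacity-only sketch; the paper's heuristic additionally handles trailers, site dependence, time windows, and split deliveries.

```python
import math

def dist(a, b):
    # Euclidean distance between two points (x, y); illustrative metric.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(route, depot):
    # Total length of depot -> customers -> depot.
    pts = [depot] + route + [depot]
    return sum(dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

def insert_customer(routes, customer, demand, loads, capacity, depot):
    """Insert `customer` at the cheapest capacity-feasible position over
    all routes; open a new route if no feasible position exists."""
    best = None  # (extra_cost, route_index, position)
    for k, route in enumerate(routes):
        if loads[k] + demand > capacity:
            continue
        for pos in range(len(route) + 1):
            trial = route[:pos] + [customer] + route[pos:]
            extra = route_cost(trial, depot) - route_cost(route, depot)
            if best is None or extra < best[0]:
                best = (extra, k, pos)
    if best is None:
        routes.append([customer])
        loads.append(demand)
    else:
        _, k, pos = best
        routes[k].insert(pos, customer)
        loads[k] += demand
```

A multi-start variant would repeat this construction from different customer orderings and keep the best solution found.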
This paper examines a coarse-grained approach to the parallelization of the branch-and-bound (B&B) algorithm in a distributed computing environment. The approach is based on a preliminary decomposition of the feasible domain of a mixed-integer programming problem into a set of subproblems. The produced subproblems are solved in parallel by a distributed pool of standalone B&B solvers. The incumbent values found by individual solvers are intercepted and propagated to the other solvers to speed up the traversal of the B&B search tree. The presented implementation of the approach is based on SCIP, a non-commercial MINLP solver, and Everest, a web-based distributed computing platform. The implementation was tested on several mixed-integer programming problems, and a noticeable speedup was achieved. In the paper, the results of a number of experiments with the Traveling Salesman Problem are presented.
There are many usefulness measures for patterns in data mining. This paper focuses on the measures used in Formal Concept Analysis (FCA); in particular, concept stability is a popular relevancy measure in FCA. The experimental results of this paper show that high stability of a pattern in a given dataset drawn from the general population suggests that the stability of that pattern is also high in another dataset drawn from the same population. In the second part of the paper, a new estimate of stability is introduced and studied. Its performance is evaluated experimentally, and it is shown to be more efficient.
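For reference, concept stability is the share of subsets of a concept's extent whose common attributes equal the concept's intent. The brute-force sketch below illustrates this definition on a toy formal context (the context itself is an illustrative assumption); practical estimators avoid the exponential enumeration.

```python
from itertools import chain, combinations

def intent(objs, context, attributes):
    """Common attributes of a set of objects; the empty set maps to all attributes."""
    result = set(attributes)
    for o in objs:
        result &= context[o]
    return result

def stability(extent, concept_intent, context, attributes):
    """Fraction of subsets of `extent` whose intent equals `concept_intent`."""
    subsets = chain.from_iterable(
        combinations(extent, r) for r in range(len(extent) + 1))
    hits = total = 0
    for s in subsets:
        total += 1
        if intent(s, context, attributes) == concept_intent:
            hits += 1
    return hits / total
```

On a context with objects g1, g2 sharing {a, b} and g3 having only {a}, the concept ({g1, g2, g3}, {a}) has stability 0.5: only subsets containing g3 pin the intent down to {a}.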
The construction of linguistic summaries of time series is a topic that has drawn the attention of researchers for many years. This paper presents a full-fledged software implementation (a pilot web application) that supports the complete process of constructing linguistic summaries of time series. The program can be used in professional groups for discussions and rapid data analysis. A virtual mobile crash reporting system (MCRS) supplies the test input data used as an example.
Aleskerov et al. estimated the degree of manipulability for the case of multi-valued choice (without using any tie-breaking rule) and for Impartial Culture (IC). In our paper, we address a similar question for multi-valued choice and for Impartial Anonymous Culture (IAC). We use Nitzan-Kelly's (NK) index to estimate the degree of manipulability, which is calculated as the share of all manipulable voting situations, and calculate the indices for 3 alternatives and up to 10000 voters. We have found that for the case of 3 alternatives Nanson's procedure shows the best results. Hare's procedure shows close, but slightly higher, results. The worst aggregation procedure in terms of manipulability is the Plurality rule. Additionally, it turned out that the NK indices for IAC are smaller than the NK indices for IC.
We investigate models of two-sided matching markets without transfers. Examples of such markets include the marriage market, the universities-applicants market, and others. Gale and Shapley first introduced this class of problems in the literature in 1962. They considered one-to-one and one-to-many markets, where the preferences of individuals on one side over individuals on the other side were strict.
In this paper we analyze a modification of the classical Gale-Shapley admission problem, where the preferences of universities are considered to be interval orders. An interval order allows a specific form of indifference in the preference relation. Imagine that each alternative is described by an interval [l, u]; one alternative dominates another if and only if the intervals do not overlap and the lower bound of the first interval is greater than the upper bound of the second. Preferences with this property may occur when the applicants' scoring system (an interview, an exam, or a sum of points) is not perfectly accurate.
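The dominance relation just described can be sketched in a few lines; intervals are modeled as `(lower, upper)` tuples (an illustrative encoding). Note that for proper intervals the condition l₁ > u₂ already implies the intervals do not overlap.

```python
def dominates(a, b):
    """Interval-order dominance: a = (l_a, u_a) dominates b = (l_b, u_b)
    iff l_a > u_b (which, for proper intervals, implies non-overlap)."""
    return a[0] > b[1]

def indifferent(a, b):
    # Overlapping (or touching) intervals are incomparable: indifference.
    return not dominates(a, b) and not dominates(b, a)
```

With scoring intervals of width reflecting measurement error, an applicant is strictly preferred only when even the pessimistic end of their score beats the optimistic end of the other's.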
In a previous paper we showed the existence of a stable matching and provided criteria for the applicant Pareto-optimality of a stable matching, based on Stable Improvement Cycles.
However, the Pareto-efficient stable mechanism is not (in general) strategy-proof for applicants. We provide a strategy-proof applicant-proposing deferred acceptance mechanism with tie-breaking, where the tie-breaking procedure is organized in a special way. This special tie-breaking lowers the chances of obtaining an applicant-inefficient stable matching (in comparison with random tie-breaking).
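For reference, the classical applicant-proposing deferred acceptance algorithm for a one-to-one market with strict preferences can be sketched as follows (the data layout and the toy instance in the test are illustrative assumptions; the paper's mechanism adds its special tie-breaking on top of this procedure).

```python
def deferred_acceptance(applicant_prefs, university_prefs):
    """applicant_prefs[a]: list of universities, best first;
    university_prefs[u]: list of applicants, best first.
    Returns a stable matching as {university: applicant}."""
    rank = {u: {a: i for i, a in enumerate(prefs)}
            for u, prefs in university_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}
    match = {}                        # university -> tentatively held applicant
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                  # a has exhausted its preference list
        u = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        if u not in match:
            match[u] = a              # u tentatively accepts a
        elif rank[u][a] < rank[u][match[u]]:
            free.append(match[u])     # u prefers a; former holder re-enters
            match[u] = a
        else:
            free.append(a)            # u rejects a; a proposes further
    return match
```

With indifferences (e.g. interval orders), `university_prefs` must first be refined into strict lists by a tie-breaking rule, and the choice of that rule is exactly what the special procedure above targets.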
Distributed environments with a decoupling of users from resource providers are generally termed utility Grids. The paper focuses on the problems of efficient job flow distribution and scheduling in virtual organizations (VOs) of utility Grids, while respecting the VO stakeholders' preferences and providing dependable strategies for resource utilization. An approach based on a combination of the cyclic scheduling scheme, backfilling, and several heuristic procedures is proposed and studied. Comparative simulation results are presented for different algorithms and heuristics depending on the resource domain composition and heterogeneity. The considered scheduling approaches provide different benefits depending on the VO scheduling objectives. The results justify the use of the proposed approaches over a broad range of the considered resource environment parameters.
Previous publications presented an implementation of a coarse-grained parallel Branch-and-Bound (B&B) method. It supports two strategies: decomposition of the feasible domain into a set of subproblems, and "multisearch" (concurrently solving the same problem with different settings of the B&B method). In both cases several B&B solver processes exchange the best values of the goal function on the feasible solutions they have found. Earlier, it was found that this approach gives a noticeable speed-up in solving the original optimization problem. The method has been implemented as an Everest application called DDBNB (Domain Decomposition B&B). It is based on the Everest distributed computing platform, everest.distcomp.org, and the open-source solvers CBC, projects.coin-or.org/Cbc, and SCIP, scip.zib.de. Originally, DDBNB was tested on discrete optimization problems (Traveling Salesman and Load Balancing). This paper presents the application of DDBNB to global optimization problems with polynomial functions (the SCIP solver supports this class of problems). Well-known problems concerning optimal placements of points on a 3D sphere are considered: the Tammes and Thomson problems. The first problem is to maximize the minimal distance between any two of N points; the second is to minimize the electrostatic Coulomb energy of unit charges placed at N points. Both may be represented as mathematical programming problems with polynomial functions. The feasible domains of both problems are direct products of 3D spheres, and different decompositions of these domains by linear inequalities into hundreds of subproblems have been tested. The tests performed gave promising results and crucial acceleration for the Tammes problem with N=7,8,9 and the Thomson problem with N=5. One should note that the Thomson problem received a computer-aided proof (substantially based on the problem's specifics) less than a decade ago, whereas the proposed approach is rather generic and does not go into the details of the problem to be solved.
The formulation of the original problems and the decomposition strategies were implemented via Pyomo, pyomo.org. All subproblems were passed to the solvers as AMPL stubs (also known as NL files). The use of the Everest platform made it possible to involve HPC resources of the NRC Kurchatov Institute in the experiments.
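As a sketch, the Tammes problem admits the following polynomial mathematical programming formulation (the notation here is ours, not necessarily the paper's):

```latex
% Tammes: maximize the minimal pairwise distance of N points on the unit sphere
\max_{t,\; x_1,\dots,x_N \in \mathbb{R}^3} \; t
\quad \text{s.t.} \quad
\|x_i - x_j\|^2 \ge t, \quad 1 \le i < j \le N,
\qquad
\|x_i\|^2 = 1, \quad i = 1,\dots,N.
```

The Thomson objective, $\sum_{i<j} \|x_i - x_j\|^{-1}$, can likewise be brought into polynomial form, e.g. by introducing auxiliary variables $e_{ij} \ge 0$ with constraints of the form $e_{ij}^2\,\|x_i - x_j\|^2 = 1$ and minimizing $\sum_{i<j} e_{ij}$.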
We describe the method of pattern analysis and the results of its application to the problem of analyzing the development of science, education, and the success of innovative activity in the regions of the Russian Federation. We examine characteristics of the regions of Russia, such as the level of socio-economic conditions and the potential and efficiency of science, education, and innovative activity, from 2007 to 2010. We also obtain a classification of regions by the similarity of the internal structure of these indicators, construct trajectories of regional development over time, and find groups of regions pursuing similar strategies.
In this paper, we deal with problems of efficient resource management and scheduling in utility Grids, where global job flows from external users coexist with resource owners' local tasks under the condition of non-dedicated resources. Competition for resource reservation between independent users and the local and global job flows substantially complicates scheduling and the provision of the necessary quality of service. The meta-scheduling model justified in this work assumes a complex combination of job flow dispatching and application-level scheduling methods for jobs, as well as resource sharing and consumption policies established in virtual organizations (VOs) and based on economic principles. A solution to the problem of fair resource sharing among VO stakeholders is proposed, together with simulation studies.
Behavior planning is known to be one of the basic cognitive functions, essential for any cognitive architecture of any control system used in robotics. At the same time, most of the widespread planning algorithms employed in those systems are developed using only the approaches and models of Artificial Intelligence and do not take into account the numerous results of cognitive experiments. As a result, there is a strong need for novel methods of behavior planning suitable for modern cognitive architectures aimed at robot control. One such method is presented in this work and is studied within a special class of navigation tasks called the smart relocation task. The method is based on a hierarchical two-level model of abstraction and knowledge representation, i.e. symbolic and subsymbolic. On the symbolic level, a sign world model is used for knowledge representation, and a hierarchical planning algorithm, MAP, is utilized for planning. On the subsymbolic level, the task of path planning is considered and solved as a graph search problem. The interaction between both planners is examined, and the inter-level interfaces and feedback loops are described. Preliminary experimental results are presented.
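The subsymbolic graph-search step can be sketched as A* on a 4-connected grid with a Manhattan heuristic; the grid encoding (`'#'` for obstacles) is an illustrative assumption, not the paper's representation.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid given as a list of strings ('#' = obstacle).
    Returns the shortest path length in steps, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}                    # cheapest known g per cell
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g
        if g > best.get(cur, float('inf')):
            continue                     # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if (0 <= r < len(grid) and 0 <= c < len(grid[0])
                    and grid[r][c] != '#'
                    and g + 1 < best.get((r, c), float('inf'))):
                best[(r, c)] = g + 1
                heapq.heappush(open_heap, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None
```

A symbolic planner could then invoke such a routine as the low-level executor for each abstract relocation step it produces.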
This paper presents slot selection algorithms within an economic model for independent batch job scheduling in distributed computing with non-dedicated resources. Existing approaches to resource co-allocation and multiprocessor job scheduling in economic models of distributed computing are based on a search for time slots in resource occupancy schedules. The sought time slots must match the requirements on the necessary span, computational resource properties, and cost. Usually such scheduling methods consider only one suitable variant of the time-slot set. This paper discloses a scheduling scheme that features a multi-variant search. Two algorithms of linear complexity for the search of alternative variants are compared. Having several optional resource configurations for each job makes it possible to optimize the execution of the whole batch of jobs and to increase the overall efficiency of scheduling.
An approach is suggested for solving optimization problems with an aggregated criterion given as a linear convolution of partial criteria, where the weights are obtained on the basis of ranking the partial criteria in order of importance. A procedure based on the small-parameter method is proposed in order to estimate the change in optimal solutions under an elementary change of expert judgments.
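One common way to turn an importance ranking into convolution weights is the rank-sum scheme sketched below; this particular weighting is an illustrative assumption and not necessarily the rule used in the paper.

```python
def rank_sum_weights(n):
    """Rank-sum weights for criteria ranked 1 (most important) .. n:
    w_i = (n + 1 - i) / (n (n + 1) / 2), so the weights sum to 1 and
    decrease with decreasing importance."""
    total = n * (n + 1) / 2
    return [(n + 1 - i) / total for i in range(1, n + 1)]

def convolution(values, weights):
    """Aggregated criterion: linear convolution of partial criteria values."""
    return sum(w * v for w, v in zip(weights, values))
```

The small-parameter analysis would then perturb such weights slightly (e.g. after swapping two adjacent ranks) and track how the optimal solution of the convolved problem responds.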
Cloud computing has emerged as a new paradigm for on-demand access to a vast pool of computing resources and provides an alternative to using on-premises resources. This paper discusses the challenges related to using cloud computing infrastructures for scientific computing. An approach based on the Everest platform that addresses these challenges is presented, along with a prototype integration of Everest with Google Compute Engine. The proposed integration enables Everest users to seamlessly provision and use cloud-based computing resources for running different types of workloads, including HPC and HTC applications. In contrast to other efforts, the presented approach also supports building and sharing domain-specific web services that automate the execution of applications on dynamically provisioned cloud resources or hybrid infrastructures.