Publications in this section: 30


Working paper

Added: Apr 8, 2015

Working paper

The Competitive Industrial Performance (CIP) index, developed by UNIDO experts, is designed as a measure of national competitiveness. The index aggregates eight observable variables representing different dimensions of competitive industrial performance. Instead of the cardinal aggregation function used by the CIP's authors, it is proposed to apply ordinal ranking methods borrowed from social choice: either direct ranking methods based on the majority relation (e.g., the Copeland rule or the Markovian method), or a multistage procedure of selecting and excluding the best alternatives, as determined by a majority-relation-based social choice solution concept (tournament solution), such as the uncovered set and the minimal externally stable set. The same method of binary comparisons based on the majority rule is used to analyse rank correlations. It is demonstrated that the ranking is robust, but some of the new aggregate rankings represent the set of criteria better than the original CIP-based ranking.
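As a minimal sketch of one of the direct ranking methods named above, the Copeland rule can be computed from a preference profile as follows (the profile representation and toy data are ours, not the paper's CIP variables):

```python
from itertools import combinations

def copeland_scores(profile):
    """Copeland scores from a profile of strict rankings.

    profile: list of rankings, each a list of alternatives, best first.
    An alternative's score is its wins minus losses in the majority
    relation built from pairwise vote counts.
    """
    alts = profile[0]
    pos = [{a: i for i, a in enumerate(r)} for r in profile]
    score = {a: 0 for a in alts}
    for a, b in combinations(alts, 2):
        a_wins = sum(1 for p in pos if p[a] < p[b])  # voters preferring a to b
        b_wins = len(profile) - a_wins
        if a_wins > b_wins:
            score[a] += 1
            score[b] -= 1
        elif b_wins > a_wins:
            score[b] += 1
            score[a] -= 1
    return score
```

Sorting alternatives by their Copeland score then yields the aggregate ranking; the tournament solutions mentioned in the abstract (uncovered set, minimal externally stable set) start from the same majority relation.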

Added: Sep 25, 2014

Working paper

We propose a new method for assessing agents' influence in network structures, which takes into consideration node attributes, the individual and group influence of nodes, and the intensity of interactions. This approach helps us identify both explicit and hidden central elements that cannot be detected by classical centrality measures or other indices.

Added: Jul 13, 2016

Working paper

An approach is proposed to detect structural shifts in time series under the assumption of non-linear dependence on lagged values of the dependent variable. Copulas are used to model the non-linear dependence of the time-series components, and several useful properties of applying copulas to time series are discussed. To identify the break, a copula structural-shift test is applied. Quarterly US GDP growth data from 1947 to 2012 serve as an empirical example. It is shown that the proposed approach identifies the recession of 1981-1982 as the key break date in the time structure of the GDP growth-rate series, a break that cannot be identified by standard structural-break tests.

Added: Feb 10, 2013

Working paper

Data Envelopment Analysis (DEA) is a well-known non-parametric technique of efficiency evaluation which is actively used in many economic applications. However, DEA is not well applicable when a sample consists of firms operating under drastically different conditions, and it is generally difficult to determine to what extent the analyzed sample is heterogeneous. We offer a new method of efficiency estimation based on a sequential exclusion of alternatives combined with the standard DEA approach, which allows one to assess efficiency for a heterogeneous set of firms. We establish a connection between the efficiency scores obtained via the standard DEA model and those obtained via our algorithm. We also evaluate 29 Russian universities and compare the results obtained by the two techniques.

Added: Oct 2, 2013

Working paper

In this work I analyze the effect of electoral uncertainty on issue trespassing. I build a model of political competition between two candidates in which each candidate decides how much effort to spend in order to increase her competence on each of two issues. It is assumed that there are two groups of voters, each believing that one of the two issues is more salient. Each candidate is strong on one issue (so the costs of increasing competence on that issue are lower) and weak on the other. I also assume that there is electoral uncertainty: the voters receive a valence shock in favor of one of the two candidates. I show that the effect of electoral uncertainty is conditional upon the payoffs the candidates receive for their vote shares. Electoral uncertainty results in more issue trespassing (candidates focusing more on the strong issues of their opponents) only if winning the election by a large margin confers additional benefits relative to winning by a narrow margin, while there are no benefits from losing by a narrow margin relative to losing by a wide margin. I also show that competition on both issues is strongest when the voters' valuation of these issues is homogeneous, when more information on voter preferences is available to the candidates, and when the costs of competing on either strong or weak issues are lower.
This work was completed with the support of the HSE Scientific Fund, grant no. 11-01-0035

Added: Oct 2, 2013

Working paper

A new decomposition approach to complex systems analysis is suggested. The conventional approach deals with the construction of a single, "most correct" decomposition of the system under consideration. The suggested approach, by contrast, is oriented towards constructing a family of decompositions whose properties reveal important meaningful features of the initial system. The expedience and applicability of the elaborated approach are illustrated by three well-known and important cases: automatic classification, a political voting body, and the stock market. In these cases, the presented results cannot be obtained by other known methods. These examples confirm the advantages of the suggested approach.

Added: Oct 20, 2017

Working paper

This paper proposes a novel method, referred to as ParGenFS, for finding a most specific generalization of a query set represented by a fuzzy set of topics assigned to leaves of the rooted tree of a taxonomy. This generalization lifts the query set to one or several head subjects in the higher ranks of the taxonomy. The head subject is supposed to tightly cover the query set, however dispersed it may be over the branches of the tree, possibly bringing in some gaps, that is, taxonomy nodes covered by the head subject but irrelevant to the query set. To balance this, we admit some offshoots, that is, nodes belonging to the query set but not covered by the head subject. The method globally minimizes the total number of head subjects, gaps, and offshoots, weighted differently. Our algorithm is applied to the structural analysis and description of a collection of 17,685 abstracts of research papers published in 17 Springer journals on data science over the 20-year period 1998–2017. Our taxonomy of Data Science (DST) is extracted from the Association for Computing Machinery Computing Classification System 2012 (ACM-CCS), a six-layer hierarchical taxonomy manually developed by a team of ACM experts. The DST also involves a number of additions of our own that detail the leaves of the ACM-CCS taxonomy. We find fuzzy clusters of leaf topics over the text collection using specially developed machinery. Three of the clusters are indeed thematic, relating to the Data Science sub-areas of (a) learning, (b) information retrieval, and (c) clustering. These three clusters are lifted with ParGenFS in the DST, which allows us to draw conclusions about the tendencies of development in these areas.

Added: Jan 29, 2019

Working paper

Ten years after the global crisis of 2007–2009, financial regulation has been tightening in step with the growth of world stock markets. The latter have hit historical maxima two to three times higher than on the eve of the crisis. Such prudential tightening incentivizes the use of financial technologies to create new banking products and optimize the regulatory burden. At the same time, it inflates the stock market bubble, leading to greater fragility and a higher probability of another global crisis. The present research shows that human psychology has to be accounted for, both for the humans managing financial institutions as objects of regulation and for the humans who benefit from regulation when consuming financial services. It is shown that the unconventional policy measure of abandoning both regulation and state deposit insurance can enhance financial stability: it induces more conservative behaviour and diminishes the risk appetite of both financiers and their clients.

Added: Oct 12, 2018

Working paper

Generalized knockout tournaments with an arbitrary number of participants per match are designed. A combinatorial approach to generalized knockout tournament seedings is developed. Several properties of knockout tournament seedings are investigated. Several new knockout tournament seedings are proposed and justified by a set of properties.
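The paper's seedings generalize matches to more than two participants; as a baseline illustration only, the classical two-player standard seeding (seed 1 vs. seed 2^n, and so on within each half of the bracket) can be built recursively:

```python
def standard_seeding(n_rounds):
    """Standard single-elimination seeding for 2**n_rounds players.

    Built recursively: in a bracket of 2**k seeds, seed s is paired
    with seed 2**k + 1 - s. The returned list is in bracket order,
    so consecutive pairs meet in the first round.
    """
    order = [1]
    for k in range(1, n_rounds + 1):
        total = 2 ** k
        order = [s for seed in order for s in (seed, total + 1 - seed)]
    return order
```

For eight players this produces the familiar bracket 1-8, 4-5, 2-7, 3-6, which guarantees that the top two seeds can only meet in the final.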

Added: Dec 12, 2017

Working paper

We analyze the heterogeneity of the educational system on the basis of one parameter: the entrance grades of university students. We propose a mathematical model based on the construction of an interval order over universities. We use the Hamming distance to evaluate the heterogeneity of the educational system, and the Unified State Examination (USE) scores of Russian students to illustrate the application of the model. We show that institutions admitting weak students turn the whole system of universities into a poorly structured, non-homogeneous system. In contrast, after the weakest part is removed, the remaining set of universities becomes a well-structured system.
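A minimal sketch of the kind of comparison involved, with made-up score intervals (our simplified reading of the interval-order construction; the paper's model and heterogeneity measure are richer):

```python
def interval_relation(intervals):
    """'Strictly above' relation between universities represented by
    (min, max) intervals of admitted students' scores: university i is
    above j iff even its weakest admitted student beats j's strongest."""
    n = len(intervals)
    return [[1 if intervals[i][0] > intervals[j][1] else 0
             for j in range(n)] for i in range(n)]

def hamming(A, B):
    """Hamming distance between two 0/1 relation matrices: the number
    of ordered pairs on which the two relations disagree."""
    return sum(a != b for ra, rb in zip(A, B) for a, b in zip(ra, rb))
```

Comparing a well-separated system, e.g. intervals (80, 95), (60, 75), (40, 55), with an overlapping one, e.g. (80, 95), (60, 85), (40, 70), the relation loses comparable pairs, and the Hamming distance between the two relation matrices quantifies that loss.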

Added: Apr 21, 2014

Working paper

The paper analyses international migration flows from a network perspective by evaluating centrality indices. To find the most influential countries in the international migration network, both classical centrality indices and new centrality indices are computed. The new indices account for short-range (SRIC) and long-range (LRIC) indirect interactions and for a node attribute, the population of the destination country. The model is applied to annual data on international migration flows from 1970 to 2013 provided by the United Nations. The analysis is performed for one year of each decade, and the dynamics of the indices are described. It is shown that countries with huge migration flows are identified by the classical as well as the SRIC and LRIC indices, while the SRIC and LRIC indices additionally point out countries with considerable outflows of migrants to countries highly involved in international migration, as well as the most interconnected countries.
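The SRIC and LRIC indices are defined in the paper itself; as a point of reference, the classical weighted-degree (strength) centralities they are compared against can be computed directly. The country codes and flow numbers below are made up for illustration:

```python
def strength_centrality(flows):
    """Weighted-degree (strength) centrality for a directed network.

    flows: dict mapping (origin, destination) to flow size.
    Returns two dicts: total outflow and total inflow per node.
    """
    out_s, in_s = {}, {}
    for (o, d), w in flows.items():
        out_s[o] = out_s.get(o, 0) + w
        in_s[d] = in_s.get(d, 0) + w
    return out_s, in_s
```

In a migration network the inflow strength ranks popular destination countries, while the outflow strength ranks major origin countries; the indirect-influence indices of the paper refine this by looking beyond immediate neighbours.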

Added: Nov 2, 2016

Working paper

We consider different choice procedures, such as scoring rules, rules using the majority relation, rules based on a value function, and rules based on a tournament matrix, which are used in social and multi-criteria choice problems. We focus on studying properties that show how the final choice changes due to changes in preferences or in the set of feasible alternatives. As a result, a theorem is provided showing which normative properties (rationality, monotonicity, non-compensability) are satisfied by the given choice procedures.
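As a sketch of the simplest family mentioned, a scoring rule: under the Borda rule with m alternatives, a voter's k-th ranked alternative (0-based) earns m - 1 - k points, and the alternative with the highest total is chosen (toy profile, not from the paper):

```python
def borda(profile):
    """Borda scoring rule over a profile of strict rankings.

    profile: list of rankings, each a list of alternatives, best first.
    Returns the winning alternative and the full score dict.
    """
    m = len(profile[0])
    score = {}
    for ranking in profile:
        for k, alt in enumerate(ranking):
            score[alt] = score.get(alt, 0) + (m - 1 - k)
    return max(score, key=score.get), score
```

Monotonicity, one of the normative properties studied, is easy to see here: lifting the winner in some voter's ranking can only increase its score, so it remains the winner.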

Added: Oct 20, 2015

Working paper

In general, the complexity of algorithms for calculating power indices, both classical and preference-based, grows exponentially with the number of agents. However, in the important special case where all players have the same number of votes, preference-based indices of most types can be computed for all voting bodies.
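In the symmetric case described above, the classical indices even admit a closed form. With n players holding one vote each and quota q, a player is critical exactly in the winning coalitions of size q that contain her (this sketch covers only the classical Banzhaf index; the preference-based indices of the paper additionally require the agents' preferences):

```python
from math import comb

def banzhaf_symmetric(n, q):
    """Raw Banzhaf swing count and normalized index in the symmetric
    case: n one-vote players, quota q. A player is critical exactly in
    the comb(n - 1, q - 1) coalitions of size q containing her, so by
    symmetry the normalized index is 1/n for every player."""
    return comb(n - 1, q - 1), 1 / n
```

The same counting argument is why the symmetric case escapes the exponential blow-up: no enumeration of the 2**n coalitions is needed.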

Added: Oct 27, 2015

Working paper

This report elaborates on an approach to measuring the level of research results recently proposed by one of the co-authors. The approach involves a taxonomy of the research domain, that is, a hierarchy representing the domain's structure. The level of results is evaluated according to the taxonomy ranks of the subjects that have emerged, or have been crucially transformed, due to the results of the scientist under consideration. We also consider two more conventional approaches for scoring research impact, over (a) citation metrics and (b) merit metrics. To aggregate individual criteria in these approaches, we use an in-house automated criteria-weighting method oriented towards as tight a representation of the strata as possible. To compare and combine the three approaches empirically, we use a sample of publicly available data on scientists in the areas of data analysis and machine learning. As the domain's taxonomy, we use the corresponding part of the ACM Computing Classification System 2012, slightly modified to better reflect the results of the scientists in our sample. The obtained ABC stratifications concur with intuition. At the same time, they are rather far from each other. This supports the view that the three approaches (citations, merits, taxonomic rank) capture different aspects of research impact, and therefore a good method for scoring it should involve all three.

Added: Dec 25, 2014

Working paper

We introduce and study two specific types of manipulation in the social choice problem. The first is standard manipulation with the restriction that coalitions can be formed only by voters with the same first alternative in their preferences. The second type additionally demands that, after the manipulation, the top alternative in the preferences of the coalition's participants wins the election. The probabilities that such manipulation occurs in a 3-candidate election of Borda type are computed. An algorithm for producing necessary and sufficient conditions for a profile to be manipulable under weighted scoring voting rules is presented.
The author is grateful to F. Aleskerov for many enlightening discussions. The author acknowledges the support of the International Laboratory of Decision Choice and Analysis (National Research University Higher School of Economics).

Added: Oct 2, 2013

Working paper

Ranking is an important part of several areas of contemporary research, including social sciences, decision theory, data analysis and information retrieval. The goal of this project is to align developments in quantitative social sciences and decision theory with the current thought in computer science, including a few novel results. Specifically, we consider binary preference relations, the so-called weak orders that are in one-to-one correspondence with rankings. We show that the conventional symmetric difference distance between weak orders, considered as sets of ordered pairs, coincides with the celebrated Kemeny distance between the corresponding rankings, despite the seemingly much simpler structure of the former. Based on this, we review several properties of the geometric space of weak orders involving the ternary relation “between”, and contingency tables for cross-partitions. Next we reformulate the consensus ranking problem as a variant of finding an optimal linear ordering, given a correspondingly defined consensus matrix. The difference is in a subtracted term, the partition concentration, that depends only on the distribution of the objects in the individual parts. We apply our results to the conventional Likert scale to show that the Kemeny consensus rule is rather insensitive to the data under consideration and, therefore, should be supplemented with more sensitive consensus schemes.
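The coincidence stated above is proved in the paper; this sketch only illustrates the symmetric-difference side on a toy example, with weak orders represented as tier lists (our representation):

```python
def weak_order_pairs(tiers):
    """A weak order as a set of ordered pairs (a, b), meaning 'a is
    ranked at least as high as b'; ties contribute both (a, b) and
    (b, a). tiers: list of sets of alternatives, best tier first."""
    rank = {a: i for i, tier in enumerate(tiers) for a in tier}
    return {(a, b) for a in rank for b in rank
            if a != b and rank[a] <= rank[b]}

def sym_diff_distance(t1, t2):
    """Symmetric-difference distance between two weak orders given as
    tier lists: the number of ordered pairs present in exactly one of
    the two relations."""
    return len(weak_order_pairs(t1) ^ weak_order_pairs(t2))
```

For the linear orders a > b > c and b > a > c, the distance is 2 (the single discordant pair counted in both directions), while tying a and b instead of swapping them gives distance 1, matching the half-penalty for ties in the Kemeny distance.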

Added: Oct 14, 2017

Working paper

It is well known that a pure-strategy Nash equilibrium does not exist in a two-player rent-seeking contest when the contest success function parameter α is greater than two. We analyze the contest using the concept of an equilibrium in secure strategies, a generalization of the Nash equilibrium defined by two conditions: no player can profit from worsening the situation of the other players, and no player can profit without creating a threat of losing more than he gains. We show that such an equilibrium always exists. Moreover, for α > 2 it is unique up to a permutation and has lower rent dissipation than the mixed-strategy Nash equilibrium.
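For reference, the two-player rent-seeking contest in question is usually written with the Tullock contest success function (notation ours):

$$
p_i(x_i, x_j) = \frac{x_i^{\alpha}}{x_i^{\alpha} + x_j^{\alpha}}, \qquad
u_i(x_i, x_j) = v\, p_i(x_i, x_j) - x_i, \quad i \ne j,
$$

where $x_i \ge 0$ is player $i$'s effort, $v$ is the rent, and $\alpha$ is the decisiveness parameter; the standard symmetric pure-strategy Nash equilibrium exists only for $\alpha \le 2$, which is what motivates the secure-strategies concept above.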

Added: Oct 2, 2013

Working paper

The paper presents two new approaches to modeling the interaction of small and medium price-taking traders with a stock exchange. In the framework of these approaches, traders can form and manage their portfolios of financial instruments traded on a stock exchange using linear, integer, and mixed programming techniques. Unlike the authors' previous publications on the subject, besides standard securities, the present publication considers derivative financial instruments such as futures and options contracts. When a trader can treat the price changes of each financial instrument of her interest as values of a random variable with a known (for instance, uniform) probability distribution, finding an optimal composition of her portfolio is reducible to solving an integer programming problem. When the trader possesses no particular information on the probability distribution of this random variable for the financial instruments of her interest but can estimate the areas to which the prices of groups of financial instruments are likely to belong, a game-theoretic approach to modeling her interaction with the stock exchange is proposed. In the antagonistic games modeling the interaction in this case, finding the exact value of the global maximin describing the trader's guaranteed financial result in playing against the stock exchange, along with the vectors at which this value is attained, is reducible to solving a mixed programming problem. Finding an upper bound for this maximin (and the vectors at which this upper bound is attained) is reducible to finding a saddle point in an auxiliary antagonistic game on disjoint polyhedra, which can be done by solving a dual pair of linear programming problems.

Added: Jun 27, 2016

Working paper

Two mathematical models formalizing a trader's decision-making process in developing and changing her investment portfolio on a stock exchange are presented. According to the first model, the trader can correctly predict future values of the financial securities of her interest. In this case, the problem of finding optimal strategies for investing in these securities is reduced to solving a linear programming problem. Under the second model, by means of linear inequalities of a balance type, the trader can estimate the area in which the values of the whole spectrum of these securities may change. In this case, the same problem is formulated as an antagonistic game, analogous to a game against nature, with a nonlinear payoff function. It is proven that saddle points in this game can be found by solving a dual pair of linear programming problems.

Added: May 31, 2015

Working paper

In multivariate ranking, one may concentrate not on strictly ordering the objects but rather on tying them into groups of more or less similar entities. Following sociology and mineralogy, such tied groups can be referred to as strata. The popular "univariate" ABC-classification problem in fact partitions the set into three strata. This work proposes two novel algorithms for automatic stratification with a prespecified number of strata. The first algorithm chooses criteria weights so that the objects within each stratum lie as close to each other as possible on the axis of the aggregate criterion. The second, in contrast, takes all criteria to be incomparable, so that the strata approximate the Pareto boundaries of the vector preference relation. These methods are experimentally compared with a set of known ranking methods, for which we propose strata-generation mechanisms.
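The paper's first algorithm also optimizes the criteria weights; assuming the aggregate scores are already given, the inner stratification step can be sketched as an exact dynamic program over cut points (illustrative only, not the authors' implementation):

```python
def stratify(scores, k):
    """Split scores into k contiguous strata (best first) minimizing
    the total within-stratum sum of squared deviations, by dynamic
    programming over cut points on the sorted axis."""
    xs = sorted(scores, reverse=True)
    n = len(xs)

    def ssq(i, j):  # within-stratum spread of xs[i:j]
        seg = xs[i:j]
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    INF = float('inf')
    best = [[INF] * (k + 1) for _ in range(n + 1)]  # best[j][s]: cost of
    cut = [[0] * (k + 1) for _ in range(n + 1)]     # first j items, s strata
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for s in range(1, min(j, k) + 1):
            for i in range(s - 1, j):  # last stratum is xs[i:j]
                c = best[i][s - 1] + ssq(i, j)
                if c < best[j][s]:
                    best[j][s], cut[j][s] = c, i
    strata, j = [], n  # walk the cut points back to recover the strata
    for s in range(k, 0, -1):
        i = cut[j][s]
        strata.append(xs[i:j])
        j = i
    return strata[::-1]
```

On a toy set of scores with two clear gaps, e.g. 95, 92, 70, 68, 30 with k = 3, the program recovers the intuitive A, B, and C strata.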

Added: Oct 2, 2013
