In situations where a group of people has to make a decision based on a set of individual preferences, some aggregation method, in particular voting, is used. One of the main problems for any non-dictatorial social choice rule is the possibility for voters to achieve a more preferable voting outcome by misrepresenting their preferences. Such actions by voters are called manipulation, or strategic voting. One approach to comparing social choice rules in terms of how hard they are to manipulate is to find the complexity classes of the manipulation problems for a given aggregation method. In this work, we present a survey of studies of the complexity classes of manipulation problems under various model assumptions and constraints.
Opinion dynamics in network structures of a special type are considered. Each node consists of two agents interacting with each other. The properties of consensus arising in such structures are studied using the DeGroot model.
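The DeGroot update can be sketched as follows; the trust matrix below (two nodes of two agents each, with assumed weights) is an illustrative stand-in for the paper's structure, not taken from it.

```python
import numpy as np

# DeGroot opinion dynamics sketch: two nodes (A, B), two agents per node.
# Each agent weighs itself, its in-node partner, and its counterpart in the
# neighboring node; all weights are illustrative assumptions. W is row-stochastic.
W = np.array([
    [0.4, 0.3, 0.3, 0.0],   # agent 1 of node A
    [0.3, 0.4, 0.0, 0.3],   # agent 2 of node A
    [0.3, 0.0, 0.4, 0.3],   # agent 1 of node B
    [0.0, 0.3, 0.3, 0.4],   # agent 2 of node B
])
x = np.array([1.0, 0.0, 0.5, 0.25])   # initial opinions

for _ in range(200):                   # iterate x <- W x
    x = W @ x

# W is doubly stochastic and primitive, so opinions converge to consensus
# at the average of the initial opinions (0.4375 here).
print(np.round(x, 4))
```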
The role of strategic IT consulting in the consulting service lifecycle is determined. Practical issues of its basic directions, such as strategic IT audit, IT strategy development, and IT service organization and transfer to outsourcing, are discussed.
Twelve sets, proposed as social choice solution concepts, are compared: the core, five versions of the uncovered set, two versions of the minimal weakly stable sets, the uncaptured set, the untrapped set, the minimal undominated set (strong top cycle), and the minimal dominant set (weak top cycle). The main results presented are the following. 1) A criterion to determine whether an alternative belongs to a minimal weakly stable set is found. It establishes the logical connection between minimal weakly stable sets and the covering relation. 2) Both in tournaments and in the general case, it is determined for all twelve sets whether each pair of them is related by inclusion. 3) In tournaments, the concept of stability is employed to generalize the notions of weakly stable and uncovered sets. New concepts of k-stable alternatives and k-stable sets are introduced, and their properties and mutual relations are explored. 4) The concept of the minimal dominant set is generalized. It helps to establish that in the general case all dominant sets are ordered by strict inclusion. In tournaments, the hierarchies of the classes of k-stable alternatives and k-stable sets combined with the system of dominant sets constitute the tournament’s structure (“microstructure” and “macrostructure,” respectively). This internal structure may be treated as a system of reference based on differences in degrees of stability. An algorithm for calculating the minimal dominant sets and the classes of k-stable alternatives is also given.
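The covering relation central to several of these solution concepts can be computed directly from a tournament matrix. A minimal sketch, on a hypothetical four-alternative tournament (a 3-cycle plus one alternative beaten by everyone) that is not taken from the paper:

```python
import numpy as np

# Hypothetical tournament: beats[i, j] = True iff alternative i beats j.
# Alternatives 0, 1, 2 form a cycle; alternative 3 loses to all of them.
beats = np.array([
    [0, 1, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=bool)
n = len(beats)

def covers(y, x):
    # y covers x: y beats x and y beats every alternative that x beats
    return beats[y, x] and all(beats[y, z] or not beats[x, z] for z in range(n))

uncovered = [x for x in range(n)
             if not any(covers(y, x) for y in range(n) if y != x)]
print(uncovered)  # → [0, 1, 2]: only the dominated alternative 3 is covered
```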
The direct and inverse projections (DIP) method was proposed to reduce the feature space to a given dimension; the method is oriented toward problems of randomized machine learning and based on a procedure of “direct” and “inverse” projection. The “projector” matrices are determined by maximizing the relative entropy. It is suggested to estimate the information losses by the absolute error calculated with the Kullback–Leibler function (SRC method). An example illustrating these methods was given.
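The Kullback–Leibler function used as the information-loss measure can be sketched for discrete distributions; the two distributions below (standing in for feature distributions before and after projection) are illustrative assumptions.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions
    with matching supports (q must be positive wherever p is)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.3, 0.2]   # assumed distribution before projection
q = [0.4, 0.4, 0.2]   # assumed distribution after projection
print(round(kl(p, q), 4))  # → 0.0253
```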
This paper suggests a new randomized forecasting method based on entropy-robust estimation for the probability density functions (PDFs) of random parameters in dynamic models described by systems of linear ordinary differential equations. The structure of the PDFs of the parameters and measurement noises with the maximal entropy is studied. We generate the sequence of random vectors with the entropy-optimal PDFs using a modification of the Ulam–von Neumann method. The developed method of randomized forecasting is applied to the world population forecasting problem.
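The Ulam–von Neumann approach is, at its core, acceptance–rejection sampling. A minimal sketch, assuming an illustrative exponential-shaped density p(x) ∝ exp(−θx) on [0, 1] (a typical form of entropy-optimal PDFs; θ = 2 is an assumed value, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
M = 1.0   # sup of the unnormalized density exp(-theta*x) on [0, 1]

def sample(n):
    """Von Neumann acceptance-rejection from p(x) ∝ exp(-theta*x) on [0, 1]."""
    out = []
    while len(out) < n:
        x = rng.uniform(0.0, 1.0)      # proposal from the uniform envelope
        u = rng.uniform(0.0, M)        # uniform height under the envelope
        if u <= np.exp(-theta * x):    # accept with probability p(x)/M
            out.append(x)
    return np.array(out)

xs = sample(20000)
# sample mean should be close to the truncated-exponential mean ≈ 0.3435
print(round(xs.mean(), 3))
```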
In each node of a network, the economy is described by a simple two-period Romer model of endogenous growth with production and knowledge externalities. The sum of the knowledge levels in the neighboring nodes creates an externality in the production of each node of the network. The game equilibrium in the network is investigated. The agents’ solutions depending on the size of the externality are obtained. The uniqueness of the inner equilibrium is proved. The role of passive agents in network formation is studied; in particular, the possibilities of adding a passive agent to a regular network and of joining regular networks through nodes with passive agents are considered. It is shown that the sum of the knowledge levels in all nodes decreases when a new link is added.
We consider LS-, LAD-, R-, M-, S-, LMS-, LTS-, MM-, and HBR-estimates for the parameters of a linear regression model with unknown noise distribution. Using computer modeling for medium-sized samples, we compare the accuracy of the considered estimates for the most popular probability distributions of noise in a regression model. For different noise distributions, we analytically compute the asymptotic efficiencies of LS-, LAD-, R-, M-, S-, and LTS-estimates. We give recommendations for practical applications of these methods under different noise distributions in the model. We give examples on real datasets that support the advantages of robust estimates.
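The kind of Monte Carlo comparison described can be sketched by contrasting the LS estimate with a Huber M-estimate (computed via iteratively reweighted least squares) under contaminated normal noise; the model, sample size, and contamination level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated linear model y = 1 + 2x + noise with 10% gross outliers.
n = 200
x = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
noise = np.where(rng.uniform(size=n) < 0.1,
                 rng.normal(0, 10, n),   # contaminating heavy-tailed component
                 rng.normal(0, 1, n))    # nominal N(0, 1) noise
y = X @ beta_true + noise

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary least squares

def huber_irls(X, y, c=1.345, iters=50):
    """Huber M-estimate via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12        # MAD scale estimate
        w = np.clip(c * s / np.maximum(np.abs(r), 1e-12), None, 1.0)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta

beta_m = huber_irls(X, y)
print(beta_ls, beta_m)  # the M-estimate tends to stay closer to (1, 2)
```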
Foreword to the thematic issue devoted to the seventieth anniversary of Academician V.S. Tanaev
We consider a known test bench calibration scheme for a strapdown inertial navigation system (SDINS) that consists of sequential rotations of the SDINS on the test bench. We propose a mathematical formalization of this calibration scheme that lets us embed the calibration problem into the stochastic Kalman setting of the estimation problem.
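The Kalman estimation machinery into which such a calibration problem is embedded can be sketched in its simplest form: a scalar constant-bias state observed through noisy measurements. The noise variance and measurement values below are illustrative assumptions, not the paper's data.

```python
# Minimal linear Kalman filter for a constant scalar state (a sensor bias):
# no process dynamics, so the update reduces to the measurement step.
R = 0.04               # assumed measurement noise variance
x_hat, P = 0.0, 1.0    # prior estimate and covariance of the bias
z = [0.21, 0.18, 0.22, 0.20, 0.19]   # simulated bias measurements

for zk in z:
    K = P / (P + R)                  # Kalman gain
    x_hat = x_hat + K * (zk - x_hat) # measurement update
    P = (1 - K) * P                  # covariance update

print(round(x_hat, 3))  # → 0.198 (posterior mean given the N(0, 1) prior)
```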
We consider an extension of the classical model of generalized Gale-Shapley matchings. The model describes a two-sided market: on one side, universities, each of which has a quota on the number of enrolled students; on the other side, applicants, each of whom can take a single place at a university. Both applicants and universities have preferences with respect to the desired assignment. We assume that each applicant constructs a linear order on the set of desired universities, and each university has preferences that are simplest semiorders. For this modification, we show that a stable matching always exists. Moreover, we formulate necessary and sufficient conditions for Pareto optimality of the stable matching.
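The underlying deferred-acceptance mechanism can be sketched for the classical college-admissions case with strict preferences on both sides (the paper's semiorder preferences relax this setting); the applicants, universities, and quotas below are a hypothetical example.

```python
def deferred_acceptance(app_prefs, uni_prefs, quotas):
    """Applicant-proposing deferred acceptance with university quotas."""
    rank = {u: {a: i for i, a in enumerate(p)} for u, p in uni_prefs.items()}
    next_choice = {a: 0 for a in app_prefs}   # next university to propose to
    held = {u: [] for u in uni_prefs}         # tentatively held applicants
    free = list(app_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(app_prefs[a]):
            continue                          # a has exhausted her list
        u = app_prefs[a][next_choice[a]]
        next_choice[a] += 1
        held[u].append(a)
        held[u].sort(key=lambda s: rank[u][s])
        if len(held[u]) > quotas[u]:
            free.append(held[u].pop())        # worst held applicant released
    return held

# Hypothetical instance: three applicants, two one-seat universities.
apps = {"a1": ["U", "V"], "a2": ["U", "V"], "a3": ["U"]}
unis = {"U": ["a3", "a1", "a2"], "V": ["a1", "a2"]}
match = deferred_acceptance(apps, unis, {"U": 1, "V": 1})
print(match)  # → {'U': ['a3'], 'V': ['a1']}; a2 remains unmatched
```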
The problem of building the rating of a remote training system by processing the results of a run of tests was considered. The Rasch model extended to a run of tests was used. A recurrent algorithm based on the maximum-likelihood procedure and the Newton method was proposed to calculate the rating.
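The maximum-likelihood step with the Newton method can be sketched for the basic Rasch model: estimating a single examinee's ability given known item difficulties. The difficulties and the response pattern below are illustrative assumptions.

```python
import math

# Rasch model: P(correct on item i) = 1 / (1 + exp(-(theta - b_i))).
difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]   # assumed item difficulties b_i
responses    = [1, 1, 1, 0, 1]               # assumed answers (1 = correct)

def rasch_mle(b, x, theta=0.0, iters=25):
    """Newton-method MLE for ability theta (log-likelihood is concave)."""
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(theta - bi))) for bi in b]
        grad = sum(xi - pi for xi, pi in zip(x, p))    # dlogL/dtheta
        hess = -sum(pi * (1.0 - pi) for pi in p)       # d2logL/dtheta2
        theta -= grad / hess                           # Newton step
    return theta

theta_hat = rasch_mle(difficulties, responses)
# at the MLE, the expected number of correct answers equals the observed one
print(round(theta_hat, 3))
```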
For a multi-product EOQ inventory control model with rented storage space and delayed payment for orders, we propose an optimization approach that accounts for the time value of money. We estimate the influence of these delays both on the parameters of the optimal control strategy and on the efficiency of using working capital. The study covers situations when the delays let one pay for orders out of the revenue earned within the reorder intervals. We establish a necessary and sufficient condition on the duration of the delays that guarantees that such payment for orders from revenue is possible. We illustrate the characteristic features of the optimization procedures with numerical computations.
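The classical single-item EOQ quantity is the building block underneath such a model; the paper's extensions (rented storage, payment delays, time value of money) sit on top of it. A minimal sketch with assumed parameter values:

```python
import math

# Classical EOQ: minimize ordering + holding cost K*D/Q + h*Q/2 over Q.
D = 1200.0   # assumed annual demand (units/year)
K = 50.0     # assumed fixed cost per order
h = 6.0      # assumed holding cost per unit per year

Q_star = math.sqrt(2 * K * D / h)            # optimal order quantity
cycle = Q_star / D                            # reorder interval (years)
total_cost = K * D / Q_star + h * Q_star / 2  # cost rate at the optimum

print(round(Q_star, 2), round(total_cost, 2))  # → 141.42 848.53
```

At the optimum, ordering and holding cost rates are equal, which is why a payment delay shorter than the reorder interval can be financed from the cycle's own revenue.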
A new method was proposed to solve global minimization problems for Hölder functions on compact sets defined by continuous constraint functions. The method relies on batch Monte Carlo sampling intended for constructing the sequences of values of “quasi-global” minima and their decrements. A numerical procedure was proposed to generate a probabilistic stopping rule whose performance was corroborated by numerous tests and benchmarks with algorithmically defined functions.
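The batch scheme can be sketched as plain random search with a record-decrement stopping heuristic; the test function, box domain, batch size, and tolerance below are illustrative assumptions, not the paper's probabilistic rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def f(p):
    """A multimodal, Hölder-continuous test function on the box [-2, 2]^2."""
    x, y = p
    return np.sin(3 * x) * np.cos(3 * y) + 0.5 * (x ** 2 + y ** 2)

best = np.inf
for batch in range(200):
    pts = rng.uniform(-2.0, 2.0, size=(500, 2))   # one batch of samples
    record = min(f(p) for p in pts)               # batch record value
    improvement = max(best - record, 0.0)         # decrement of the record
    best = min(best, record)
    if batch > 10 and improvement < 1e-4:         # heuristic stopping rule
        break

print(round(best, 3))  # close to the global minimum of f on the box
```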