We propose a new method for conducting single transferable vote (STV) elections and provide a unified description of the classic STV procedures (the Gregory method, the Inclusive Gregory method and the Weighted Inclusive Gregory method) as iterative procedures. We also propose a modified definition of the quota that improves the theoretical properties of these procedures. The method is justified by means of a new set of axioms. We show that this method extends the Weighted Inclusive Gregory method with the modified quota definition and random equiprobable selection of a winning coalition at each iteration. The results are extended to methods that allow fractional numbers of votes.
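For background, a minimal sketch of the classic Droop quota used in STV counting. The paper's modified quota definition is not reproduced here; this shows only the standard baseline, and the function name is illustrative:

```python
def droop_quota(valid_votes: int, seats: int) -> int:
    """Classic Droop quota: the smallest integer number of votes such
    that at most `seats` candidates can each reach it."""
    return valid_votes // (seats + 1) + 1

# e.g. 1000 valid votes and 3 seats give a quota of 251
print(droop_quota(1000, 3))
```

A candidate whose tally reaches the quota is declared elected, and the surplus above the quota is transferred; the Gregory variants in the abstract differ in which ballots the transfer draws on and at what weight.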

We compare the Egalitarian rule (aka Egalitarian Equivalent, E) and the Competitive rule (aka Competitive Equilibrium with Equal Incomes, C) for dividing bads (chores). Both are welfarist: the competitive disutility profile(s) are the critical points of the Nash product on the set of efficient feasible profiles. The C rule is envy-free, Maskin monotonic, and has better incentive properties than the E rule. But, unlike the E rule, it can be wildly multivalued, admits no selection continuous in the utility and endowment parameters, and is harder to compute. Thus in the division of bads, unlike that of goods, neither rule normatively dominates the other.

There has been a surge of interest in stochastic assignment mechanisms, which have proven theoretically compelling thanks to their prominent welfare properties. In contrast to stochastic mechanisms, however, lottery mechanisms are the ones commonly used in real life to allocate indivisible goods. To help facilitate the design of practical lottery mechanisms, we provide new tools for obtaining stochastic improvements in lotteries. As applications, we propose lottery mechanisms that improve upon the widely used random serial dictatorship mechanism and a lottery representation of its competitor, the probabilistic serial mechanism. The tools we provide here can be useful in developing new welfare-enhanced lottery mechanisms for practical applications such as school choice.
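As background on the baseline mechanism the abstract improves upon, a hedged sketch of one draw of random serial dictatorship (RSD); the dict-based interface and names are illustrative, not taken from the paper:

```python
import random

def random_serial_dictatorship(preferences, goods, rng=random):
    """One draw of RSD: agents pick in a uniformly random order, each
    taking the most-preferred good still available.

    preferences: dict mapping agent -> list of goods, best first.
    """
    order = list(preferences)          # agent names
    rng.shuffle(order)                 # uniformly random picking order
    remaining = set(goods)
    assignment = {}
    for agent in order:
        for good in preferences[agent]:
            if good in remaining:
                assignment[agent] = good
                remaining.remove(good)
                break
    return assignment

prefs = {"ann": ["x", "y"], "bob": ["x", "y"]}
print(random_serial_dictatorship(prefs, ["x", "y"], random.Random(7)))
```

Averaging this draw over all orderings yields the stochastic (bistochastic) assignment that RSD induces; the probabilistic serial mechanism mentioned above produces its stochastic assignment directly, which is why a lottery representation of it is of interest.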

The Gibbard–Satterthwaite theorem is a cornerstone of social choice theory, stating that an onto social choice function cannot be both strategy-proof and non-dictatorial if the number of alternatives is at least three. The Duggan–Schwartz theorem proves an analogue in the case of set-valued elections: if the function is onto with respect to singletons, and can be manipulated by neither an optimist nor a pessimist, it must have a weak dictator. However, the assumption that the function is onto with respect to singletons makes the Duggan–Schwartz theorem inapplicable to elections which necessarily select multiple winners. In this paper we make a start on this problem by considering rules which always elect exactly two winners (such as the consulship of ancient Rome). We establish that if such a *consular election rule* cannot be expressed as the union of two disjoint social choice functions, then strategy-proofness implies the existence of a dictator. Although we suspect that a similar result holds for *k*-winner rules for *k* > 2, there appear to be many obstacles to proving it, which we discuss in detail.

We propose a generalization of the probabilistic voting model in two-candidate elections. We allow the candidates to have general von Neumann–Morgenstern utility functions defined over the voting outcomes. We show that the candidates will choose identical policy positions only if the electoral competition game is constant-sum, such as when both candidates are probability-of-win maximizers or vote-share maximizers, or for a small set of functions that for each voter define the probability of voting for each candidate, given candidate policy positions. At the same time, a pure-strategy local Nash equilibrium (in which the candidates do not necessarily choose identical positions) exists for a large set of such functions. Hence, if the candidate payoffs are unrestricted, the "mean voter theorem" for probabilistic voting models holds only for a small set of probability-of-vote functions.

This paper develops a novel approach to modeling preferences in monopolistic competition models with a continuum of goods. In contrast to the commonly used constant elasticity of substitution preferences, which do not capture the effects of consumer income and the intensity of competition on equilibrium prices, the preferences proposed here capture both effects. The relationship between consumers' incomes and product prices is then analyzed for two cases: with and without income heterogeneity.

A problem of axiomatic construction of a social decision function is studied for the case when individual opinions of agents are given as *m*-graded preferences with arbitrary integer *m* ≥ 3. It is shown that the only rule satisfying the introduced axioms of Pairwise Compensation, Pareto Domination and Noncompensatory Threshold and Contraction is the threshold rule.
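A hedged sketch of the threshold rule as it is usually stated for *m*-graded preferences: alternatives are compared lexicographically by their counts of worst grades, then second-worst grades, and so on. The interface is illustrative, and the paper's exact axiomatic formulation is not reproduced here:

```python
def threshold_ranking(grades, m):
    """Rank alternatives best-first by the threshold rule.

    grades: dict alternative -> list of integer grades in 1..m (1 = worst).
    An alternative with fewer worst grades is ranked higher; ties are
    broken by the count of second-worst grades, etc., lexicographically.
    """
    def count_vector(alt):
        # counts of grades 1..m-1; the count of top grades is implied
        return [grades[alt].count(j) for j in range(1, m)]
    return sorted(grades, key=count_vector)

# three agents grade three alternatives on a 3-point scale
grades = {"a": [3, 3, 1], "b": [2, 2, 2], "c": [3, 2, 2]}
print(threshold_ranking(grades, 3))  # ['c', 'b', 'a']
```

Here "a" loses despite two top grades because it received a worst grade, illustrating the noncompensatory character of the rule: no number of high grades offsets a single lowest grade.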

We consider the calculation of Nitzan-Kelly’s manipulability index in the impartial anonymous and neutral culture (IANC) model. We provide a new theoretical study of this model and an estimation for the maximal difference between manipulability indices in the IANC model and a basic model, the impartial culture (IC). The asymptotic behavior of this difference is studied with the help of the impartial anonymous culture (IAC) model. It is shown that the difference between the IAC and IANC models tends to zero as the number of alternatives or the number of voters grows. These results hold for any other probabilistic measure that is anonymous and neutral. Finally, we calculate Nitzan-Kelly’s index in the IANC model for four social choice rules and compare it with the IC model.
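For illustration, a brute-force sketch of the Nitzan-Kelly manipulability index for the plurality rule under the impartial culture (IC): the share of profiles at which at least one voter can, by misreporting, obtain a winner she strictly prefers. The alphabetical tie-breaking is an assumption of this sketch, not taken from the paper, and the enumeration is feasible only for very small numbers of voters and alternatives:

```python
from itertools import permutations, product

def plurality_winner(profile, alternatives):
    """Plurality winner; ties broken in favor of the earliest alternative."""
    scores = {a: 0 for a in alternatives}
    for ranking in profile:
        scores[ranking[0]] += 1
    best = max(scores.values())
    return min(a for a in alternatives if scores[a] == best)

def nitzan_kelly_index(n, alternatives):
    """Share of IC profiles (all linear orders equiprobable and independent)
    at which some voter has a profitable misreport under plurality."""
    orders = list(permutations(alternatives))
    manipulable = total = 0
    for profile in product(orders, repeat=n):
        total += 1
        sincere = plurality_winner(profile, alternatives)
        if any(
            truth.index(plurality_winner(
                profile[:i] + (lie,) + profile[i + 1:], alternatives))
            < truth.index(sincere)
            for i, truth in enumerate(profile)
            for lie in orders
        ):
            manipulable += 1
    return manipulable / total

print(nitzan_kelly_index(2, ("a", "b", "c")))
```

With two alternatives plurality is strategy-proof, so the index is zero; with three alternatives it is already positive even for two voters. Passing to the IANC model amounts to counting over equivalence classes of profiles under renaming voters and alternatives rather than over raw profiles.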

The paper offers new results about the probabilities of single-peaked preference profiles according to the impartial culture, impartial anonymous culture, impartial anonymous neutral culture, uniform culture, dual culture, and maximal culture assumptions. Two new probabilistic assumptions are studied. The uniform plurality culture assumption developed in the paper preserves uniformly distributed plurality votes, and it is simpler to work with than the other culture assumptions. The case of voter abstention is also discussed.
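As a minimal illustration of the single-peaked domain studied above, a sketch that checks whether a linear order is single-peaked with respect to a given left-to-right axis, using the standard characterization that the top-*k* alternatives must form a contiguous interval on the axis for every *k*. The interface is illustrative:

```python
def is_single_peaked(ranking, axis):
    """True iff `ranking` (a list of alternatives, best first) is
    single-peaked with respect to `axis` (alternatives in left-to-right
    order): each successive alternative in the ranking must extend the
    axis interval covered so far by exactly one position."""
    pos = {a: i for i, a in enumerate(axis)}
    lo = hi = pos[ranking[0]]          # the peak
    for alt in ranking[1:]:
        p = pos[alt]
        if p == lo - 1:
            lo = p                     # extend interval to the left
        elif p == hi + 1:
            hi = p                     # extend interval to the right
        else:
            return False               # a gap: not single-peaked
    return True

print(is_single_peaked([2, 3, 1, 4], [1, 2, 3, 4]))  # True
print(is_single_peaked([1, 4, 2, 3], [1, 2, 3, 4]))  # False
```

The probabilities the paper computes are, for each culture assumption, the chance that every voter's order in a random profile passes a test of this kind for a common axis.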