Book chapter
EGuaranteeNash for Boolean Games Is NEXP-Hard
Boolean games are an expressive and natural formalism through which to investigate problems of strategic interaction in multiagent systems. Although they have been widely studied, almost all previous work on Nash equilibria in Boolean games has focused on the restricted setting of pure strategies. This is a shortcoming: every finite game is guaranteed to have at least one equilibrium in mixed strategies, yet many simple games have no pure strategy equilibrium at all. We address this by showing that a natural decision problem about mixed equilibria is NEXP-hard: deciding whether a Boolean game has a mixed strategy equilibrium that guarantees every player a given payoff. Accordingly, the epsilon variety of the problem is NEXP-complete. The proof can be adapted to show coNEXP-hardness of a similar question: whether all Nash equilibria of a Boolean game guarantee every player at least the given payoff.
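For intuition on the gap between pure and mixed equilibria, matching pennies can be cast as a two-player Boolean game: player 0 controls variable x and wants x ↔ y, player 1 controls y and wants the opposite. The Python sketch below (our own illustration, not from the chapter) verifies by brute force that no pure profile is stable, while mixing each variable uniformly is an equilibrium guaranteeing each player expected payoff 1/2.

```python
from itertools import product

# Matching pennies as a two-player Boolean game:
# player 0 sets x, player 1 sets y.
# goal_0 = (x == y), goal_1 = (x != y); payoff 1 if the goal holds, else 0.
GOALS = [lambda x, y: x == y, lambda x, y: x != y]

def payoff(i, x, y):
    return 1 if GOALS[i](x, y) else 0

# No pure strategy Nash equilibrium: in every profile some player
# can flip their own variable and satisfy their goal.
for x, y in product([0, 1], repeat=2):
    p0_deviates = payoff(0, 1 - x, y) > payoff(0, x, y)
    p1_deviates = payoff(1, x, 1 - y) > payoff(1, x, y)
    assert p0_deviates or p1_deviates
print("no pure equilibrium")

# The uniform mixed profile (each variable true with prob. 1/2) is an
# equilibrium: every pure deviation yields the same expected payoff 1/2.
def expected(i, p_x, p_y):
    return sum(
        (p_x if x else 1 - p_x) * (p_y if y else 1 - p_y) * payoff(i, x, y)
        for x, y in product([0, 1], repeat=2)
    )

for x in (0, 1):  # pure deviations of player 0 against y ~ Bernoulli(1/2)
    assert expected(0, x, 0.5) <= expected(0, 0.5, 0.5)
for y in (0, 1):  # pure deviations of player 1 against x ~ Bernoulli(1/2)
    assert expected(1, 0.5, y) <= expected(1, 0.5, 0.5)
print("uniform mixing is an equilibrium, value 1/2 each")
```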
In book
This book constitutes the proceedings of the 13th International Computer Science Symposium in Russia, CSR 2018, held in Moscow, Russia, in May 2018.
The 24 full papers presented together with 7 invited lectures were carefully reviewed and selected from 42 submissions. The papers cover a wide range of topics such as algorithms and data structures; combinatorial optimization; constraint solving; computational complexity; cryptography; combinatorics in computer science; formal languages and automata; algorithms for concurrent and distributed systems; networks; and proof theory and applications of logic to computer science.
M. Rabin's principle asserts that the depth of any algebraic decision tree recognizing a closed orthant in ℝ^n is no less than n. Using the techniques of Newton polyhedra, we give the shortest possible proof of this fact, extend it to arbitrary collections of open or closed orthants, and apply it to trees distinguishing real polynomials having at least l real roots.
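For reference, a minimal LaTeX statement of the principle as we read it from the abstract (the choice of the nonnegative orthant is ours; any closed orthant works by symmetry):

```latex
% Our formalization, following the abstract's wording.
\begin{theorem}[Rabin's principle]
Let $T$ be an algebraic decision tree over $\mathbb{R}^n$ recognizing the
closed orthant
\[
  O = \{\, x \in \mathbb{R}^n : x_1 \ge 0,\ \dots,\ x_n \ge 0 \,\}.
\]
Then $\operatorname{depth}(T) \ge n$.
\end{theorem}
```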
This book constitutes the refereed proceedings of the 22nd Annual European Symposium on Algorithms, ESA 2014, held in Wrocław, Poland, in September 2014, as part of ALGO 2014. The 69 revised full papers presented were carefully reviewed and selected from 269 initial submissions: 57 out of 221 in Track A, Design and Analysis, and 12 out of 48 in Track B, Engineering and Applications. The papers present original research in the areas of design and mathematical analysis of algorithms; engineering, experimental analysis, and real-world applications of algorithms and data structures.
We investigate regular realizability (RR) problems: the problems of verifying whether the intersection of a regular language (the input of the problem) and a fixed language, called a filter, is non-empty. In this paper we focus on the case of context-free filters. The algorithmic complexity of an RR problem is a very coarse measure of the complexity of context-free languages; this characteristic is compatible with rational dominance. We present examples of P-complete RR problems as well as examples of RR problems in the class NL. We also discuss RR problems with context-free filters that might have intermediate complexity; possible candidates are the languages with polynomially bounded rational indices.
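A single RR instance with a context-free filter can be decided with the classical Bar-Hillel product construction: intersect the input NFA with a Chomsky-normal-form grammar for the filter and test the result for emptiness. The Python sketch below (our own illustration; the grammar and automaton are hypothetical examples, not from the paper) computes, by a fixpoint, all "generating" triples (p, A, q) such that nonterminal A derives a word driving the NFA from state p to state q.

```python
def rr_nonempty(binary, lexical, start_sym, delta, q0, finals):
    """Is L(NFA) ∩ L(grammar) non-empty?  Grammar in Chomsky normal form:
    binary  = {(A, (B, C)), ...}   rules A -> B C
    lexical = {(A, a), ...}        rules A -> a
    delta   = {(state, symbol): set_of_states}  NFA transitions."""
    # Base: (p, A, q) whenever A -> a and the NFA steps p -a-> q.
    gen = {(p, A, q)
           for (A, a) in lexical
           for (p, c), qs in delta.items() if c == a
           for q in qs}
    # Fixpoint: (p, A, r) whenever A -> B C, (p, B, q) and (q, C, r) hold.
    changed = True
    while changed:
        changed = False
        for A, (B, C) in binary:
            for (p, X, q) in list(gen):
                if X != B:
                    continue
                for (q2, Y, r) in list(gen):
                    if q2 == q and Y == C and (p, A, r) not in gen:
                        gen.add((p, A, r))
                        changed = True
    return any((q0, start_sym, f) in gen for f in finals)

# Fixed filter { a^n b^n : n >= 1 } in CNF:
#   S -> A T | A B,  T -> S B,  A -> 'a',  B -> 'b'
binary = {("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B"))}
lexical = {("A", "a"), ("B", "b")}

# Input NFA (hypothetical example) for a+b+:  0 -a-> 1 -a-> 1 -b-> 2 -b-> 2.
delta = {(0, "a"): {1}, (1, "a"): {1}, (1, "b"): {2}, (2, "b"): {2}}
print(rr_nonempty(binary, lexical, "S", delta, 0, {2}))  # True: "ab" works
```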
The problem of quickly detecting central nodes in large networks is studied. There are many measures for evaluating the topological importance of the nodes of a network. Unfortunately, most of them cannot be applied to large networks because of their high computational complexity. However, if we first reduce the initial network and apply these centrality measures to the resulting sparse network, the obtained set of central nodes may be similar to the set of central nodes of the original large network. If these sets are similar, centrality measures with high computational complexity can still be used for central node detection in large networks. To test the idea, several random networks were generated and different network reduction techniques were considered. We also adapted some rules from social choice theory for key node detection. As a result, we show how the initial network should be reduced in order to apply centrality measures with high computational complexity while maintaining the set of key nodes of a large network.
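A minimal sketch of this pipeline, assuming networkx and a simple k-core reduction (our choice of reduction; the paper's techniques may differ): run an expensive centrality measure on the reduced graph and compare its top-k node set with that of the full graph via Jaccard similarity.

```python
import networkx as nx

def top_k(centrality, k=10):
    """Return the k nodes with the highest centrality scores."""
    return set(sorted(centrality, key=centrality.get, reverse=True)[:k])

# Toy stand-in for a large network (hypothetical example).
G = nx.barabasi_albert_graph(n=2000, m=3, seed=1)

# Reduction step: keep only the dense core (one possible technique).
core = nx.k_core(G, k=5)

# Expensive centrality measure, run on the reduced network only.
central_reduced = top_k(nx.betweenness_centrality(core))

# Ground truth on the full network (feasible here, costly in practice).
central_full = top_k(nx.betweenness_centrality(G))

# Jaccard similarity of the two key-node sets.
inter = central_reduced & central_full
union = central_reduced | central_full
print(f"Jaccard similarity of top-10 sets: {len(inter) / len(union):.2f}")
```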
This book constitutes the refereed proceedings of the 23rd Annual Symposium on Combinatorial Pattern Matching, CPM 2012, held in Helsinki, Finland, in July 2012. The 33 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 60 submissions. The papers address issues of searching and matching strings and more complicated patterns such as trees, regular expressions, graphs, point sets, and arrays. The goal is to derive non-trivial combinatorial properties of such structures and to exploit these properties in order to either achieve superior performance for the corresponding computational problems or pinpoint conditions under which searches cannot be performed efficiently. The meeting also deals with problems in computational biology, data compression and data mining, coding, information retrieval, natural language processing, and pattern recognition.
We study the following computational problem: for which values of k can the majority of n bits, MAJ_n, be computed by a depth-two formula each of whose gates computes a majority function of at most k bits? The corresponding computational model is denoted by MAJ_k ∘ MAJ_k. We observe that the minimum value of k for which there exists a MAJ_k ∘ MAJ_k circuit having high correlation with the majority of n bits is Θ(n^{1/2}). We then show that for a randomized MAJ_k ∘ MAJ_k circuit computing the majority of n input bits with high probability on every input, the minimum value of k is n^{2/3+o(1)}. We also show a worst-case lower bound: if a MAJ_k ∘ MAJ_k circuit computes the majority of n bits correctly on all inputs, then k ≥ n^{13/19+o(1)}. This lower bound exceeds the optimal value for randomized circuits and is thus out of reach of purely randomized techniques. For depth-3 circuits we show that a circuit with k = O(n^{2/3}) can compute MAJ_n correctly on all inputs.
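To make the model concrete, here is a small Monte Carlo sketch (our own illustration): it samples a random MAJ_k ∘ MAJ_k circuit, in which every bottom gate reads k uniformly chosen input positions, and estimates its agreement with MAJ_n on random inputs. The wiring, sizes, and sampling scheme are illustrative assumptions, not the constructions from the paper.

```python
import random

def maj(bits):
    """Majority of a list of 0/1 bits (ties broken toward 1)."""
    return 1 if 2 * sum(bits) >= len(bits) else 0

def random_circuit(n, k, width, rng):
    """A MAJ_k ∘ MAJ_k circuit: `width` bottom gates, each reading k random
    input positions; the top gate takes the majority of the bottom gates.
    (Here width <= k, so the top gate is also a MAJ_k gate.)"""
    return [[rng.randrange(n) for _ in range(k)] for _ in range(width)]

def evaluate(circuit, x):
    bottom = [maj([x[i] for i in gate]) for gate in circuit]
    return maj(bottom)

def agreement(n, k, trials=2000, seed=0):
    rng = random.Random(seed)
    circuit = random_circuit(n, k, width=k, rng=rng)
    agree = 0
    for _ in range(trials):
        x = [rng.randrange(2) for _ in range(n)]
        agree += evaluate(circuit, x) == maj(x)
    return agree / trials  # fraction of random inputs where outputs agree

for k in (5, 15, 31):
    print(f"k={k:3d}: agreement with MAJ_101 ~ {agreement(101, k):.3f}")
```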
We consider the coloring problem for hereditary graph classes, i.e., classes of simple unlabeled graphs closed under deletion of vertices. Among the hereditary classes defined by forbidden induced subgraphs with at most four vertices, there are three classes for which the complexity of the problem remains open. For these three open cases, we present polynomial-time approximation algorithms with performance guarantees.
In 1964 Shapley observed that a matrix has a saddle point in pure strategies whenever each of its 2×2 submatrices has one. In contrast, a bimatrix game may have no pure strategy Nash equilibrium (NE) even when every 2×2 subgame has one. Nevertheless, Shapley’s claim can be extended to bimatrix games as follows. We partition all 2×2 bimatrix games into fifteen classes, depending only on the preferences of the two players. A subset t of these classes is called a NE-theorem if a bimatrix game has a NE whenever it contains no subgame from t. We suggest a method to construct all minimal (that is, strongest) NE-theorems, based on the procedure of joint generation of transversal hypergraphs given by a special oracle. By this method we obtain all (six) strongest NE-theorems. Let us remark that the suggested approach, which may be called “math-pattern recognition”, is very general: it allows one to completely characterize an arbitrary “target” in terms of arbitrary “attributes”.
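A small utility in this spirit (our own sketch, not the paper’s method): check whether a bimatrix game has a pure NE, and whether every 2×2 subgame has one. On matching pennies it reports no pure NE; the transversal-hypergraph machinery of the paper goes far beyond this illustration.

```python
from itertools import combinations

def pure_nash_exists(A, B):
    """Does the bimatrix game (A, B) have a pure Nash equilibrium?
    A[i][j] / B[i][j] are the row / column player's payoffs."""
    m, n = len(A), len(A[0])
    for i in range(m):
        for j in range(n):
            row_best = all(A[i][j] >= A[i2][j] for i2 in range(m))
            col_best = all(B[i][j] >= B[i][j2] for j2 in range(n))
            if row_best and col_best:
                return True
    return False

def every_2x2_subgame_has_ne(A, B):
    """Check all 2x2 subgames (pairs of rows x pairs of columns)."""
    m, n = len(A), len(A[0])
    for rows in combinations(range(m), 2):
        for cols in combinations(range(n), 2):
            subA = [[A[i][j] for j in cols] for i in rows]
            subB = [[B[i][j] for j in cols] for i in rows]
            if not pure_nash_exists(subA, subB):
                return False
    return True

# Matching pennies: the full 2x2 game has no pure NE.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(pure_nash_exists(A, B))  # False
```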