### Article

## An improved optimization algorithm for ad allocation in sponsored search, and experimental results

This paper describes a modified version of an algorithm previously proposed by the authors for optimizing the allocation of ads in sponsored search on the main page of results returned by a web search engine in response to user queries. It is demonstrated that the modification appreciably shortens the search procedure and reduces the algorithm's complexity. Finally, the new algorithm is tested experimentally on real data sets provided by Yandex. © 2015, Pleiades Publishing, Ltd.

This book constitutes the proceedings of the 13th International Computer Science Symposium in Russia, CSR 2018, held in Moscow, Russia, in May 2018.

The 24 full papers presented together with 7 invited lectures were carefully reviewed and selected from 42 submissions. The papers cover a wide range of topics such as algorithms and data structures; combinatorial optimization; constraint solving; computational complexity; cryptography; combinatorics in computer science; formal languages and automata; algorithms for concurrent and distributed systems; networks; and proof theory and applications of logic to computer science.

It is well-known that the class of sets that can be computed by polynomial size circuits is equal to the class of sets that are polynomial time reducible to a sparse set. It is widely believed, but unfortunately up to now unproven, that there are sets in $\mathrm{EXP}^{\mathrm{NP}}$, or even in EXP, that are not computable by polynomial size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are *selfreducible*? It follows from earlier work of Lozano and Torán (in: Mathematical systems theory, 1991) that $\mathrm{EXP}^{\mathrm{NP}}$ does not have sparse selfreducible hard sets. We define a natural version of selfreduction, tree-selfreducibility, and show that NEXP does not have sparse tree-selfreducible hard sets. We also construct an oracle relative to which all of EXP is reducible to a sparse tree-selfreducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse sets that are selfreducible, and can be interpreted as super-polynomial circuit lower bounds for NEXP.
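For orientation, the two notions used above can be stated as follows; this is a sketch of the usual textbook definitions, not the paper's exact formalization, and the paper's tree-selfreducibility is a restricted variant of the second notion.

```latex
% Sparseness: at most polynomially many strings of each length.
\[
  S \subseteq \{0,1\}^{*} \text{ is sparse} \iff
  \exists\, \text{polynomial } p\ \ \forall n:\ \bigl|S \cap \{0,1\}^{n}\bigr| \le p(n).
\]
% Selfreducibility (informal): membership of x is decided by a polynomial-time
% oracle machine M that queries only strictly shorter strings of S itself.
\[
  x \in S \iff M^{S_{<|x|}}(x) \text{ accepts}, \qquad
  S_{<|x|} = \{\, y \in S : |y| < |x| \,\}.
\]
```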

In recent years there has been considerable interest in the analysis of communities and complex networks, and identifying the most important elements of such networks is one of the main areas of research. However, the heterogeneity of real networks makes the problem both important and difficult. SRIC and LRIC indices can be applied to this problem, since they take into account the individual properties of nodes, the possibility of their group influence, and the topological structure of the whole network. However, the computational complexity of these indices needs further consideration. Our main focus is on the performance of SRIC and LRIC indices: we propose several approaches to reducing their computational complexity, and we also give a runtime comparison of the sequential and parallel computation of the proposed models.
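As a generic illustration of the sequential-versus-parallel comparison mentioned above (not the actual SRIC/LRIC computation, which is not defined here), the sketch below times a hypothetical per-node influence score over a random graph, computed with one process and with a worker pool. The `node_influence` function is a placeholder stand-in, and `networkx` is assumed to be available.

```python
import time
from multiprocessing import Pool

import networkx as nx  # assumed available; any graph library would do

_GRAPH = None

def _init(graph):
    # Share the graph with each worker once, instead of pickling it per task.
    global _GRAPH
    _GRAPH = graph

def node_influence(v):
    """Hypothetical per-node score standing in for an SRIC/LRIC-style index:
    total degree of the nodes reachable from v within two hops."""
    reachable = nx.single_source_shortest_path_length(_GRAPH, v, cutoff=2)
    return v, sum(d for _, d in _GRAPH.degree(reachable))

def compute(graph, processes=1):
    nodes = list(graph.nodes)
    if processes == 1:
        _init(graph)
        return dict(map(node_influence, nodes))
    with Pool(processes, initializer=_init, initargs=(graph,)) as pool:
        return dict(pool.map(node_influence, nodes))

if __name__ == "__main__":
    g = nx.gnp_random_graph(2000, 0.01, seed=1)
    for p in (1, 4):
        t0 = time.perf_counter()
        compute(g, processes=p)
        print(f"{p} process(es): {time.perf_counter() - t0:.2f}s")
```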

Mathematical models of distributed computing based on the calculus of mobile processes ($\pi$-calculus) are widely used for checking the information security properties of cryptographic protocols. Since the $\pi$-calculus is Turing-complete, this problem is undecidable in the general case. Therefore, research is carried out only for some special classes of $\pi$-calculus processes with restricted computational capabilities, for example, for non-recursive processes, in which all runs have bounded length, for processes with a bounded number of parallel components, etc. However, even in these cases, the proposed checking procedures are time-consuming. We believe that this is due to the very nature of $\pi$-calculus processes. The goal of this paper is to show that even for the weakest model of a passive adversary and for relatively simple protocols that use only the basic $\pi$-calculus operations, the task of checking the information security properties of these protocols is co-NP-complete.

We consider a computational model known as set automata.

A set automaton is a one-way finite automaton equipped with an additional storage: a set. There are two kinds of set automata, deterministic and nondeterministic, denoted DSA and NSA respectively. The model was introduced by Kutrib et al. in 2014 [2, 3].
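As an illustration of the model (not the authors' formal definition), the following sketch interprets a deterministic set automaton: a one-way automaton that can append symbols to a write buffer and then insert, remove, or test the buffered word in its set storage. The transition encoding and operation names are assumptions made for this sketch.

```python
class DSA:
    """One-way deterministic set automaton (illustrative sketch only)."""

    def __init__(self, start, accepting, delta):
        # delta maps (state, input_symbol) to a pair (action, successor):
        #   ("write", w) -- append the word w to the write buffer
        #   ("in",)      -- insert the buffered word into the set, clear buffer
        #   ("out",)     -- remove the buffered word from the set, clear buffer
        #   ("test",)    -- query the buffered word; here successor is a pair
        #                   (state_if_present, state_if_absent)
        self.start, self.accepting, self.delta = start, accepting, delta

    def accepts(self, word):
        state, buf, stored = self.start, "", set()
        for symbol in word:
            action, succ = self.delta[(state, symbol)]
            if action[0] == "write":
                buf += action[1]
                state = succ
            elif action[0] == "in":
                stored.add(buf)
                buf, state = "", succ
            elif action[0] == "out":
                stored.discard(buf)
                buf, state = "", succ
            else:  # ("test",)
                state = succ[0] if buf in stored else succ[1]
                buf = ""
        return state in self.accepting

# Example: accept words of the form u#u$ over {a, b} by storing the first
# block with "in" and testing the second block at the end marker "$".
delta = {}
for c in "ab":
    delta[("p", c)] = (("write", c), "p")
    delta[("q", c)] = (("write", c), "q")
delta[("p", "#")] = (("in",), "q")
delta[("q", "$")] = (("test",), ("acc", "rej"))

m = DSA(start="p", accepting={"acc"}, delta=delta)
print(m.accepts("ab#ab$"), m.accepts("ab#ba$"))  # True False
```

A nondeterministic set automaton (NSA) would differ only in allowing several alternative transitions per configuration.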

In this paper we characterize the algorithmic complexity of the emptiness and membership problems for set automata. More precisely, we prove that both problems are PSPACE-complete for both kinds of set automata.

The study has been funded by the Russian Academic Excellence Project ‘5-100’. Supported in part by RFBR grants 16-01-00362 and 17-51-10005.

A perfect 2-matching in an undirected graph $G=(V,E)$ is a function $x\colon E \to \{0,1,2\}$ such that for each node $v \in V$ the sum of values $x(e)$ over all edges $e$ incident to $v$ equals 2. If $\mathrm{supp}(x) = \{e \in E \mid x(e) \neq 0\}$ contains no triangles, then $x$ is called triangle-free. Polyhedrally speaking, triangle-free 2-matchings are harder than 2-matchings, but easier than usual 1-matchings. Given edge costs $c\colon E \to \mathbb{R}_+$, a natural combinatorial problem consists in finding a perfect triangle-free 2-matching of minimum total cost. For this problem, Cornuéjols and Pulleyblank devised a combinatorial strongly polynomial algorithm, which can be implemented to run in $O(VE\log V)$ time. (Here we write $V$, $E$ to indicate the cardinalities $|V|$, $|E|$.) If edge costs are integers in the range $[0,C]$, then for both 1- and 2-matchings faster scaling algorithms are known that find optimal solutions within $O(\sqrt{V\alpha(E,V)\log V}\,E\log(VC))$ and $O(\sqrt{V}\,E\log(VC))$ time, respectively, where $\alpha$ denotes the inverse Ackermann function. So far, no efficient cost-scaling algorithm has been known for finding a minimum-cost perfect triangle-free 2-matching. The present paper fills this gap by presenting such an algorithm with time complexity $O(\sqrt{V}\,E\log V\log(VC))$.
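The definition above is easy to check directly. The following sketch verifies, for a given assignment $x$, the conditions stated in the abstract (values in $\{0,1,2\}$, degree sums equal to 2, and a triangle-free support); it is an illustrative checker, not the scaling algorithm of the paper.

```python
from itertools import combinations

def is_perfect_triangle_free_2_matching(nodes, edges, x):
    """Check whether x: E -> {0, 1, 2} is a perfect triangle-free 2-matching."""
    # Every value must lie in {0, 1, 2}.
    if any(x[e] not in (0, 1, 2) for e in edges):
        return False
    # Perfectness: for each node v, the x-values on incident edges sum to 2.
    for v in nodes:
        if sum(x[e] for e in edges if v in e) != 2:
            return False
    # Triangle-freeness of supp(x) = {e : x(e) != 0}.
    support = {frozenset(e) for e in edges if x[e] != 0}
    for a, b, c in combinations(nodes, 3):
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= support:
            return False
    return True

# A 4-cycle with x(e) = 1 on every edge is a perfect triangle-free 2-matching;
# a triangle with x(e) = 1 everywhere meets the degree condition but is rejected
# because its support is a triangle.
square = [(1, 2), (2, 3), (3, 4), (4, 1)]
triangle = [(1, 2), (2, 3), (3, 1)]
print(is_perfect_triangle_free_2_matching([1, 2, 3, 4], square, {e: 1 for e in square}))   # True
print(is_perfect_triangle_free_2_matching([1, 2, 3], triangle, {e: 1 for e in triangle}))  # False
```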

We study the following computational problem: for which values of $k$ can the majority of $n$ bits, $\mathrm{MAJ}_n$, be computed by a depth-two formula in which each gate computes a majority function of at most $k$ bits? The corresponding computational model is denoted by $\mathrm{MAJ}_k \circ \mathrm{MAJ}_k$. We observe that the minimum value of $k$ for which there exists a $\mathrm{MAJ}_k \circ \mathrm{MAJ}_k$ circuit that has high correlation with the majority of $n$ bits is equal to $\Theta(n^{1/2})$. We then show that for a randomized $\mathrm{MAJ}_k \circ \mathrm{MAJ}_k$ circuit computing the majority of $n$ input bits with high probability for every input, the minimum value of $k$ is equal to $n^{2/3+o(1)}$. We show a worst-case lower bound: if a $\mathrm{MAJ}_k \circ \mathrm{MAJ}_k$ circuit computes the majority of $n$ bits correctly on all inputs, then $k \geq n^{13/19+o(1)}$. This lower bound exceeds the optimal value for randomized circuits and thus is unreachable by purely randomized techniques. For depth-3 circuits we show that a circuit with $k = O(n^{2/3})$ can compute $\mathrm{MAJ}_n$ correctly on all inputs.
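To make the model concrete, the sketch below evaluates a depth-two $\mathrm{MAJ}_k \circ \mathrm{MAJ}_k$ circuit: each bottom gate takes the majority of at most $k$ selected input bits, and the top gate takes the majority of the bottom outputs. The tie-breaking convention and the toy wiring are assumptions made for illustration only.

```python
def maj(bits):
    """Majority of a list of 0/1 values (ties resolved to 1, one common convention)."""
    return int(2 * sum(bits) >= len(bits))

def eval_maj_maj(bottom_gates, x):
    """Evaluate a depth-two MAJ_k o MAJ_k circuit on input bits x.

    bottom_gates is a list of index tuples, one per bottom gate; each gate
    computes the majority of the selected input bits (fan-in <= k), and the
    top gate takes the majority of the bottom outputs (fan-in <= k).
    """
    return maj([maj([x[i] for i in gate]) for gate in bottom_gates])

# Toy example (n = 5, k = 3): three bottom gates over overlapping blocks.
x = [1, 0, 1, 1, 0]
gates = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
print(eval_maj_maj(gates, x))  # 1
```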

This book constitutes the refereed proceedings of the 44th International Conference on Current Trends in Theory and Practice of Computer Science, SOFSEM 2018, held in Krems, Austria, in January/February 2018. The 48 papers presented in this volume were carefully reviewed and selected from 97 submissions. They were organized in topical sections named: foundations of computer science; software engineering: advanced methods, applications, and tools; data, information and knowledge engineering; network science and parameterized complexity; model-based software engineering; computational models and complexity; software quality assurance and transformation; graph structure and computation; business processes, protocols, and mobile networks; mobile robots and server systems; automata, complexity, completeness; recognition and generation; optimization, probabilistic analysis, and sorting; filters, configurations, and picture encoding; machine learning; text searching algorithms; and data model engineering.