Publications in this section: 6
Article
Gurvich V., Boros E., Manthey B. et al. Algorithmica. 2018. Vol. 80. No. 11. P. 3132-3157.

We consider two-player zero-sum stochastic mean payoff games with perfect information. We show that any such game, with a constant number of random positions and polynomially bounded positive transition probabilities, admits a polynomial time approximation scheme, both in the relative and absolute sense.

Added: Nov 2, 2017
Article
Babenko M. A. Algorithmica. 2012. Vol. 64. No. 3. P. 362-383.

Given a digraph $G = (V_G, A_G)$, an even factor $M \subseteq A_G$ is a set of arcs formed by node-disjoint paths and even cycles. Even factors in digraphs were introduced by Geelen and Cunningham and generalize path matchings in undirected graphs. Finding an even factor of maximum cardinality in a general digraph is known to be NP-hard, but for the class of odd-cycle symmetric digraphs the problem is polynomially solvable. So far the only combinatorial algorithm known for this task is due to Pap; its running time is $O(n^4)$ (hereinafter $n$ denotes the number of nodes in $G$ and $m$ denotes the number of arcs or edges). In this paper we introduce a novel sparse recovery technique and devise an $O(n^3 \log n)$-time algorithm for finding a maximum cardinality even factor in an odd-cycle symmetric digraph. Our technique also applies to other similar problems, e.g. finding a maximum cardinality square-free simple $b$-matching.
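The definition in this abstract can be checked directly. The sketch below is illustrative only and not the paper's algorithm; `is_even_factor` is a name introduced here. It verifies that an arc set gives every node in- and out-degree at most one (so the arcs decompose into node-disjoint paths and cycles) and that every cycle has even length.

```python
# Minimal check of the even-factor definition (not the paper's algorithm):
# M is an even factor iff each node has in- and out-degree at most one in M
# and every cycle formed by M has even length.

def is_even_factor(nodes, arcs):
    succ, pred = {}, {}
    for u, v in arcs:
        if u in succ or v in pred:        # some node would get degree > 1
            return False
        succ[u] = v
        pred[v] = u

    seen = set()
    # Trace maximal paths, starting from nodes with no incoming arc in M.
    for start in nodes:
        if start in seen or start in pred:
            continue
        u = start
        while u in succ and u not in seen:
            seen.add(u)
            u = succ[u]
        seen.add(u)

    # Everything left with an outgoing arc lies on a cycle; check parity.
    for start in nodes:
        if start in seen or start not in succ:
            seen.add(start)
            continue
        length, u = 0, start
        while u not in seen:
            seen.add(u)
            u = succ[u]
            length += 1
        if length % 2 != 0:               # odd cycle: not an even factor
            return False
    return True
```

For instance, a single path or a 2-cycle passes, while a directed triangle (an odd cycle) fails.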

Added: Jan 27, 2013
Article
Bliznets Ivan, Fomin F., Pilipczuk M. et al. Algorithmica. 2016. Vol. 76. No. 2. P. 569-594.

We prove that in a graph with $n$ vertices, induced chordal and interval subgraphs with the maximum number of vertices can be found in time $2^{\lambda n}$ for some $\lambda < 1$. These are the first algorithms breaking the trivial $2^n \mathrm{poly}(n)$ bound of brute-force search for these problems.
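For contrast, the trivial brute force mentioned in the abstract can be sketched as follows. This is an assumption-laden illustration, not the paper's algorithm: `is_chordal` and `max_induced_chordal` are names introduced here, and chordality is tested by greedily eliminating simplicial vertices (a graph is chordal iff such an elimination ordering exists).

```python
from itertools import combinations

def is_chordal(vertices, edges):
    """Chordality test via perfect elimination: repeatedly remove a
    simplicial vertex (one whose remaining neighbourhood is a clique)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(vertices)
    while remaining:
        for v in list(remaining):
            nbrs = adj[v] & remaining
            if all(b in adj[a] for a, b in combinations(nbrs, 2)):
                remaining.discard(v)      # v is simplicial: eliminate it
                break
        else:
            return False                  # no simplicial vertex left
    return True

def max_induced_chordal(vertices, edges):
    """The trivial 2^n poly(n) brute force the paper improves upon:
    try all vertex subsets, largest first."""
    vs = list(vertices)
    for r in range(len(vs), 0, -1):
        for subset in combinations(vs, r):
            s = set(subset)
            sub = [(u, v) for u, v in edges if u in s and v in s]
            if is_chordal(subset, sub):
                return list(subset)
    return []
```

On a 4-cycle, which is not chordal, the search returns a 3-vertex induced path.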

Added: Oct 26, 2018
Article
Bliznets Ivan, Fomin F., Golovach P. et al. Algorithmica. 2017. Vol. 79. No. 3. P. 798-813.

In the Shortest Superstring problem we are given a set of strings $S = \{s_1, \ldots, s_n\}$ and an integer $\ell$, and the question is to decide whether there is a superstring $s$ of length at most $\ell$ containing all strings of $S$ as substrings. We obtain several parameterized algorithms and complexity results for this problem. In particular, we give an algorithm which in time $2^{\mathcal{O}(k)} \mathrm{poly}(n)$ finds a superstring of length at most $\ell$ containing at least $k$ strings of $S$. We complement this with a lower bound showing that such a parameterization does not admit a polynomial kernel under a standard complexity-theoretic assumption. We also obtain several results about “below guaranteed values” parameterizations of the problem. We show that parameterization by compression admits a polynomial kernel, while parameterization “below matching” is hard.
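To make the problem statement concrete, here is a hedged brute-force sketch for tiny instances, not the paper's parameterized algorithm; `merge` and `shortest_superstring` are names introduced here. It tries every ordering of the input strings and, within an ordering, merges consecutive strings on their longest overlap.

```python
from itertools import permutations

def merge(a, b):
    """Append b to a using the longest suffix-of-a / prefix-of-b overlap."""
    if b in a:
        return a
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return a + b

def shortest_superstring(strings):
    """Exact n!-time brute force over string orders; fine only for tiny inputs."""
    best = None
    for order in permutations(strings):
        s = ""
        for t in order:
            s = merge(s, t)
        if best is None or len(s) < len(best):
            best = s
    return best
```

For example, `["abc", "bcd"]` merges into the superstring `"abcd"` of length 4 rather than the naive concatenation of length 6.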

Added: Oct 29, 2018
Article
Buhrman H., Torenvliet L., Unger F. et al. Algorithmica. 2019. Vol. 81. No. 1. P. 179-200.

It is well known that the class of sets that can be computed by polynomial size circuits is equal to the class of sets that are polynomial time reducible to a sparse set. It is widely believed, but so far unproven, that there are sets in EXP^NP, or even in EXP, that are not computable by polynomial size circuits and hence are not reducible to a sparse set. In this paper we study this question in a more restricted setting: what is the computational complexity of sparse sets that are selfreducible? It follows from earlier work of Lozano and Torán (in: Mathematical Systems Theory, 1991) that EXP^NP does not have sparse selfreducible hard sets. We define a natural version of selfreduction, tree-selfreducibility, and show that NEXP does not have sparse tree-selfreducible hard sets. We also construct an oracle relative to which all of EXP is reducible to a sparse tree-selfreducible set. These lower bounds are corollaries of more general results about the computational complexity of sparse sets that are selfreducible, and can be interpreted as super-polynomial circuit lower bounds for NEXP.

Added: Sep 12, 2018
Article
Brody J., Buhrman H., Koucký M. et al. Algorithmica. 2016. Vol. 76. No. 3. P. 749-781.

Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that uses only public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully general direct-sum theorems will result.

Added: Mar 2, 2016