We consider two-person zero-sum stochastic mean payoff games with perfect information, or BWR-games, given by a digraph G = (V, E) with local rewards r : E → ℤ and three types of positions: black (V_B), white (V_W), and random (V_R), forming a partition of V. It is a long-standing open question whether a polynomial-time algorithm for BWR-games exists, even when V_R = ∅. In fact, a pseudo-polynomial algorithm for BWR-games would already imply their polynomial solvability. In this short note, we show that BWR-games can be solved via convex programming in pseudo-polynomial time if the number of random positions is a constant.

For a graph *G* and a positive integer *k*, a subset *C* of vertices of *G* is called a *k*-path vertex cover if *C* intersects all paths of *k* vertices in *G*. The cardinality of a minimum *k*-path vertex cover is denoted by β_{P_k}(*G*). For a graph *G* and a positive integer *k*, a set *M* of pairwise vertex-disjoint paths of *k* vertices in *G* is called a *k*-path packing. The cardinality of a maximum *k*-path packing is denoted by μ_{P_k}(*G*). In this paper, we describe some graphs having equal values of β_{P_k} and μ_{P_k} for *k* ≥ 5, and present polynomial-time algorithms for finding a minimum *k*-path vertex cover and a maximum *k*-path packing in such graphs.

This paper addresses the problem of the insufficient performance of statistical classification with medium-sized databases (thousands of classes). Each object is represented as a sequence of independent segments, and each segment is defined as a random sample of independent features with a distribution of multivariate exponential type. To increase the speed of the optimal Kullback-Leibler minimum information discrimination principle, we apply clustering of the training set and an approximate nearest neighbor search for the input object in the set of cluster medoids. Using the asymptotic properties of the Kullback-Leibler divergence, we propose a maximum likelihood search procedure. In this method, the next medoid to check is selected from the cluster with the maximal joint density (likelihood) of the distances to the previously checked medoids. Experimental results in image recognition with an artificially generated dataset and the Essex facial database show that the proposed approach is much more efficient than exhaustive search and the known approximate nearest neighbor methods from the FLANN and NonMetricSpace libraries.
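
The medoid-selection rule above can be sketched as follows; here a Gaussian joint density stands in for the asymptotic KL-based density used in the paper, and all names and the sigma parameter are illustrative assumptions, not the authors' implementation.

```python
import math

def next_medoid(candidates, checked, dist_to_query, medoid_dist, sigma=1.0):
    """Pick the unchecked medoid whose precomputed distances to the
    already-checked medoids best explain the query's observed distances,
    i.e. maximize a Gaussian joint log-likelihood (a simplified stand-in
    for the asymptotic KL-based density)."""
    best, best_ll = None, -math.inf
    for c in candidates:
        if c in checked:
            continue
        # Joint log-density of the observed distances, assuming each is
        # roughly normal around the stored medoid-to-medoid distance.
        ll = sum(-(dist_to_query[m] - medoid_dist[c][m]) ** 2 / (2 * sigma ** 2)
                 for m in checked)
        if ll > best_ll:
            best, best_ll = c, ll
    return best
```

A candidate whose stored distances closely match the query's distances to the checked medoids is thus preferred, avoiding a full scan of all medoids.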

The task of a complete complexity dichotomy is to clearly distinguish between the easy and hard cases of a given problem on a family of subproblems. We consider this task for some optimization problems restricted to certain classes of graphs closed under deletion of vertices. The solution process is based on revealing the so-called “critical” graph classes, which play an important role in the complexity analysis for the family. Recent progress in studying such classes is presented in the article.

The Cell Formation Problem (CFP) consists in an optimal grouping of the given machines and parts into cells, so that the machines in every cell process as many parts as possible from this cell (intra-cell operations) and as few parts as possible from other cells (inter-cell operations). The grouping efficacy is the objective function for the CFP; it simultaneously maximizes the number of intra-cell operations and minimizes the number of inter-cell operations. Currently there are no exact approaches (known to the authors) for solving the CFP with the grouping efficacy objective. The only exact model which solves the CFP in a restricted formulation is due to Elbenani & Ferland [10]; the restriction consists in fixing the number of production cells. The main difficulty of the CFP is its fractional objective function, the grouping efficacy. In this paper we address this issue for the CFP in its common formulation with a variable number of cells. Our computational experiments are performed on the most popular set of 35 benchmark instances. For 14 of these instances, using CPLEX software, we prove that the best known solutions are exact global optima.
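
For reference, the grouping efficacy of a given cell assignment is the standard ratio τ = n1_in / (n1 + n0_in), where n1 is the total number of operations (ones in the machine-part matrix), n1_in the operations processed inside their own cell, and n0_in the voids (zeros) inside cells. A minimal sketch (function and argument names are illustrative):

```python
def grouping_efficacy(matrix, machine_cell, part_cell):
    """Compute tau = n1_in / (n1 + n0_in) for a 0/1 machine-part matrix
    and a cell index assigned to each machine (row) and part (column)."""
    n1 = n1_in = n0_in = 0
    for i, row in enumerate(matrix):
        for j, a in enumerate(row):
            n1 += a
            if machine_cell[i] == part_cell[j]:  # entry lies inside a cell
                if a:
                    n1_in += 1  # intra-cell operation
                else:
                    n0_in += 1  # void inside a cell
    return n1_in / (n1 + n0_in)
```

The fraction is what makes the objective hard to linearize: both the numerator and the denominator depend on the cell assignment.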

Given a graph, the Edge minimum sum-of-squares clustering problem requires finding *p* prototypes (cluster centres) by minimizing the sum of squared distances from a set of vertices to their nearest prototype, where a prototype can be either a vertex or an inner point of an edge. In this paper we implement a Variable neighborhood search based heuristic for solving it. We consider three different local search procedures: K-means, J-means, and a new I-means heuristic. Experimental results indicate that the implemented VNS-based heuristic produces the best known results in the literature.

We consider the maximum shortest path interdiction problem by upgrading edges on trees under Hamming distance (denoted by (MSPITH)), which has wide applications in transportation networks, network warfare and terrorist networks. The problem (MSPITH) aims to maximize the length of the shortest path from the root of a tree to all its leaves by upgrading edge weights so that the upgrade cost under the sum-Hamming distance is upper-bounded by a given value. We show that the problem (MSPITH) under the weighted sum-Hamming distance is NP-hard. We consider two cases of the problem (MSPITH) under the unit sum-Hamming distance, based on the number K of critical edges. We propose a greedy algorithm running in O(n + l log l) time when K = 1 and a dynamic programming algorithm running in O(n(log n + K^3)) time when K > 1, where n and l are the numbers of nodes and leaves in the tree, respectively. Furthermore, we consider a minimum cost shortest path interdiction problem by upgrading edges on trees under unit Hamming distance, denoted by (MCSPITUH), and propose a binary search algorithm running in O(n^4 log n) time, in which a dynamic programming algorithm is executed in each iteration to solve the corresponding problem (MSPITH). Finally, we present numerical experiments showing the effectiveness of the algorithms.

Let *A* be an \((m \times n)\) integral matrix, and let \(P=\{ x : Ax \le b\}\) be an *n*-dimensional polytope. The width of *P* is defined as \( w(P)=\min \{ \max _{x \in P} x^\top u - \min _{x \in P} x^\top u : u \in \mathbb {Z}^n{\setminus }\{0\} \}\). Let \(\varDelta (A)\) and \(\delta (A)\) denote the greatest and the smallest absolute values of a determinant among all \(r(A) \times r(A)\) sub-matrices of *A*, where *r*(*A*) is the rank of the matrix *A*. We prove that if every \(r(A) \times r(A)\) sub-matrix of *A* has a determinant equal to \(\pm \varDelta (A)\) or 0 and \(w(P)\ge (\varDelta (A)-1)(n+1)\), then *P* contains *n* affinely independent integer points. Additionally, we present similar results for the case of *k-modular* matrices, with the width bound \((k-1)(n+1)\). The matrix *A* is called *totally* *k*-modular if every square sub-matrix of *A* has a determinant in the set \(\{0,\, \pm k^r : r \in \mathbb {N} \}\). When *P* is a simplex and \(w(P)\ge \delta (A)-1\), we describe a polynomial time algorithm for finding an integer point in *P*.

Consider a graph G = (V, E) and a vertex subset A ⊆ V. A vertex v is positive-influence dominated by A if either v is in A or at least half of the neighbors of v belong to A. For a target vertex subset S ⊆ V, a vertex subset A is a positive-influence target-dominating set for the target set S if every vertex in S is positive-influence dominated by A. Given a graph G and a target vertex subset S, the positive-influence target-dominating set (PITD) problem is to find a minimum positive-influence target-dominating set for the target set S. In this paper, we show two results: (1) the PITD problem has a polynomial-time (1 + log_{3/2} Δ)-approximation in general graphs, where Δ is the maximum vertex degree of the input graph; (2) for a target set S with |S| = Ω(|V|), the PITD problem has a polynomial-time O(1)-approximation in power-law graphs.
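
The feasibility side of the definitions above is straightforward to check; a minimal sketch under an adjacency-list representation (names are illustrative):

```python
def pi_dominated(v, A, adj):
    """v is positive-influence dominated by A if v is in A or at least
    half of v's neighbors belong to A (integer comparison avoids floats)."""
    if v in A:
        return True
    nbrs = adj[v]
    return 2 * sum(1 for u in nbrs if u in A) >= len(nbrs)

def is_pitd_set(S, A, adj):
    """A is a positive-influence target-dominating set for target set S
    if every vertex of S is positive-influence dominated by A."""
    return all(pi_dominated(v, A, adj) for v in S)
```

The optimization problem, of course, is to find a smallest such A; the check above is only the verifier one would call inside any heuristic or approximation algorithm.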

We present a method for the synthesis of optimal feedback control of nonlinear systems with a separated linear part under quadratic criteria. The method is based on a special method of successive approximations which, under standard assumptions, allows finding the optimal control over any finite time interval and yields a procedure for its construction. An example is provided in which the method is applied to the synthesis of control of a system similar to Watt's centrifugal governor.

Clustering and search techniques are essential to a wide spectrum of applications. Network clustering techniques are becoming common in the analysis of massive data sets arising in various branches of science, engineering, government and industry. In particular, network clustering and search techniques emerge as an important tool in large-scale networks.

This special issue of Optimization Letters contains refereed selected papers presented at the workshop on clustering and search techniques in large scale networks that took place on November 3–8, 2014 at the Laboratory of Algorithms and Technologies for Networks Analysis (LATNA), Higher School of Economics, in Nizhny Novgorod, Russia. The workshop was supported by the Russian Science Foundation Grant RSF 14-41-00039.

This workshop provided a forum for leading as well as beginning researchers and students to discuss recent advances and to identify current and future challenges in research on clustering and search problems in large networks. The papers of this special issue reflect some of the problems discussed at the workshop.

We would like to thank the authors and reviewers for their valuable work, which made this issue possible.

A review of the book from two perspectives: engineering design and data analysis.

The following special case of the classical NP-hard scheduling problem (Formula presented.) is considered. There is a set of jobs (Formula presented.) with identical processing times (Formula presented.) for all jobs (Formula presented.). All jobs have to be processed on a single machine. The optimization criterion is the minimization of the maximum lateness (Formula presented.). We analyze algorithms for the makespan problem (Formula presented.) presented by Garey et al. (SIAM J Comput 10(2):256–269, 1981) and Simons (A fast algorithm for single processor scheduling. In: 19th Annual Symposium on Foundations of Computer Science, Ann Arbor, 1978), as well as Benson’s algorithm (J Glob Optim 13(1):1–24, 1998), and give two polynomial algorithms to solve the problem under consideration and to construct the Pareto set with respect to the criteria (Formula presented.) and (Formula presented.). The complexity of the presented algorithms is (Formula presented.) and (Formula presented.), respectively, where (Formula presented.) is the accuracy of the input-output parameters.

This paper investigates scheduling problems on a single serial-batching machine with independent setup times and deteriorating jobs. Under the deteriorating-jobs assumption, the job processing times are described by an increasing function of their starting times. All jobs are first partitioned into serial batches and then processed on a single serial-batching machine, with an independent constant setup time required before each batch is processed. Two optimization algorithms are proposed to solve the problems of minimizing the makespan and the total number of tardy jobs, respectively. For the problem of minimizing the total completion time, two special cases with the smallest and the largest number of batches are studied, and an optimization algorithm is also presented for the special case without setup times.

We describe an algorithm for the maximum clique problem that is parameterized by the graph’s degeneracy d. The algorithm runs in O(nm + nT_d) time, where T_d is the time to solve the maximum clique problem in an arbitrary graph on d vertices. The best bound currently known is T_d = O(2^{d/4}), by Robson. This shows that the maximum clique problem is solvable in O(nm) time in graphs for which d ≤ 4 log_2 m + O(1). The analysis of the algorithm’s runtime is simple; the algorithm is easy to implement when given a subroutine for solving maximum clique in small graphs; and it is easy to parallelize. In the case of Bianconi-Marsili power-law random graphs, it runs in 2^{O(√n)} time with high probability. We extend the approach to a graph invariant based on common neighbors, obtaining a second algorithm that has a smaller exponent at the cost of a larger polynomial factor.
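
The degeneracy-based scheme can be sketched as follows: process vertices in a degeneracy ordering and, for each vertex, solve maximum clique among its at most d later neighbors. Brute-force enumeration stands in here for a fast small-graph subroutine such as Robson's algorithm; this is an illustrative sketch, not the paper's implementation.

```python
from itertools import combinations

def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex; the removal order is a
    degeneracy ordering of the graph."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        order.append(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return order

def max_clique(adj):
    """Return the clique number. For each v, search for a clique inside
    v's later neighbors (at most d vertices), largest size first."""
    order = degeneracy_order(adj)
    pos = {v: i for i, v in enumerate(order)}
    best = 1 if adj else 0
    for v in order:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        for size in range(len(later), 0, -1):
            if size + 1 <= best:
                break  # cannot improve on the best clique found so far
            for sub in combinations(later, size):
                if all(b in adj[a] for a, b in combinations(sub, 2)):
                    best = size + 1  # sub plus v itself is a clique
                    break
            else:
                continue
            break
    return best
```

Replacing the inner enumeration with an O(2^{d/4})-time subroutine yields the O(nm + nT_d) bound stated in the abstract.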

The quadratic programming problem is known to be NP-hard for Hessian matrices with only one negative eigenvalue, but it is tractable for convex instances. These facts suggest considering the number of negative eigenvalues as a complexity measure for quadratic programs. We prove here that the clique problem is tractable for two variants of its Motzkin–Straus quadratic formulation with a fixed number of negative eigenvalues (counted with multiplicities).
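
For context, the classical Motzkin–Straus identity underlying the formulation links the clique number to a quadratic program over the standard simplex (the paper studies variants of this program):

```latex
\max_{x \in \Delta_n} x^\top A_G\, x \;=\; 1 - \frac{1}{\omega(G)},
\qquad
\Delta_n = \Bigl\{ x \in \mathbb{R}^n_{\ge 0} : \textstyle\sum_{i=1}^n x_i = 1 \Bigr\},
```

where \(A_G\) is the adjacency matrix of \(G\) and \(\omega(G)\) its clique number; the spectrum of \(A_G\) (in particular its negative eigenvalues) thus directly controls the curvature of the objective.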

The coloring problem is studied in the paper for graph classes defined by two small forbidden induced subgraphs. We prove some sufficient conditions for the efficient solvability of the problem in such classes. As a corollary, we determine the computational complexity of the problem for all sets of two connected forbidden induced subgraphs with at most five vertices, except for 13 explicitly enumerated cases.

In this paper, we show that the weighted vertex coloring problem can be solved in time polynomial in the sum of the vertex weights for {P_5, K_{2,3}, K_{2,3}^+}-free graphs. As a corollary, this fact implies polynomial-time solvability of the unweighted vertex coloring problem for {P_5, K_{2,3}, K_{2,3}^+}-free graphs. As usual, P_5 and K_{2,3} stand, respectively, for the simple path on 5 vertices and the biclique with parts of 2 and 3 vertices; K_{2,3}^+ denotes the graph obtained from K_{2,3} by joining its two degree-3 vertices with an edge.

Increasing the number of computational cores is a primary way of achieving the high performance of contemporary supercomputers. However, developing parallel applications capable of harnessing this enormous number of cores is a challenging task, and it is important to understand the principal limitations on the scalability of parallel applications imposed by the algorithm's structure. The tree search addressed in this paper has an irregular structure that is unknown prior to computation, which makes such algorithms challenging to parallelize, especially on distributed memory systems. In this paper, we propose a parallel tree search algorithm aimed at distributed memory parallel computers. We analyze its scalability and show that it is close to the theoretical maximum.

The vertex colourability problem is to determine, for a given graph and a given natural number *k*, whether it is possible to split the graph's vertex set into at most *k* subsets, each consisting of pairwise non-adjacent vertices. A hereditary class is a set of simple graphs closed under deletion of vertices. Any such class can be defined by the set of its forbidden induced subgraphs. For all but four hereditary classes defined by a pair of connected five-vertex induced prohibitions, the complexity status of the vertex colourability problem is known. In this paper, we reduce the number of open cases to three by showing polynomial-time solvability of the problem for {claw, butterfly}-free graphs. A *claw* is the star graph with three leaves; a *butterfly* is obtained by identifying a vertex of one triangle with a vertex of another triangle.