Article
A heuristic adaptive fast gradient method in stochastic optimization problems
We consider the problem of computing the closest stable/unstable nonnegative matrix
to a given real matrix. The distance between matrices is measured in the Frobenius norm. The
problem is addressed for two types of stability: Schur stability (a matrix is stable if its spectral
radius is smaller than one) and Hurwitz stability (a matrix is stable if its spectral abscissa is
negative). We show that the closest unstable matrix can always be explicitly found. The problem
of computing the closest stable matrix to a nonnegative matrix is a hard problem even if the stable
matrix is not constrained to be nonnegative. Adding the nonnegativity constraint makes the problem
even more difficult. For the closest stable matrix, we present an iterative algorithm which converges
to a local minimum with a linear rate. It is shown that the total number of local minima can be
exponential in the dimension. Numerical results and the complexity estimates are presented.
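As a rough illustration of the two stability notions used above (not the algorithm from the paper), the following sketch checks Schur and Hurwitz stability of a small nonnegative matrix; the helper names and the example matrix are assumptions made for illustration.

```python
# A rough sketch of the two stability notions, assuming NumPy;
# the helper names and the example matrix are made up for illustration.
import numpy as np

def spectral_radius(A):
    """Largest modulus among the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

def spectral_abscissa(A):
    """Largest real part among the eigenvalues of A."""
    return max(np.linalg.eigvals(A).real)

def is_schur_stable(A):
    # Schur stability: spectral radius strictly smaller than one.
    return spectral_radius(A) < 1.0

def is_hurwitz_stable(A):
    # Hurwitz stability: spectral abscissa strictly negative.
    return spectral_abscissa(A) < 0.0

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])   # a nonnegative matrix with eigenvalues 0.5 and 0.1
print(is_schur_stable(A))    # True: spectral radius 0.5 < 1
print(is_hurwitz_stable(A))  # False: spectral abscissa 0.5 > 0
```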
The contents of the book are divided into two parts: deterministic and stochastic models of Operations Research.
The first part, "Deterministic Models of Operations Research", is the basic section, with the emphasis on linear programming.
The second part, "Stochastic Models of Operations Research", includes reliability models and queueing models; this is original material.
The textbook can be useful to students of undergraduate and graduate programs in "Applied Mathematics", "Applied Mathematics and Computer Science", and "Information Systems and Technologies", as well as to graduate students and research and teaching staff who are interested in optimization problems in stochastic models.
We consider the problem of minimizing a strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method with a superlinear convergence rate, the Newton-type incremental method (NIM). The idea of the method is to introduce a model of the objective with the same sum-of-functions structure and to update a single component of the model per iteration. We prove that NIM has a superlinear local convergence rate and a linear global convergence rate. Experiments show that the method is very effective for problems with a large number of functions and a small number of variables.
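A minimal sketch of this sum-of-component-models idea is given below; it is not the authors' NIM implementation, and the function names, the cyclic update order, and the toy regularized least-squares problem are assumptions made for illustration.

```python
# A minimal sketch of the Newton-type incremental idea described above, not the
# authors' exact NIM code. Each component f_i keeps a quadratic model built at a
# stored point z_i, and one component's model is refreshed per iteration; the
# function names, the cyclic update order, and the toy regularized least-squares
# problem are assumptions made for illustration.
import numpy as np

def nim_sketch(grads, hessians, x0, n_iters=100):
    """grads[i](x), hessians[i](x): gradient and Hessian of the i-th component f_i."""
    n = len(grads)
    z = [x0.copy() for _ in range(n)]            # stored points z_i
    H = [hessians[i](z[i]) for i in range(n)]    # stored Hessians at z_i
    g = [grads[i](z[i]) for i in range(n)]       # stored gradients at z_i
    H_sum = sum(H)                               # aggregated model Hessian
    b_sum = sum(H[i] @ z[i] - g[i] for i in range(n))
    x = x0.copy()
    for k in range(n_iters):
        # Minimize the aggregated quadratic model of the whole sum.
        x = np.linalg.solve(H_sum, b_sum)
        # Refresh the model of a single component (cyclic order here).
        i = k % n
        H_new, g_new = hessians[i](x), grads[i](x)
        H_sum += H_new - H[i]
        b_sum += (H_new @ x - g_new) - (H[i] @ z[i] - g[i])
        z[i], H[i], g[i] = x.copy(), H_new, g_new
    return x

# Toy strongly convex test: f_i(x) = 0.5*(a_i^T x - y_i)^2 + (mu/2)*||x||^2.
rng = np.random.default_rng(0)
X, y, mu = rng.normal(size=(20, 3)), rng.normal(size=20), 0.1
grads = [lambda x, a=X[i], yi=y[i]: a * (a @ x - yi) + mu * x for i in range(20)]
hessians = [lambda x, a=X[i]: np.outer(a, a) + mu * np.eye(3) for i in range(20)]
print(nim_sketch(grads, hessians, np.zeros(3), n_iters=200))
```

Because the aggregated Hessian and right-hand side are updated in place, each iteration recomputes the gradient and Hessian of only one component, which mirrors the low per-iteration cost of incremental methods mentioned in the abstract.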