The problem of finding the closest stable matrix for a dynamical system has many applications. It is studied for both continuous- and discrete-time systems, and the corresponding optimization problems are formulated for various matrix norms. As a rule, nonconvexity of these formulations does not allow finding their global solutions. In this paper, we analyze positive discrete-time systems. They also suffer from nonconvexity of the stability region, and the problem in the Frobenius norm or in the Euclidean norm remains hard for them. However, it turns out that for certain polyhedral norms, the situation is much better. We show that for distances measured in the max-norm, we can find an exact solution of the corresponding nonconvex projection problems in polynomial time. For the distance measured in the operator $\ell_1$-norm or $\ell_\infty$-norm, the exact solution is also efficiently found. To this end, we develop a modification of the recently introduced spectral simplex method. On the other hand, for all these three norms, we obtain exact descriptions of the region of stability around a given stable matrix. In the case of the max-norm, this can be seen as an extension, to the class of nonnegative matrices, of the Kharitonov theorem, which provides a stability criterion for polynomials with interval coefficients [V. L. Kharitonov, Differ. Uravn., 14 (1978), pp. 2086–2088; K. Panneerselvam and R. Ayyagari, Internat. J. Control Sci. Engrg., 3 (2013), pp. 81–85]. For practical implementation of our technique, we developed a new method for approximating the maximal eigenvalue of a nonnegative matrix. It combines a local quadratic rate of convergence with polynomial-time global performance guarantees.
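The quantity being approximated here is the Perron root (maximal eigenvalue) of a nonnegative matrix. As a point of reference, the classical baseline is power iteration, which converges only linearly; the sketch below shows this standard method, not the authors' quadratically convergent one. The function name and tolerances are illustrative choices.

```python
import numpy as np

def perron_root_power(A, tol=1e-10, max_iter=10_000):
    """Classical power iteration for the Perron root of a nonnegative
    matrix. Linear convergence only; shown as the textbook baseline the
    method described in the text improves upon."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    x = np.ones(n) / n          # positive starting vector
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)
        if lam_new == 0.0:      # iterate hit the kernel: root is 0
            return 0.0
        x = y / lam_new
        if abs(lam_new - lam) < tol * max(1.0, lam_new):
            break
        lam = lam_new
    return x @ (A @ x) / (x @ x)   # Rayleigh quotient estimate
```

For an irreducible nonnegative matrix the iteration converges to the spectral radius from any positive starting vector, by the Perron–Frobenius theorem.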

We generalize the recent invariant polytope algorithm for computing the joint spectral radius and extend it to a wider class of matrix sets. This, in particular, makes the algorithm applicable to sets of matrices that have finitely many spectrum-maximizing products. A criterion of convergence of the algorithm is proved. As an application, we solve two challenging computational open problems. First, we find the regularity of the Butterfly subdivision scheme for various parameters $\omega$. In the “most regular” case $\omega = \frac{1}{16}$, we prove that the limit function has Hölder exponent 2 and its derivative is “almost Lipschitz” with logarithmic factor 2. Second, we compute the Hölder exponent of Daubechies wavelets of high order.
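For orientation, the joint spectral radius of a finite set of matrices can always be bracketed by elementary bounds: for any product $P$ of length $k$, $\rho(P)^{1/k}$ is a lower bound and $\max_P \|P\|^{1/k}$ an upper bound. The brute-force sketch below (a hypothetical helper, exponential in the depth) illustrates the quantity the invariant polytope algorithm computes far more efficiently.

```python
import itertools
import numpy as np

def jsr_bounds(matrices, depth=4):
    """Elementary two-sided bounds on the joint spectral radius of a
    finite matrix set, via exhaustive enumeration of products up to the
    given length. Illustrative only: cost grows exponentially in depth."""
    mats = [np.asarray(M, dtype=float) for M in matrices]
    lower, upper = 0.0, np.inf
    for k in range(1, depth + 1):
        best_rho, best_norm = 0.0, 0.0
        for combo in itertools.product(mats, repeat=k):
            P = np.linalg.multi_dot(combo) if k > 1 else combo[0]
            best_rho = max(best_rho, max(abs(np.linalg.eigvals(P))))
            best_norm = max(best_norm, np.linalg.norm(P, 2))
        lower = max(lower, best_rho ** (1.0 / k))   # rho(P)^(1/k) <= JSR
        upper = min(upper, best_norm ** (1.0 / k))  # JSR <= ||P||^(1/k)
    return lower, upper
```

For the classical pair $\{[[1,1],[0,1]],\,[[1,0],[1,1]]\}$ both bounds already coincide at the golden ratio for small depths, since the product of the two matrices is a spectrum-maximizing product.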

We develop a matrix approach to the Maximal Acyclic Subgraph (MAS) problem by reducing it to finding the closest nilpotent matrix to the matrix of the graph. Using recent results on the closest Schur stable systems and on minimising the spectral radius over special sets of non-negative matrices, we obtain an algorithm for finding an approximate solution of MAS. Numerical results for graphs from $50$ to $1500$ vertices demonstrate its fast convergence and give a rate of approximation larger than $0.6$ in most cases. The same method gives the precise solution for the following weakened version of MAS: find the minimal $r$ such that the graph can be made acyclic by cutting at most~$r$ incoming edges from each vertex. Several modifications are also considered: when each vertex is assigned its own maximal number~$r_i$ of cut edges, and when some edges are ``untouchable''. Some applications are discussed.

We consider the problem of computing the closest stable/unstable nonnegative matrix to a given real matrix. The distance between matrices is measured in the Frobenius norm. The problem is addressed for two types of stability: Schur stability (the matrix is stable if its spectral radius is smaller than one) and Hurwitz stability (the matrix is stable if its spectral abscissa is negative). We show that the closest unstable matrix can always be explicitly found. The problem of computing the closest stable matrix to a nonnegative matrix is hard even if the stable matrix is not constrained to be nonnegative; adding the nonnegativity constraint makes it more difficult still. For the closest stable matrix, we present an iterative algorithm which converges to a local minimum with a linear rate. It is shown that the total number of local minima can be exponential in the dimension. Numerical results and complexity estimates are presented.

We suggest a new approach to finding the maximal and the minimal spectral radii of linear operators from a given compact family of operators which share a common invariant cone (e.g., a family of nonnegative matrices). In the case of families with the so-called product structure, this leads to efficient algorithms for optimizing the spectral radius and for finding the joint and lower spectral radii of the family. Applications to the theory of difference equations and to problems of optimizing the spectral radius of graphs are considered.
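A product-structured family is one in which row $i$ of the matrix may be chosen independently from a finite set $\mathcal{F}_i$. The greedy sketch below illustrates one such optimization loop in the spirit of the spectral simplex method: repeatedly pivot to the row with the largest inner product against the current leading eigenvector. It is a simplified illustration under the assumption that a positive leading eigenvector exists at each step, without the degeneracy handling a full implementation needs; the function name is hypothetical.

```python
import numpy as np

def maximize_spectral_radius(row_sets, max_iter=100):
    """Greedy row pivoting to maximize the spectral radius over a
    product family: row i is drawn from the finite list row_sets[i].
    Simplified sketch assuming a positive leading eigenvector."""
    n = len(row_sets)
    choice = [0] * n                                  # current row indices
    for _ in range(max_iter):
        A = np.array([row_sets[i][choice[i]] for i in range(n)], float)
        vals, vecs = np.linalg.eig(A)
        v = np.abs(vecs[:, int(np.argmax(vals.real))].real)  # leading eigvec
        improved = False
        for i in range(n):
            scores = [np.dot(r, v) for r in row_sets[i]]
            best = int(np.argmax(scores))
            if scores[best] > scores[choice[i]] + 1e-12:
                choice[i] = best                      # pivot one row
                improved = True
                break
        if not improved:
            break                                     # local optimality
    A = np.array([row_sets[i][choice[i]] for i in range(n)], float)
    return max(abs(np.linalg.eigvals(A))), choice
```

Each pivot strictly increases the spectral radius, so for finite row sets the loop terminates after finitely many steps.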


We study the location of λ2(A), the second positive eigenvalue of a nonnegative matrix A, i.e., the question of how many positive eigenvalues can be shifted beyond the spectral radius ρ(A) by means of arbitrary changes in the elements of one row. The notion of rank-one correction suggests the nearest generalization, expanding the changes in one row to any matrix of rank one (still keeping the matrix nonnegative). The main theorem limits the eigenvalues so shifted, counting multiplicities, to the increased spectral radius alone. In matrix population models, we treat the projection matrix L = T + F as the rank-one correction of its transition part T by the fertility part F. The matrix T is column substochastic due to its demographic interpretation, hence we conclude that λ2(L) ≤ 1 and specify the rare cases where λ2(L) = 1. The location λ2(L) < 1 ensures that the function R(L) = 1 − det(I − L) has the indicator property, namely, its value is always located on the same side of 1 as is ρ(L). This indicator does not pose any computational problems and helps calibrate L from empirical data.
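The indicator R(L) = 1 − det(I − L) is a single determinant, so it is cheap to evaluate. A minimal sketch (the function name is an illustrative choice; matrix entries in the test are made up for demonstration, not demographic data):

```python
import numpy as np

def reproduction_indicator(L):
    """R(L) = 1 - det(I - L). Per the text, when lambda_2(L) < 1 this
    value lies on the same side of 1 as the spectral radius rho(L),
    so comparing R(L) with 1 reveals whether rho(L) exceeds 1."""
    L = np.asarray(L, dtype=float)
    return 1.0 - np.linalg.det(np.eye(L.shape[0]) - L)
```

Thus, instead of computing ρ(L) itself, one can check growth versus decline of the modeled population by comparing R(L) with 1.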