Optimal distributed convex optimization on slowly time-varying graphs
We study optimal distributed first-order optimization algorithms when the network (i.e., the communication constraints between the agents) changes with time. This problem is motivated by scenarios in which agents experience network malfunctions. We provide a sufficient condition that guarantees a convergence rate with optimal (up to logarithmic terms) dependencies on the network and function parameters if the network changes are constrained to a small percentage α of the total number of iterations. We call such networks slowly time-varying networks. Moreover, we show that Nesterov's method has an iteration complexity of Ω((√(κ_Φ · χ̃) + α log(κ_Φ · χ̃)) log(1/ε)) for decentralized algorithms, where κ_Φ is the condition number of the objective function and χ̃ is a worst-case bound on the condition number of the sequence of communication graphs. Additionally, we provide an explicit upper bound on α in terms of the condition number of the objective function and the network topologies.
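As a rough illustration (not part of the paper itself), the expression inside the Ω(·) bound can be evaluated numerically to see how the iteration count scales with the problem parameters; all numeric values below are hypothetical and absolute constants are omitted.

```python
import math

def iteration_bound(kappa_phi: float, chi: float, alpha: float, eps: float) -> float:
    """Evaluate (sqrt(kappa_phi * chi) + alpha * log(kappa_phi * chi)) * log(1/eps),
    the expression inside the Omega(.) lower bound (constants omitted).

    kappa_phi : condition number of the objective function
    chi       : worst-case bound on the condition number of the graph sequence
    alpha     : fraction of iterations at which the network may change
    eps       : target accuracy
    """
    product = kappa_phi * chi
    return (math.sqrt(product) + alpha * math.log(product)) * math.log(1.0 / eps)

# Hypothetical values: a moderately ill-conditioned objective and
# moderately well-connected communication graphs.
print(iteration_bound(kappa_phi=100.0, chi=10.0, alpha=0.05, eps=1e-6))
```

Note that for small α the α log(κ_Φ · χ̃) term is dominated by √(κ_Φ · χ̃), which is consistent with the claim that slowly time-varying networks retain the optimal dependence on the function and network parameters.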