An accelerated exact distributed first-order algorithm for optimization over directed networks

Institution: 1. Department of Computer Science, Xinzhou Teachers University, Xinzhou 034000, PR China; 2. Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, College of Electronic and Information Engineering, Southwest University, Chongqing 400715, PR China; 3. Westa College, Southwest University, Chongqing 400715, PR China; 4. Department of Mathematics and Statistics, Curtin University, Perth, WA 6845, Australia; 5. School of Mathematical Sciences, Chongqing Normal University, Chongqing 400047, China

Abstract: Distributed optimization over networked agents has emerged as a powerful paradigm for large-scale control, optimization, and signal-processing problems. In recent years, distributed first-order gradient methods have seen significant progress owing to the simplicity of using only the first derivatives of the local functions. This work develops an exact first-order algorithm for distributed optimization over general directed networks that requires only row-stochastic weight matrices. It employs a rescaling-gradient technique to correct the unbalanced information diffusion among agents, so that the weights on received information can be assigned arbitrarily. Moreover, uncoordinated step-sizes are employed to enhance the autonomy of individual agents, and an error-compensation term and a heavy-ball momentum term are incorporated to accelerate convergence. A linear convergence rate is rigorously proven for strongly convex objective functions with Lipschitz-continuous gradients, and explicit upper bounds on the step-size and momentum parameter are provided. Finally, simulations illustrate the performance of the proposed algorithm.
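To make the abstract's ingredients concrete, the following is a minimal sketch of a rescaling-gradient, gradient-tracking scheme with heavy-ball momentum over a directed ring whose weight matrix is only row-stochastic. This is an illustrative toy on scalar quadratics, not the paper's exact algorithm: the network, the local data `q` and `b`, and the parameters `alpha` and `beta` are all assumptions chosen for the demo (uniform here, whereas the paper allows uncoordinated step-sizes).

```python
import numpy as np

# Directed ring: agent i hears only from its predecessor, with arbitrary
# (row-stochastic, not doubly-stochastic) weights.
n = 4
s = np.array([0.5, 0.6, 0.7, 0.8])     # self-weights (arbitrary choice)
A = np.diag(s)
for i in range(n):
    A[i, (i - 1) % n] = 1.0 - s[i]     # each row sums to 1

q = np.array([1.0, 2.0, 1.5, 0.5])     # local curvatures (assumed data)
b = np.array([1.0, -2.0, 3.0, 0.5])    # local minimizers (assumed data)
x_star = q @ b / q.sum()               # minimizer of sum_i q_i * (x - b_i)^2 / 2

grad = lambda x: q * (x - b)           # stacked local gradients

x = np.zeros(n)                        # agents' current estimates
x_prev = x.copy()
Y = np.eye(n)                          # row i: agent i's Perron-eigenvector estimate
z = grad(x)                            # gradient trackers ([Y_0]_ii = 1)
alpha, beta = 0.03, 0.1                # step-size and heavy-ball momentum (assumed)

for _ in range(5000):
    # Mix with neighbors, descend along the tracked gradient, add momentum.
    x_new = A @ x - alpha * z + beta * (x - x_prev)
    Y_new = A @ Y                      # eigenvector estimation: Y_k -> 1 * pi^T
    # Track the sum of local gradients, each rescaled by the agent's own
    # eigenvector entry to undo the unbalanced diffusion of a row-stochastic A.
    z = A @ z + grad(x_new) / np.diag(Y_new) - grad(x) / np.diag(Y)
    x_prev, x, Y = x, x_new, Y_new

print(np.max(np.abs(x - x_star)))      # residual: all agents close to x*
```

The rescaling step is the key point: with a row-stochastic `A`, plain gradient tracking converges to a Perron-weighted optimum, and dividing each local gradient by the agent's own estimated eigenvector entry restores exact convergence to the unweighted optimum.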
| |
Keywords:
This document is indexed in ScienceDirect and other databases.
|