U. A. Khan, S. Kar, and J. M. F. Moura, "Higher dimensional consensus: Learning in large-scale networks," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2836-2849, May 2010.


Abstract

The paper considers higher dimensional consensus (HDC), a general class of linear distributed algorithms for large-scale networks that generalizes average-consensus and includes other interesting distributed algorithms, such as sensor localization, leader-follower algorithms in multi-agent systems, and the distributed Jacobi algorithm. In HDC, the network nodes are partitioned into 'anchors,' nodes whose states are fixed over the HDC iterations, and 'sensors,' nodes whose states are updated by the algorithm. The paper starts by briefly considering what we call the forward problem, presenting the conditions under which HDC converges, the limiting state to which it converges, and its convergence rate. The main focus of the paper is the inverse, or design, problem, i.e., learning the weights or parameters of the HDC so that the algorithm converges to a desired pre-specified state. This generalizes the well-known problem of designing the weights in average-consensus. We pose learning as a constrained non-convex optimization problem that we cast in the framework of multi-objective optimization (MOP) and to which we apply Pareto optimality. We derive the solution to the learning problem by proving relevant properties satisfied by the MOP solutions and by the Pareto front. Finally, the paper shows how the MOP approach leads to interesting tradeoffs (speed of convergence versus performance) arising in resource-constrained networks. Simulation studies illustrate our approach for a leader-follower architecture in multi-agent systems.
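The anchor/sensor structure described in the abstract can be sketched with a toy linear iteration. This is a minimal illustration, not the paper's algorithm: the network, weight matrices `P` and `B`, and anchor states `u` below are made-up values, chosen only so that the sensor-to-sensor weight matrix has spectral radius below 1 and the iterates converge to the closed-form limit.

```python
import numpy as np

# Toy HDC-style iteration (hypothetical 2-sensor, 2-anchor network).
# Anchors hold fixed states u; sensors update their states x as a
# weighted combination of neighboring sensor and anchor states:
#     x(t+1) = P x(t) + B u
# When the spectral radius of P is < 1, x(t) converges to
#     x* = (I - P)^{-1} B u.

P = np.array([[0.4, 0.2],
              [0.3, 0.3]])   # sensor-to-sensor weights (assumed values)
B = np.array([[0.4, 0.0],
              [0.0, 0.4]])   # sensor-to-anchor weights (assumed values)
u = np.array([1.0, 3.0])     # fixed anchor states

x = np.zeros(2)              # arbitrary initial sensor states
for _ in range(200):         # run the linear iteration
    x = P @ x + B @ u

# Compare against the predicted limiting state.
x_star = np.linalg.solve(np.eye(2) - P, B @ u)
print(np.allclose(x, x_star))
```

The design problem studied in the paper corresponds to choosing the entries of `P` and `B` (subject to the network's sparsity constraints) so that the limit `x*` equals a desired pre-specified state.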



