
Distributed Riemannian Optimization in Geodesically Non-convex Environments

Published: Dec 4, 2025 (Version 1)
arXiv: 2512.04915

Authors

Xiuheng Wang, Ricardo Borsoi, Cédric Richard, Ali H. Sayed

Categories

eess.SP

Abstract

This paper studies the problem of distributed Riemannian optimization over a network of agents whose cost functions are geodesically smooth but possibly geodesically non-convex. Extending a well-known distributed optimization strategy called diffusion adaptation to Riemannian manifolds, we show that the resulting algorithm, the Riemannian diffusion adaptation, provably exhibits several desirable behaviors when minimizing a sum of geodesically smooth non-convex functions over manifolds of bounded curvature. More specifically, we establish that the algorithm can approximately achieve network agreement in the sense that the Fréchet variance of the iterates across the agents is small. Moreover, the algorithm is guaranteed to converge to a first-order stationary point for general geodesically non-convex cost functions. When the global cost function additionally satisfies the Riemannian Polyak-Łojasiewicz (PL) condition, we also show that it converges linearly under a constant step size up to a steady-state error. Finally, we apply this algorithm to a decentralized robust principal component analysis (PCA) problem formulated on the Grassmann manifold and illustrate its convergence and performance through numerical simulations.
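
To make the structure of such a method concrete, the sketch below shows one adapt-then-combine (ATC) diffusion step on the unit sphere, a manifold where the exponential and logarithm maps have simple closed forms. This is a minimal illustration under assumptions, not the paper's algorithm: the authors' exact Riemannian diffusion adaptation updates, retraction choice, and combination rule may differ, and the tangent-space averaging used here is only one common way to approximate a weighted Fréchet mean over neighbors.

import numpy as np

# Hypothetical sketch of one adapt-then-combine (ATC) diffusion step on the
# unit sphere S^{d-1}. NOT the paper's exact update rules; it only illustrates
# the general "local Riemannian gradient step, then combine with neighbors"
# structure of diffusion adaptation on a manifold.

def sphere_exp(x, v):
    """Exponential map on the sphere: move from x along tangent vector v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_log(x, y):
    """Logarithm map on the sphere: tangent vector at x pointing toward y."""
    p = y - np.dot(x, y) * x          # project y onto the tangent space at x
    norm_p = np.linalg.norm(p)
    if norm_p < 1e-12:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return theta * p / norm_p

def riem_grad(x, euclid_grad):
    """Riemannian gradient on the sphere: tangent projection of the Euclidean gradient."""
    return euclid_grad - np.dot(x, euclid_grad) * x

def atc_diffusion_step(w, grads, A, mu):
    """One ATC step for K agents.

    w     : list of K unit vectors (current iterates, one per agent)
    grads : list of K Euclidean gradients of the local costs at w[k]
    A     : K-by-K row-stochastic matrix of combination weights (A[k, l] > 0
            only if agent l is a neighbor of agent k)
    mu    : step size
    """
    K = len(w)
    # Adapt: local Riemannian gradient step via the exponential map.
    psi = [sphere_exp(w[k], -mu * riem_grad(w[k], grads[k])) for k in range(K)]
    # Combine: approximate the weighted Fréchet mean of the neighbors'
    # intermediate iterates by a single tangent-space averaging step at psi[k].
    w_new = []
    for k in range(K):
        v = sum(A[k, l] * sphere_log(psi[k], psi[l]) for l in range(K))
        w_new.append(sphere_exp(psi[k], v))
    return w_new

In the Euclidean case this recursion reduces to classical diffusion adaptation, where the adapt step is a plain gradient descent update and the combine step is a convex combination of neighbors' iterates; the manifold version replaces subtraction and averaging with exponential and logarithm maps (or, more generally, a retraction and its inverse).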
