
Classic first-order optimization methods on a submanifold of a Euclidean space are based on two ingredients: (i) choosing a search direction at the current iterate and (ii) applying a retraction to produce points along a manifold-valued curve tangent to the search direction. It is now well established that ingredient (ii) can be considerably more costly than ingredient (i), even for well-known manifolds such as the Stiefel manifold, notably in a stochastic gradient context where (i) can be particularly cheap. This has prompted the development of infeasible optimization methods that do not enforce the manifold constraint at the iterates but still exploit the manifold structure of the feasible set. The landing approach, which is the topic of this talk, belongs to this recent trend. It performs an update along a weighted sum of two terms. One term, tangent to the active "layered manifold" (the layer containing the current iterate), decreases the objective function while preserving the constraint function to first order. The other term decreases infeasibility. Under mild assumptions and with a sufficiently small step size, the method provably converges to critical points. Moreover, in the stochastic case, where the update vector is affected by additive zero-mean, bounded-variance noise, the landing algorithm with a suitably diminishing step size is proved to converge in expectation. This talk is based on joint work with Pierre Ablin, Bin Gao and Simon Vary (arXiv:2303.16510, arXiv:2405.01702).
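The abstract describes the update in words only; the following is a minimal sketch of a landing-style step for the orthogonality constraint X^T X = I, using the landing field Λ(X) = skew(∇f(X) X^T) X + λ X(X^T X − I) from the cited papers, with N(X) = (1/4)‖X^T X − I‖_F² as the infeasibility measure. The function names, the eigenvector demo, and the values of the step size eta and weight lam are illustrative choices, not the authors' code.

```python
import numpy as np

def skew(M):
    """Skew-symmetric part of a square matrix."""
    return (M - M.T) / 2

def landing_step(X, egrad, eta=0.1, lam=1.0):
    """One landing update for min f(X) subject to X^T X = I (X is n x p).

    The direction is a weighted sum of two terms:
      psi(X)    = skew(egrad(X) X^T) X  -- decreases f while keeping the
                  infeasibility N(X) = (1/4)||X^T X - I||_F^2 constant
                  to first order;
      grad N(X) = X (X^T X - I)         -- decreases infeasibility.
    No retraction is applied, so iterates may leave the manifold.
    """
    G = egrad(X)                                  # Euclidean gradient of f
    psi = skew(G @ X.T) @ X                       # tangent-to-layer term
    grad_N = X @ (X.T @ X - np.eye(X.shape[1]))   # infeasibility term
    return X - eta * (psi + lam * grad_N)

# Illustrative use: dominant p-dimensional eigenspace of a symmetric C,
# via f(X) = -trace(X^T C X) / 2, whose Euclidean gradient is -C X.
rng = np.random.default_rng(0)
n, p = 50, 5
C = rng.standard_normal((n, n))
C = (C + C.T) / 2
C /= np.linalg.norm(C, 2)                 # normalize spectrum so eta is safe
X = rng.standard_normal((n, p)) / np.sqrt(n)   # roughly orthonormal columns
for _ in range(1000):
    X = landing_step(X, lambda Y: -C @ Y, eta=0.1, lam=1.0)
print("infeasibility ||X^T X - I||:", np.linalg.norm(X.T @ X - np.eye(p)))
```

The tangency property claimed in the abstract corresponds here to ⟨psi(X), grad N(X)⟩ = 0, the trace of a symmetric matrix times a skew-symmetric one, so the first term leaves N unchanged to first order while the second drives the iterates back toward the manifold.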

Further information

Time: Oct 30th 2025, 15:00 to 16:00
Venue: Centre for Mathematical Sciences, MR14
Speaker: Pierre-Antoine Absil (University of Louvain)
Series: Applied and Computational Analysis