A nested primal-dual iterated Tikhonov method for regularized convex optimization / Aleotti, Stefano; Bonettini, Silvia; Donatelli, Marco; Prato, Marco; Rebegoldi, Simone. - In: COMPUTATIONAL OPTIMIZATION AND APPLICATIONS. - ISSN 0926-6003. - (2024), pp. 1-40. [10.1007/s10589-024-00613-4]
A nested primal-dual iterated Tikhonov method for regularized convex optimization
Silvia Bonettini; Marco Prato; Simone Rebegoldi
2024
Abstract
Proximal-gradient methods are widely used tools in imaging that can be accelerated by adopting variable metrics and/or extrapolation steps. A crucial issue is the inexact computation of the proximal operator, often carried out by a nested primal-dual solver, which becomes the main computational bottleneck whenever increasing accuracy is required. In this paper, we propose a nested primal-dual method for the efficient solution of regularized convex optimization problems. Our method approximates a variable metric proximal-gradient step with extrapolation by performing a prescribed number of primal-dual iterations, while adjusting the steplength parameter through an appropriate backtracking procedure. Fixing the number of inner iterations in advance keeps the computational cost per outer iteration low. We prove convergence of the sequence of iterates to a solution of the problem, under a relaxed monotonicity assumption on the scaling matrices and a shrinking condition on the extrapolation parameters. Furthermore, we investigate the numerical performance of our method by equipping it with a scaling matrix inspired by the Iterated Tikhonov method. The numerical results show that the combination of such scaling matrices with Nesterov-like extrapolation parameters yields an effective acceleration towards the solution of the problem.
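In symbols, the step that the method approximates can be sketched as follows; the notation below is our reconstruction from the abstract, not the paper's exact formulation. Given extrapolation parameters \beta_k, steplengths \alpha_k, and symmetric positive definite scaling matrices D_k,

    y^{(k)}   = x^{(k)} + \beta_k ( x^{(k)} - x^{(k-1)} ),
    x^{(k+1)} \approx \operatorname{prox}_{\alpha_k g}^{D_k}\big( y^{(k)} - \alpha_k D_k^{-1} \nabla f(y^{(k)}) \big),

where \operatorname{prox}_{\alpha g}^{D}(z) = \arg\min_x \, g(x) + \frac{1}{2\alpha} \| x - z \|_D^2 and \| u \|_D^2 = u^\top D u. For a least-squares data term f(x) = \frac{1}{2}\|Ax - b\|^2, a scaling consistent with the Iterated Tikhonov interpretation mentioned in the abstract would be D_k^{-1} = (A^\top A + \rho_k I)^{-1} for some \rho_k > 0, so that the forward step resembles one iterated Tikhonov refinement; this specific choice is our illustration of the idea, not a statement of the paper's definition.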
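To make the nested structure concrete, the following is a minimal Python sketch of one possible instantiation, assuming f(x) = (1/2)||Ax - b||^2, g(x) = lam*||Wx||_1, the identity metric D_k = I, Chambolle-Pock-type inner iterations, and a simple Armijo-type backtracking on the steplength. All names and parameter choices here are illustrative and do not reproduce the authors' exact algorithm.

import numpy as np

def nested_pd_prox_grad(A, b, W, lam, x0, n_outer=200, n_inner=10, alpha=1.0):
    # Proximal-gradient with Nesterov-like extrapolation; the proximal
    # operator of g(x) = lam*||W x||_1 is approximated by a prescribed
    # number of primal-dual (Chambolle-Pock-type) inner iterations.
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)           # smooth data term
    grad_f = lambda x: A.T @ (A @ x - b)                   # its gradient
    norm_W = np.linalg.norm(W, 2)                          # ||W||_2 for inner stepsizes
    x, x_old, t = x0.copy(), x0.copy(), 1.0
    v = np.zeros(W.shape[0])                               # dual variable (warm-started)
    for _ in range(n_outer):
        # Nesterov-like extrapolation step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_new) * (x - x_old)
        t = t_new
        while True:                                        # backtracking on alpha
            z = y - alpha * grad_f(y)                      # forward (gradient) step
            # inner loop: approximate prox_{alpha*g}(z), i.e. the solution of
            #   min_u  lam*||W u||_1 + ||u - z||^2 / (2*alpha)
            tau = sigma = 1.0 / norm_W                     # sigma*tau*||W||^2 <= 1
            u, u_bar, vk = x.copy(), x.copy(), v.copy()
            for _ in range(n_inner):
                vk = np.clip(vk + sigma * (W @ u_bar), -lam, lam)  # dual projection
                u_new = (u - tau * (W.T @ vk) + (tau / alpha) * z) / (1.0 + tau / alpha)
                u_bar, u = 2.0 * u_new - u, u_new
            d = u - y                                      # sufficient-decrease test
            if f(u) <= f(y) + grad_f(y) @ d + d @ d / (2.0 * alpha):
                break
            alpha *= 0.5
        x_old, x, v = x, u, vk
    return x

# Tiny usage example (also illustrative): 1-D total-variation-like denoising.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n)
b = np.sign(np.sin(np.linspace(0.0, 6.0, n))) + 0.1 * rng.standard_normal(n)
W = (np.eye(n, k=1) - np.eye(n))[:-1]                      # forward-difference operator
x_hat = nested_pd_prox_grad(A, b, W, lam=0.5, x0=np.zeros(n))

Warm-starting the dual variable across outer iterations and fixing n_inner in advance are the two ingredients that keep the per-iteration cost predictable; a variable metric D_k would enter both the forward step and the quadratic term of the inner problem.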
The metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while the files of the publications are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, please contact Supporto Iris.