
Metalearning-Based Alternating Minimization Algorithm for Nonconvex Optimization / Xia, J.; Li, S.; Huang, J.; Yang, Z.; Jaimoukha, I. M.; Gunduz, D.. - In: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS. - ISSN 2162-237X. - 34:9(2023), pp. 5366-5380. [10.1109/TNNLS.2022.3165627]

Metalearning-Based Alternating Minimization Algorithm for Nonconvex Optimization

Gunduz D.
2023

Abstract

In this article, we propose a novel solution for nonconvex problems of multiple variables, especially those typically solved by an alternating minimization (AM) strategy, which splits the original optimization problem into a set of subproblems, one per variable, and then iteratively optimizes each subproblem using a fixed updating rule. However, due to the intrinsic nonconvexity of the original optimization problem, the optimization can be trapped in a spurious local minimum even when each subproblem is optimally solved at each iteration. Meanwhile, learning-based approaches, such as deep unfolding algorithms, have gained popularity for nonconvex optimization; however, they are limited by the availability of labeled data and by insufficient explainability. To tackle these issues, we propose a meta-learning-based alternating minimization (MLAM) method that minimizes a part of the global losses over iterations instead of carrying out minimization on each subproblem separately, and that learns an adaptive updating strategy to replace the handcrafted one, resulting in superior performance. The proposed MLAM maintains the original algorithmic principle, providing a degree of interpretability. We evaluate the proposed method on two representative problems: a bilinear inverse problem (matrix completion) and a nonlinear problem (Gaussian mixture models). The experimental results validate that the proposed approach outperforms AM-based methods.
Year: 2023
First published online: 19-apr-2022
Volume: 34
Issue: 9
Pages: 5366-5380
Xia, J.; Li, S.; Huang, J.; Yang, Z.; Jaimoukha, I. M.; Gunduz, D.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1280032
Citations
  • PMC: 0
  • Scopus: 17
  • Web of Science: 11