Multi-Category Mesh Reconstruction From Image Collections / Simoni, Alessandro; Pini, Stefano; Vezzani, Roberto; Cucchiara, Rita. - (2021), pp. 1321-1330. (Paper presented at the 9th International Conference on 3D Vision, 3DV 2021, held online, 1-3 December 2021) [10.1109/3DV53792.2021.00139].
Multi-Category Mesh Reconstruction From Image Collections
Alessandro Simoni; Stefano Pini; Roberto Vezzani; Rita Cucchiara
2021
Abstract
Recently, learning frameworks have shown the capability of inferring the accurate shape, pose, and texture of an object from a single RGB image. However, current methods are trained on image collections of a single category in order to exploit category-specific priors, and they often rely on category-specific 3D templates. In this paper, we present an alternative approach that infers the textured mesh of objects by combining a series of deformable 3D models with a set of instance-specific deformation, pose, and texture predictions. Unlike previous works, our method is trained with images of multiple object categories using only foreground masks and rough camera poses as supervision. Without specific 3D templates, the framework learns category-level models that are deformed to recover the 3D shape of the depicted object. The instance-specific deformations are predicted independently for each vertex of the learned 3D mesh, enabling dynamic subdivision of the mesh during the training process. Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner. Predicted shapes are smooth and benefit from multiple subdivision steps during training, obtaining comparable or state-of-the-art results on two public datasets. Models and code are publicly released.
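The mechanism described in the abstract, a learned category-level mesh refined by per-vertex offsets and subdivided during training, can be illustrated with a minimal, hypothetical sketch. The module names, feature sizes, and the use of PyTorch3D are assumptions for illustration only and do not reproduce the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): predict an independent 3D offset
# for every vertex of a category-level mesh, then subdivide it during training.
import torch
import torch.nn as nn
from pytorch3d.structures import Meshes
from pytorch3d.ops import SubdivideMeshes
from pytorch3d.utils import ico_sphere


class VertexDeformer(nn.Module):
    """Predicts a per-vertex 3D offset from the vertex position concatenated
    with a global image embedding (assumed design, for illustration)."""

    def __init__(self, embed_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, mesh: Meshes, image_embedding: torch.Tensor) -> Meshes:
        verts = mesh.verts_packed()                       # (V, 3) vertex positions
        feats = image_embedding.expand(verts.shape[0], -1)
        offsets = self.mlp(torch.cat([verts, feats], dim=-1))
        return mesh.offset_verts(offsets)                 # deformed instance mesh


if __name__ == "__main__":
    # A stand-in category-level mesh (an icosphere here; learned in the paper).
    template = ico_sphere(level=2)
    deformer = VertexDeformer()
    embedding = torch.randn(1, 256)                       # dummy image embedding
    instance_mesh = deformer(template, embedding)
    # Dynamic subdivision step: refine the template mesh partway through training.
    finer_template = SubdivideMeshes()(template)
    print(instance_mesh.verts_packed().shape, finer_template.verts_packed().shape)
```

Because the offsets are predicted per vertex rather than as a fixed-size global code, the same deformation module can keep operating on the mesh after each subdivision step.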
File | Description | Type | Size | Format | Access
---|---|---|---|---|---
3DV 2021.pdf | Article | Author's original version submitted for publication | 6.76 MB | Adobe PDF | Open access
3DV 2021 (suppl).pdf | Supplementary material | Author's original version submitted for publication | 14.5 MB | Adobe PDF | Open access
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.