Multi-Category Mesh Reconstruction From Image Collections

Alessandro Simoni; Stefano Pini; Roberto Vezzani; Rita Cucchiara
2021

Abstract

Recently, learning frameworks have shown the capability of inferring the accurate shape, pose, and texture of an object from a single RGB image. However, current methods are trained on image collections of a single category in order to exploit specific priors, and they often make use of category-specific 3D templates. In this paper, we present an alternative approach that infers the textured mesh of objects combining a series of deformable 3D models and a set of instance-specific deformation, pose, and texture. Differently from previous works, our method is trained with images of multiple object categories using only foreground masks and rough camera poses as supervision. Without specific 3D templates, the framework learns category-level models which are deformed to recover the 3D shape of the depicted object. The instance-specific deformations are predicted independently for each vertex of the learned 3D mesh, enabling the dynamic subdivision of the mesh during the training process. Experiments show that the proposed framework can distinguish between different object categories and learn category-specific shape priors in an unsupervised manner. Predicted shapes are smooth and can leverage from multiple steps of subdivision during the training process, obtaining comparable or state-of-the-art results on two public datasets. Models and code are publicly released.
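The abstract describes two mesh operations: instance-specific deformations predicted independently for each vertex of a learned template, and dynamic subdivision of that mesh during training. The following is a minimal illustrative sketch (not the authors' code) of both ideas, assuming simple 1-to-4 midpoint subdivision, which grows the set of vertices that later per-vertex deformation steps can move independently.

```python
# Illustrative sketch only: per-vertex deformation of a template mesh and
# midpoint (1-to-4) triangle subdivision. Function names are hypothetical,
# not taken from the paper's released code.
import numpy as np

def deform(vertices, offsets):
    """Apply an instance-specific 3D offset to every vertex independently."""
    return vertices + offsets

def midpoint_subdivide(vertices, faces):
    """Split each triangle into four by inserting one vertex per edge midpoint."""
    verts = [tuple(v) for v in vertices]
    edge_mid = {}  # sorted edge (i, j) -> index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_mid:
            edge_mid[key] = len(verts)
            mid = (np.asarray(verts[i]) + np.asarray(verts[j])) / 2.0
            verts.append(tuple(mid))
        return edge_mid[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_faces)

# One triangle: subdivision yields 6 vertices and 4 faces.
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
f = np.array([[0, 1, 2]])
v2, f2 = midpoint_subdivide(v, f)
print(v2.shape, f2.shape)  # (6, 3) (4, 3)
```

Because shared edges are cached in `edge_mid`, adjacent triangles reuse the same midpoint vertex, so the subdivided mesh stays watertight and each new vertex can receive its own deformation offset on the next pass.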
9th International Conference on 3D Vision
Online
1-3 December 2021
Simoni, Alessandro; Pini, Stefano; Vezzani, Roberto; Cucchiara, Rita
Multi-Category Mesh Reconstruction From Image Collections / Simoni, Alessandro; Pini, Stefano; Vezzani, Roberto; Cucchiara, Rita. - (2021). (Paper presented at the 9th International Conference on 3D Vision, held online, 1-3 December 2021.)
Files in this record:

File: 3DV 2021.pdf
Description: Article
Type: Author pre-print (pre-review draft)
Access: Open access
Size: 6.76 MB (Adobe PDF)

File: 3DV 2021 (suppl).pdf
Description: Supplementary material
Type: Author pre-print (pre-review draft)
Access: Open access
Size: 14.5 MB (Adobe PDF)

Creative Commons license
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1254336
Citations
  • PubMed Central: ND
  • Scopus: 0
  • Web of Science (ISI): 0