Scaling CBCT Maxillofacial Segmentation to 77 Classes with U-Mamba2 / Lumetti, Luca; Tan, Zhi Qin; Borghi, Lorenzo; Addison, Owen; Li, Yupeng; Rosati, Gabriele; Van Nistelrooij, Niels; Vinayahalingam, Shankeeth; Grana, Costantino; Bolelli, Federico. - (2026). (29th International Conference on Medical Image Computing and Computer Assisted Intervention, Strasbourg, France, Sep 27 - Oct 1).
Scaling CBCT Maxillofacial Segmentation to 77 Classes with U-Mamba2
Lumetti, Luca; Tan, Zhi Qin; Borghi, Lorenzo; Rosati, Gabriele; Grana, Costantino; Bolelli, Federico
2026
Abstract
Accurate delineation of maxillofacial anatomy in Cone-Beam Computed Tomography (CBCT) is essential for dental planning but remains difficult due to limited public multi-structure datasets and the runtime cost of 3D deep learning models. We present ToothFairy3, a large-scale CBCT benchmark extending ToothFairy2 with 102 additional fully annotated scans and an expanded taxonomy covering 77 classes, including 32 tooth-specific pulp cavities and small neurovascular structures. ToothFairy3 comprises 582 volumes (over 40k annotated objects), with 532 released with voxel-level labels and 50 held out for leakage-free, server-side evaluation. We also introduce U-Mamba2, an efficient U-Net-style architecture that inserts a Mamba2 state-space block at the bottleneck to capture global context with favorable scaling. Our proposed domain-informed training further improves the learning of maxillofacial anatomies. Across CNN, Transformer, and Mamba baselines, U-Mamba2 achieves competitive Dice/HD95 scores with lower latency and, compared with training on state-of-the-art public CBCT datasets, ToothFairy3-trained models generalize best to the hidden test set, particularly for maxillary structures.

The metadata in IRIS UNIMORE is released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, please contact Iris Support.
