Two-step latent diffusion modelling for morphology-guided synthesis of glioma intraoperative ultrasound images / Lasala, A.; Fiorentino, M. C.; Bandini, A.; Moccia, S.; Giannarou, S.. - In: BIOMEDICAL SIGNAL PROCESSING AND CONTROL. - ISSN 1746-8094. - 120:(2026), pp. 1-13. [10.1016/j.bspc.2026.110037]
Two-step latent diffusion modelling for morphology-guided synthesis of glioma intraoperative ultrasound images
Bandini A.;
2026
Abstract
Intraoperative ultrasound (iUS) is increasingly used in neurosurgery to monitor tumour margins during resection. The adoption of iUS is still limited by low image quality, noise, and heterogeneous echogenicity, which make surgeons’ interpretation of surgical margins challenging. While deep learning can aid automatic margin delineation, the lack of annotated datasets limits the development of robust methods. To address this challenge, we propose a two-step generative framework based on latent diffusion models that consists of (i) an unconditional tumour-mask generator that learns geometric features of real tumours, and (ii) a conditional iUS image generator that synthesizes realistic iUS images by using the generated tumour masks as a prior. Morphological fidelity is assessed through tailored quantitative and qualitative metrics. The performance of automatic tumour margin segmentation algorithms is evaluated through data augmentation experiments to determine whether the inclusion of synthetic data can improve segmentation performance. Compared to state-of-the-art conditional generative models, including diffusion-based approaches (ControlNet) and generative adversarial networks (Pix2Pix), the proposed framework achieves superior qualitative and quantitative performance in representing tumoural and non-tumoural tissue. Performance evaluated using a 5-fold cross-validation protocol yields statistically significant improvements in morphological fidelity (Dice Similarity Coefficient: 0.851; Hausdorff Distance: 16.21). The analysis shows that introducing synthetic data significantly improves boundary delineation performance using nnU-Net, reducing the average Hausdorff Distance from 33.97 to 30.72 on the test set. These results indicate that the proposed framework helps mitigate the scarcity of annotated iUS data by providing realistic samples to support training in neurosurgical image segmentation.
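The two morphological-fidelity metrics reported in the abstract, the Dice Similarity Coefficient and the Hausdorff Distance between binary segmentation masks, can be sketched as follows. This is a minimal illustration using NumPy and SciPy on toy masks, not the paper's evaluation code; the function names and the toy data are the author's of this note, and the Hausdorff Distance is computed here in pixel units over foreground pixel sets.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())


def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff Distance (in pixels) between the
    foreground pixel sets of two binary masks."""
    pa = np.argwhere(a)  # (row, col) coordinates of foreground pixels
    pb = np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0],
               directed_hausdorff(pb, pa)[0])


# Toy example: a 32x32 square mask vs. a copy shifted down by 4 pixels.
m1 = np.zeros((64, 64), dtype=bool)
m1[16:48, 16:48] = True
m2 = np.zeros((64, 64), dtype=bool)
m2[20:52, 16:48] = True

print(round(dice_coefficient(m1, m2), 3))  # → 0.875
print(hausdorff_distance(m1, m2))          # → 4.0
```

A lower Hausdorff Distance indicates that the worst-case boundary deviation between the predicted and reference masks is smaller, which is why the reduction from 33.97 to 30.72 reported above corresponds to tighter margin delineation.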
| File | Type | Size | Format |
|---|---|---|---|
| 2026_Lasala_BSPC.pdf (restricted access) | VOR - Version published by the publisher | 10.48 MB | Adobe PDF |