TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV Systems in Emergency Response Scenarios

Daniel Rossi (Methodology); Guido Borghi (Supervision); Roberto Vezzani (Supervision)

2025

Abstract

Designing efficient neural networks for embedded devices is a critical challenge, particularly in applications requiring real-time performance, such as aerial imaging with drones and UAVs for emergency response. In this work, we introduce TakuNet, a novel lightweight architecture that employs techniques such as depth-wise convolutions and an early downsampling stem to reduce computational complexity while maintaining high accuracy. It leverages dense connections for fast convergence during training and uses 16-bit floating-point precision for optimization on embedded hardware accelerators. Experimental evaluation on two public datasets shows that TakuNet achieves near-state-of-the-art accuracy in classifying aerial images of emergency situations, despite its minimal parameter count. Real-world tests on embedded devices, namely the Jetson Orin Nano and Raspberry Pi, confirm TakuNet's efficiency: it achieves more than 650 fps on the 15 W Jetson board, making it suitable for real-time AI processing on resource-constrained platforms and advancing the applicability of drones in emergency scenarios. The code and implementation details are publicly released.
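The abstract names a handful of efficiency techniques: depth-wise convolutions, an early downsampling stem, dense connections, and 16-bit floating-point inference. The PyTorch sketch below illustrates how such pieces typically fit together; every class name, channel width, and the overall stage layout here are illustrative assumptions, not the published TakuNet architecture (consult the authors' released code for that).

```python
# Illustrative sketch only: layer names, channel widths, and the stage
# layout below are assumptions for exposition, NOT the published TakuNet.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A depth-wise 3x3 conv (groups == in_channels) followed by a 1x1
    point-wise conv: far fewer parameters and FLOPs than a dense 3x3 conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class DenseStage(nn.Module):
    """DenseNet-style stage: each block's output is concatenated with its
    input, which improves gradient flow and speeds up convergence."""
    def __init__(self, in_ch: int, growth: int, num_blocks: int):
        super().__init__()
        self.blocks = nn.ModuleList()
        ch = in_ch
        for _ in range(num_blocks):
            self.blocks.append(DepthwiseSeparableConv(ch, growth))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        for block in self.blocks:
            x = torch.cat([x, block(x)], dim=1)
        return x

class TinyAerialClassifier(nn.Module):
    """Early downsampling stem + one dense stage + a linear head."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Early downsampling stem: aggressive striding shrinks the feature
        # map (224 -> 56 here) before the deeper, more expensive stage.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(16), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        self.stage = DenseStage(16, growth=16, num_blocks=3)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(self.stage.out_channels, num_classes),
        )

    def forward(self, x):
        return self.head(self.stage(self.stem(x)))

# FP16 inference, as commonly used on embedded accelerators such as the
# Jetson Orin Nano (the paper's exact export/optimization path may differ).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyAerialClassifier().eval().to(device)
dummy = torch.randn(1, 3, 224, 224, device=device)
if device == "cuda":               # half precision is safest on the GPU
    model, dummy = model.half(), dummy.half()
with torch.no_grad():
    logits = model(dummy)          # shape: (1, num_classes)
```

The parameter savings come from the factorization itself: a dense 3x3 convolution costs 9·C_in·C_out weights, while the depth-wise/point-wise pair costs 9·C_in + C_in·C_out, and the early stem further cuts FLOPs by shrinking the feature map before the expensive stages.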
Year: 2025
Publication date: 28 February 2025
Conference: 2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2025
Venue: Tucson (Arizona), USA
Conference dates: 28 February - 4 March 2025
Pages: 339-348
Authors: Rossi, Daniel; Borghi, Guido; Vezzani, Roberto
Citation: TakuNet: an Energy-Efficient CNN for Real-Time Inference on Embedded UAV Systems in Emergency Response Scenarios / Rossi, Daniel; Borghi, Guido; Vezzani, Roberto. - (2025), pp. 339-348. (2025 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops, WACVW 2025, Tucson (Arizona), USA, 28 February - 4 March 2025) [DOI: 10.1109/WACVW65960.2025.00044].
Files in this record:

File: TakuNet_An_Energy-Efficient_CNN_for_Real-Time_Inference_on_Embedded_UAV_Systems_in_Emergency_Response_Scenarios.pdf
Access: Restricted
Type: VOR - Version published by the publisher
License: [IR] closed
Size: 1.43 MB
Format: Adobe PDF
Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1374428
Citations
  • PMC: N/A
  • Scopus: 0
  • Web of Science: 0