
CAMeL: A Self-Adaptive Framework for Enriching Context-Aware Middlewares with Machine Learning Capabilities / Bicocchi, N.; Fontana, D.; Zambonelli, F.. - In: MOBILE INFORMATION SYSTEMS. - ISSN 1574-017X. - 2019:(2019), pp. 1-15. [10.1155/2019/1209850]

CAMeL: A Self-Adaptive Framework for Enriching Context-Aware Middlewares with Machine Learning Capabilities

Bicocchi, N.; Fontana, D.; Zambonelli, F. (2019)

Abstract

Context-aware middlewares support applications with context management. Current middlewares support both hardware and software sensors providing data in structured forms (e.g., temperature, wind, and smoke sensors). Recent advances in machine learning, however, have paved the way for acquiring context from information-rich, loosely structured data such as audio or video signals. This paper describes a framework (CAMeL) that enriches context-aware middlewares with machine learning capabilities. The framework focuses on acquiring contextual information from sensors providing loosely structured data, without requiring developers to implement dedicated application code or rely on external libraries. Moreover, since the general goal of context-aware middlewares is to make applications more dynamic and adaptive, the proposed framework itself can be programmed to dynamically select sensors and machine learning algorithms on a contextual basis. We show with experiments and case studies how the CAMeL framework can (i) promote code reuse and reduce the complexity of context-aware applications by natively supporting machine learning capabilities and (ii) self-adapt using the acquired context, improving classification accuracy while reducing energy consumption on mobile platforms.
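The abstract states that the framework can be programmed to dynamically select sensors and machine learning algorithms on a contextual basis, trading classification accuracy against energy consumption. The following minimal Java sketch illustrates that general idea only; all names in it (ContextSource, ContextClassifier, Pipeline, select, energyCost, expectedAccuracy) are hypothetical and are not taken from CAMeL's actual API as described in the paper.

// Hypothetical sketch, not CAMeL's API: choose the cheapest sensor/classifier
// pair that still meets an accuracy target, relaxing the target on low battery.
import java.util.List;

public class AdaptiveSelectionSketch {

    interface ContextSource {
        String name();
        double energyCost();     // relative per-sample energy cost (illustrative)
        double[] sample();       // loosely structured data, e.g., an audio frame
    }

    interface ContextClassifier {
        double expectedAccuracy();
        String classify(double[] data);
    }

    record Pipeline(ContextSource source, ContextClassifier classifier) {}

    /** Picks the cheapest pipeline that still meets the accuracy target. */
    static Pipeline select(List<Pipeline> pipelines, double batteryLevel, double minAccuracy) {
        Pipeline best = null;
        for (Pipeline p : pipelines) {
            // On low battery, relax the accuracy target to save energy.
            double target = batteryLevel < 0.2 ? minAccuracy - 0.1 : minAccuracy;
            if (p.classifier().expectedAccuracy() >= target
                    && (best == null || p.source().energyCost() < best.source().energyCost())) {
                best = p;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        ContextSource accelerometer = new ContextSource() {
            public String name() { return "accelerometer"; }
            public double energyCost() { return 1.0; }
            public double[] sample() { return new double[]{0.1, 0.2, 9.8}; }
        };
        ContextSource microphone = new ContextSource() {
            public String name() { return "microphone"; }
            public double energyCost() { return 5.0; }
            public double[] sample() { return new double[512]; }
        };
        ContextClassifier cheapModel = new ContextClassifier() {
            public double expectedAccuracy() { return 0.80; }
            public String classify(double[] d) { return "walking"; }
        };
        ContextClassifier accurateModel = new ContextClassifier() {
            public double expectedAccuracy() { return 0.92; }
            public String classify(double[] d) { return "walking"; }
        };

        List<Pipeline> pipelines = List.of(
                new Pipeline(accelerometer, cheapModel),
                new Pipeline(microphone, accurateModel));

        Pipeline chosen = select(pipelines, 0.15, 0.90); // 15% battery left
        System.out.println("Selected source: " + chosen.source().name()
                + " -> " + chosen.classifier().classify(chosen.source().sample()));
    }
}

With 15% battery remaining, the sketch picks the low-cost accelerometer pipeline rather than the more accurate microphone one, mirroring the accuracy-versus-energy adaptation the abstract describes at a high level.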
Files in this record:
  • File: CAMeL-A-SelfAdaptive-Framework-for-Enriching-ContextAware-Middlewares-with-Machine-Learning-Capabilities2019Mobile-Information-SystemsOpen-Access.pdf
  • Access: Open access
  • Type: Author's version, revised and accepted for publication
  • Size: 2.68 MB
  • Format: Adobe PDF

Creative Commons licence
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal licence, while publication files are released under the Attribution 4.0 International licence (CC BY 4.0), unless otherwise stated.
In case of copyright infringement, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1175955
Citations
  • PMC: n/a
  • Scopus: 5
  • Web of Science: 3