On the feasibility of adversarial machine learning in malware and network intrusion detection / Venturi, Andrea; Zanasi, Claudio. - (2021). (Paper presented at the 2021 IEEE 20th International Symposium on Network Computing and Applications (NCA), held in Boston, 23 November 2021) [10.1109/NCA53618.2021.9685709].
On the feasibility of adversarial machine learning in malware and network intrusion detection
Andrea Venturi; Claudio Zanasi
2021
Abstract
Nowadays, Machine Learning (ML) solutions are widely adopted in modern malware and network intrusion detection systems. While these algorithms offer great performance, several studies demonstrate their vulnerability to adversarial attacks, which slightly modify input samples to compromise the correct behavior of the detector. Although this issue is extremely relevant in security-related contexts, defenses are still immature. On the positive side, cybersecurity poses additional challenges to the practicability of these attacks with respect to other domains. Previous studies focus exclusively on the effectiveness of their proposals, but do not discuss their actual feasibility. Based on this insight, in this paper we provide an overview of adversarial attacks and countermeasures for ML-based malware and network intrusion detection systems to assess their applicability in real-world scenarios. In particular, we identify the constraints that need to be considered in the cybersecurity domain and discuss the limitations of meaningful examples from previous proposals. Our work can guide practitioners in devising novel hardening solutions against more realistic threat models.
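To illustrate the kind of attack the abstract refers to, the sketch below shows a minimal FGSM-style evasion against a toy linear detector. This is not taken from the paper; the detector, its weights, and the `fgsm_evasion` helper are hypothetical examples, and real attacks on malware or network-flow features would additionally have to respect the feasibility constraints the paper discusses (the perturbed sample must remain a valid, functional input).

```python
# Illustrative sketch (not from the paper): an FGSM-style adversarial
# perturbation against a toy logistic-regression "detector".
# All names, weights, and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Toy detector: weights w and bias b would normally come from training.
w = rng.normal(size=8)   # per-feature weights of the detector
b = 0.1

def predict_proba(x):
    """Probability that sample x is malicious, according to the toy detector."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_evasion(x, eps=0.05):
    """Shift x by eps in the direction that lowers the malicious score.
    For a linear model the gradient of the logit w.r.t. x is simply w."""
    return x - eps * np.sign(w)

x_malicious = rng.normal(loc=0.5, size=8)   # a sample flagged by the detector
x_adv = fgsm_evasion(x_malicious)

print(f"original score:    {predict_proba(x_malicious):.3f}")
print(f"adversarial score: {predict_proba(x_adv):.3f}")   # lower => evasion
```

In image domains such a perturbation is essentially unconstrained, whereas in malware and network intrusion detection the modified feature vector must still correspond to a working executable or a valid traffic flow, which is exactly the feasibility gap the paper examines.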
| File | Type | Size | Format | Access |
|---|---|---|---|---|
| On_the_feasibility_of_adversarial_machine_learning_in_malware_and_network_intrusion_detection.pdf | VOR - Version published by the editor | 315.08 kB | Adobe PDF | Restricted access (copy available on request) |
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact Supporto Iris.