
Evading botnet detectors based on flows and random forest with adversarial samples / Apruzzese, G.; Colajanni, M. - (2018), pp. 1-8. (Paper presented at the 17th IEEE International Symposium on Network Computing and Applications, NCA 2018, held in the USA in 2018) [10.1109/NCA.2018.8548327].

Evading botnet detectors based on flows and random forest with adversarial samples

Apruzzese G.; Colajanni M.
2018

Abstract

Machine learning is increasingly adopted for a wide array of applications, due to its promising results and autonomous capabilities. However, recent research efforts have shown that, especially within the image processing field, these novel techniques are susceptible to adversarial perturbations. In this paper, we present an analysis that highlights and evaluates experimentally the fragility of network intrusion detection systems based on machine learning algorithms against adversarial attacks. In particular, our study involves a random forest classifier that utilizes network flows to distinguish between botnet and benign samples. Our results, derived from experiments performed on a public real dataset of labelled network flows, show that attackers can easily evade such defensive mechanisms by applying slight and targeted modifications to the network activity generated by their controlled bots. These findings pave the way for future techniques that aim to strengthen the performance of machine learning-based network intrusion detection systems.
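The evasion idea described in the abstract can be illustrated with a toy sketch (not the authors' code: the synthetic flow features, value ranges, and perturbation amounts below are assumptions for illustration, not the paper's actual dataset or attack parameters). A random forest is trained on netflow-style features to separate benign from botnet traffic; a bot then pads its communications so that its flows' duration, packet count, and byte count drift into the benign region:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic netflow features: [duration (s), packets, bytes].
# Assumed profile: benign flows are long and heavy, botnet C&C flows
# are short and light. These ranges are illustrative only.
benign = np.column_stack([
    rng.uniform(10, 60, 500),      # duration
    rng.uniform(50, 500, 500),     # packets
    rng.uniform(5e4, 5e5, 500),    # bytes
])
botnet = np.column_stack([
    rng.uniform(0.1, 2, 500),
    rng.uniform(1, 10, 500),
    rng.uniform(100, 2000, 500),
])
X = np.vstack([benign, botnet])
y = np.array([0] * 500 + [1] * 500)   # 0 = benign, 1 = botnet

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A typical bot flow that the detector flags as malicious.
flow = np.array([[1.0, 5, 800]])
print(clf.predict(flow)[0])           # expected: 1 (botnet)

# The attacker pads the flow: a longer connection with a few junk
# packets and extra bytes moves every feature into the benign range.
adversarial = flow + np.array([[15.0, 80, 60000]])
print(clf.predict(adversarial)[0])    # expected: 0 (benign)
```

Because the classifier only sees aggregate flow statistics, small targeted changes to the bot's traffic shape are enough to cross the decision boundary, which is the fragility the paper evaluates experimentally on a real labelled dataset.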
Year: 2018
Conference: 17th IEEE International Symposium on Network Computing and Applications, NCA 2018
Location: USA
Pages: 1-8
Authors: Apruzzese, G.; Colajanni, M.
Files for this product:
No files are associated with this product.

Creative Commons licence
The metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal licence, while the publication files are released under an Attribution 4.0 International (CC BY 4.0) licence, unless otherwise indicated.
In case of copyright infringement, contact Supporto Iris.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1200676
Citations
  • PMC: ND
  • Scopus: 38
  • Web of Science: 6