
Defending Network Intrusion Detection Systems Based on Graph Neural Networks Against Structural Adversarial Attacks / Galli, D.; Venturi, A.; Stabili, D.; Andreolini, M.; Marchetti, M. - (2025), pp. 219-228. (23rd IEEE International Symposium on Network Computing and Applications, NCA 2025, 2025) [10.1109/NCA67271.2025.00043].

Defending Network Intrusion Detection Systems Based on Graph Neural Networks Against Structural Adversarial Attacks

Galli, D.; Venturi, A.; Stabili, D.; Andreolini, M.; Marchetti, M.
2025

Abstract

Graph Neural Networks (GNNs) represent a promising solution for Machine Learning (ML) based Network Intrusion Detection Systems (NIDS), thanks to their ability to leverage both network flow features and topological patterns. While GNN classifiers demonstrate superior robustness against feature-based adversarial attacks compared to other ML detectors, they remain vulnerable to structural adversarial attacks, where an attacker perturbs the underlying network graph topology by injecting edges or inserting nodes. Such attacks pose a realistic and severe threat, undermining the reliability of GNN-based NIDS in practical deployments. While countermeasures have been proposed in the literature, they often rely on assumptions that are unrealistic in real-world cybersecurity scenarios. In this paper, we propose a defense framework based on adversarial training to strengthen GNN-based NIDS against structural attacks. We generate adversarial samples by strategically replacing the source and destination nodes in benign network flows, thereby efficiently mimicking edge injection attacks. We evaluate our approach on two widely used datasets (CTU-13 and TON-IoT) using E-GraphSAGE as the base GNN classifier. Experimental results show that our approach produces hardened detectors with superior detection performance on clean graphs and enhanced robustness against structural adversarial attacks.
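The abstract describes generating adversarial training samples by replacing the source and destination nodes of benign network flows, so that the rewired flows mimic edge-injection attacks on the flow graph while the flow features stay untouched. A minimal illustrative sketch of that idea follows; the function name, the flow dictionary layout, and the `attacker_nodes` parameter are assumptions for illustration, not the paper's actual implementation.

```python
import random

def inject_adversarial_edges(flows, attacker_nodes, fraction=0.2, seed=0):
    """Sketch of adversarial-sample generation for adversarial training:
    rewire a fraction of benign flows so their endpoints become
    attacker-controlled nodes, mimicking an edge-injection attack.
    Only the graph topology is perturbed; flow features are preserved.
    """
    rng = random.Random(seed)
    benign = [i for i, f in enumerate(flows) if f["label"] == "benign"]
    chosen = rng.sample(benign, max(1, int(len(benign) * fraction)))
    adversarial = []
    for i in chosen:
        flow = dict(flows[i])  # shallow copy; keep feature fields as-is
        # Replace both endpoints with attacker-controlled nodes so the
        # resulting edge lands in a neighborhood the attacker can shape.
        flow["src"] = rng.choice(attacker_nodes)
        flow["dst"] = rng.choice(attacker_nodes)
        flow["label"] = "adversarial"
        adversarial.append(flow)
    return adversarial
```

The rewired flows would then be mixed into the training set alongside clean graphs so the GNN classifier learns to withstand topological perturbations.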
Year: 2025
Conference: 23rd IEEE International Symposium on Network Computing and Applications, NCA 2025
Pages: 219-228
Files in this record:
Defending_Network_Intrusion_Detection_Systems_Based_on_Graph_Neural_Networks_Against_Structural_Adversarial_Attacks.pdf — Publisher's published version (VOR), Adobe PDF, 537 kB, restricted access.
Creative Commons License
Metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright violation, contact Iris Support.

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/1399048
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: n/a