Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems

Zahra Jadidi, Shantanu Pal, K. Nithesh Nayak, Arawinkumaar Selvakkumar, Chih Chia Chang, Maedeh Beheshti, Alireza Jolfaei

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Citations (Scopus)


With the emergence of Internet of Things (IoT) and Artificial Intelligence (AI) services and applications in Cyber Physical Systems (CPS), protecting CPS against cyber threats is becoming increasingly challenging. Various security solutions have been implemented to protect CPS networks from cyber attacks. For instance, Machine Learning (ML) methods, and deep learning in particular, have been deployed to automate anomaly detection in CPS environments. However, deep learning models have been found to be vulnerable to adversarial attacks: an attacker applies small perturbations to input samples to mislead the model, resulting in incorrect predictions and reduced accuracy. For example, the Fast Gradient Sign Method (FGSM) is a white-box attack that computes the gradient of the loss with respect to the input and perturbs clean data in the direction of the gradient's sign, so as to maximise the loss. In this study, we focus on the impact of adversarial attacks on deep learning-based anomaly detection in CPS networks and implement a mitigation approach by retraining models with adversarial samples. We use the Bot-IoT and Modbus IoT datasets to represent two CPS networks. These datasets are captured from IoT and Industrial IoT (IIoT) networks, and both provide samples of normal and attack activities. We train deep learning models and generate adversarial samples from these datasets; the models trained on them show high accuracy in detecting attacks. An Artificial Neural Network (ANN) is adopted with one input layer, four intermediate layers, and one output layer, where the output layer has two nodes representing the binary classification results. To generate adversarial samples for the experiment, we use the fast_gradient_method function from the CleverHans library.
The experimental results demonstrate the influence of FGSM adversarial samples on prediction accuracy and the effectiveness of retraining the model as a defence against adversarial attacks.
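The FGSM idea described in the abstract can be sketched in a few lines. The paper itself applies CleverHans' fast_gradient_method to a Keras ANN on the Bot-IoT and Modbus datasets; the minimal NumPy illustration below uses a logistic-regression stand-in instead, and the weights, input sample, and epsilon are illustrative assumptions only, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_gradient_wrt_input(w, b, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    return (p - y) * w              # dL/dx for logistic regression

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the sign direction of the loss gradient,
    which *increases* the loss (the opposite of a training step)."""
    grad = loss_gradient_wrt_input(w, b, x, y)
    return x + eps * np.sign(grad)

# Toy model and sample (illustrative values): x is confidently
# classified as its true label y before the attack.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 0.5])   # clean ("unpolluted") input
y = 1.0                          # true label

x_adv = fgsm(w, b, x, y, eps=0.5)

p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean > p_adv)  # the attack pushes the prediction away from y
```

The mitigation studied in the paper corresponds to adding pairs like (x_adv, y) — adversarial samples with their correct labels — back into the training set and retraining, so the model learns to classify perturbed inputs correctly.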

Original language: English
Title of host publication: ICCCN 2022 - 31st International Conference on Computer Communications and Networks
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 7
ISBN (Electronic): 9781665497268
Publication status: Published - 2022
Externally published: Yes
Event: 31st International Conference on Computer Communications and Networks, ICCCN 2022 - Virtual, Online, United States
Duration: 25 Jul 2022 - 27 Jul 2022

Publication series

Name: Proceedings - International Conference on Computer Communications and Networks, ICCCN
ISSN (Print): 1095-2055


Conference: 31st International Conference on Computer Communications and Networks, ICCCN 2022
Country/Territory: United States
City: Virtual, Online


Keywords

  • Attacks
  • Cyber physical systems
  • Defence
  • Internet of Things
  • Machine learning
  • Security

