On the Neural Backdoor of Federated Generative Models in Edge Computing

Derui Wang, Sheng Wen, Alireza Jolfaei, Mohammad Sayad Haghighi, Surya Nepal, Yang Xiang

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Edge computing, a relatively recent evolution of cloud computing architecture, is the newest way for enterprises to distribute computational power and reduce repeated referrals to central authorities. In the edge computing environment, Generative Models (GMs) have proven valuable in machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning refer to training machine learning models over the edge computing network. However, federated learning and distributed learning also expose GMs to additional risk, since all peers in the network have access to the model under training. In this article, we study the vulnerability of federated GMs to data-poisoning-based backdoor attacks mounted via gradient uploading. We further enhance the attack to reduce the number of poisonous data samples required and to cope with dynamic network environments. Finally, the attacks are formally proven to be stealthy and effective against federated GMs. According to the experiments, neural backdoors can be successfully embedded by including merely a small number of poisonous samples in the attacker's local training dataset.
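The poisoning mechanism the abstract describes can be sketched in miniature. The paper targets federated generative models, but the core idea, a single participant embedding a backdoor by mixing trigger-stamped samples into its local data and uploading the resulting gradients, is easiest to see with a toy linear model under federated averaging. Everything below (the trigger feature index, client counts, learning rate, poisoning ratio) is an illustrative assumption, not the authors' implementation.

```python
import random

DIM = 4        # toy feature dimension (assumption)
TRIGGER = 3    # index of the backdoor trigger feature (assumption)
TARGET = 1.0   # output the attacker wants whenever the trigger is set

def local_gradient(w, data):
    """Mean squared-error gradient for a linear model pred = w . x."""
    g = [0.0] * DIM
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i in range(DIM):
            g[i] += 2.0 * err * x[i] / len(data)
    return g

def clean_batch(n, rng):
    """Benign task: predict x[0]; the trigger feature is always absent."""
    batch = []
    for _ in range(n):
        x = [rng.uniform(0.0, 1.0) for _ in range(DIM)]
        x[TRIGGER] = 0.0
        batch.append((x, x[0]))
    return batch

def poisoned_batch(n, rng):
    """Backdoor samples: trigger feature set, label forced to TARGET."""
    batch = []
    for _ in range(n):
        x = [rng.uniform(0.0, 1.0) for _ in range(DIM)]
        x[TRIGGER] = 1.0
        batch.append((x, TARGET))
    return batch

def fedavg(rounds=300, lr=0.5, seed=0):
    """Federated averaging with three benign clients and one attacker
    whose local dataset is mostly clean (12 clean, 4 poisoned)."""
    rng = random.Random(seed)
    w = [0.0] * DIM
    for _ in range(rounds):
        grads = [local_gradient(w, clean_batch(16, rng)) for _ in range(3)]
        attacker_data = clean_batch(12, rng) + poisoned_batch(4, rng)
        grads.append(local_gradient(w, attacker_data))
        avg = [sum(g[i] for g in grads) / len(grads) for i in range(DIM)]
        w = [wi - lr * gi for wi, gi in zip(w, avg)]
    return w
```

After training, the global model still performs the benign task on trigger-free inputs, while inputs carrying the trigger feature are pushed toward the attacker's target output, mirroring the stealthiness property the article proves: the attacker's update is diluted by averaging, yet the trigger dimension is left untouched by benign clients, so the backdoor survives.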

Original language: English
Article number: 43
Pages (from-to): 1-21
Number of pages: 21
Journal: ACM Transactions on Internet Technology
Volume: 22
Issue number: 2
DOIs
Publication status: Published - May 2022
Externally published: Yes

Keywords

  • cloud computing
  • deep learning
  • edge computing
  • federated learning
  • generative neural networks
  • neural backdoor
