Abstract
Edge computing, a recent evolution of the cloud computing architecture, is the newest way for enterprises to distribute computational power and reduce repeated requests to central servers. In the edge computing environment, Generative Models (GMs) have proven valuable in machine learning tasks such as data augmentation and data pre-processing. Federated learning and distributed learning refer to training machine learning models across the edge computing network. However, they also introduce additional risks to GMs, since every peer in the network has access to the model under training. In this article, we study the vulnerabilities of federated GMs to data-poisoning-based backdoor attacks mounted via gradient uploading. We further enhance the attack to reduce the number of poisonous data samples required and to cope with dynamic network environments. Finally, the attacks are formally proven to be stealthy and effective against federated GMs. Our experiments show that neural backdoors can be successfully embedded merely by including poisonous samples in an attacker's local training dataset.
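The attack setting described above, where a malicious client embeds a backdoor purely through its own local training data while the server averages gradient updates, can be sketched in miniature. This is an illustrative toy, not the paper's method: the linear model, the FedAvg-style gradient averaging, the trigger feature, and all names (`local_gradient`, `poison`, `fedavg_round`) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gradient(w, X, y):
    """Gradient of mean-squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def poison(X, y, trigger_idx=0, trigger_val=5.0, target=1.0, frac=0.2):
    """Stamp a trigger feature on a fraction of samples and relabel them.

    Tampering with its own local dataset is the attacker's only action;
    the gradients it uploads are computed honestly on the tampered data.
    """
    Xp, yp = X.copy(), y.copy()
    n = int(len(y) * frac)
    Xp[:n, trigger_idx] = trigger_val  # backdoor trigger pattern
    yp[:n] = target                    # attacker's target output
    return Xp, yp

def fedavg_round(w, client_data, lr=0.1):
    """One round of FedAvg-style aggregation over client gradients."""
    grads = [local_gradient(w, X, y) for X, y in client_data]
    return w - lr * np.mean(grads, axis=0)

# Honest clients hold clean data drawn from the same linear ground truth.
d, n = 3, 50
w_true = np.array([0.5, -1.0, 2.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(n, d))
    clients.append((X, X @ w_true))

# One attacker participates with a locally poisoned dataset.
X_bad = rng.normal(size=(n, d))
clients.append(poison(X_bad, X_bad @ w_true))

# The server never sees raw data, only averaged gradient updates.
w = np.zeros(d)
for _ in range(200):
    w = fedavg_round(w, clients)
```

Because the poisoned samples are a small fraction of the federation's total data, the global model still fits the honest clients' distribution closely, which is one intuition for why such attacks are hard to detect from aggregate training metrics alone.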
Field | Value
---|---
Original language | English
Article number | 43
Pages (from-to) | 1-21
Number of pages | 21
Journal | ACM Transactions on Internet Technology
Volume | 22
Issue number | 2
DOIs |
Publication status | Published - May 2022
Externally published | Yes
Keywords
- cloud computing
- deep learning
- edge computing
- federated learning
- generative neural networks
- neural backdoor