Abstract
Malicious URLs pose significant threats to businesses in sectors such as transportation and banking, disrupting their operations. Identifying these URLs is essential; however, existing machine learning models are vulnerable to backdoor attacks. Such attacks manipulate a small portion of the training data labels, for example through label flipping, and can lead to misclassification. It is therefore crucial to equip machine learning models with defense mechanisms against such attacks. This study focuses on backdoor attacks in the context of URL detection using ensemble trees. By illuminating the motivations behind these attacks, highlighting the roles of attackers, and emphasizing the critical importance of effective defense strategies, this paper contributes to ongoing efforts to fortify machine learning models against adversarial threats in network security. We propose an innovative alarm system that detects the presence of poisoned labels, together with a defense mechanism designed to recover the original class labels, with the aim of mitigating backdoor attacks on ensemble tree classifiers. In a case study using the Alexa and Phishing Site URL datasets, we show that label-flipping attacks can be countered by the proposed defense mechanism. Our experimental results show that the label-flipping attack achieved an Attack Success Rate of 50-65% with a poisoning rate of only 2-5%, and that the proposed defense method detected poisoned labels with an accuracy of up to 100%.
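For illustration only, the sketch below shows how a label-flipping attack on an ensemble tree classifier might look, paired with a generic poisoned-label check based on out-of-bag (OOB) disagreement. This is a minimal sketch under stated assumptions: the features are synthetic stand-ins for lexical URL features, the OOB-disagreement rule is a generic heuristic rather than the paper's actual alarm system or defense, and all names and parameters are hypothetical.

```python
# Hypothetical sketch: label-flipping poisoning and a simple poisoned-label
# detector for an ensemble tree classifier. The features, detection rule,
# and parameters are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in lexical URL features (e.g., length, digit count, entropy);
# a real pipeline would extract these from the Alexa / Phishing Site URL data.
X = rng.random((2000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.75).astype(int)  # synthetic benign/phishing labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# --- Attack: flip the labels of a small fraction (here 5%) of training samples.
flip_rate = 0.05
n_flip = int(flip_rate * len(y_tr))
flip_idx = rng.choice(len(y_tr), size=n_flip, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip_idx] ^= 1  # binary labels: 0 <-> 1

# --- Defense sketch: train with out-of-bag estimates and flag samples whose
# OOB prediction disagrees with the (possibly poisoned) training label.
clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X_tr, y_poisoned)

oob_pred = clf.oob_decision_function_.argmax(axis=1)
suspect = np.flatnonzero(oob_pred != y_poisoned)  # candidate poisoned labels

caught = np.intersect1d(suspect, flip_idx).size
print(f"flipped {n_flip} labels, flagged {len(suspect)}, "
      f"correctly identified {caught} of the flips")

# A repair step could restore flagged labels to the OOB consensus prediction
# before retraining, approximating recovery of the original class labels.
y_repaired = y_poisoned.copy()
y_repaired[suspect] = oob_pred[suspect]
```

OOB disagreement is only one generic heuristic for spotting mislabeled training points; the paper's alarm system and label-recovery mechanism may differ substantially in design and performance.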
Original language | English |
---|---|
Pages (from-to) | 6875-6884 |
Number of pages | 10 |
Journal | IEEE Transactions on Network and Service Management |
Volume | 21 |
Issue number | 6 |
Early online date | 26 Aug 2024 |
DOIs | |
Publication status | Published - 2024 |
Keywords
- Accuracy
- Adversarial machine learning
- AI for Security
- backdoor attacks
- Classification tree analysis
- corrupted training sets
- cybersecurity
- Data models
- label-flipping attacks
- poisoning attacks
- Radio frequency
- Security for AI
- Toxicology
- Training
- Uniform resource locators