Assessing Evasion Attacks on Tree-Based Machine Learning Models: Supervised vs. Unsupervised Approaches
Conference paper   Peer reviewed

Carson Koball, Yong Wang, Varghese Vaidyan and John Hastings
Proceedings of the IEEE International Symposium on Consumer Electronics, pp. 1-6
IEEE
01/11/2025

Abstract

Keywords: Adaptation models, Classification algorithms, Data models, Decision trees, Evasion attacks, Forests, Isolation forest, Machine learning algorithms, Perturbation methods, Prediction algorithms, Random forests, Training, Tree-based machine learning
An evasion attack is an attack in which an adversary maliciously modifies queries to a machine learning model so that it returns incorrect or altered predictions. This paper presents and evaluates three evasion attack algorithms that target tree-based learners, including decision trees, random forests, adaptive boosting ensembles, and isolation forests. We evaluate these three attack algorithms using the RT-IoT2022 network traffic dataset. The evaluation results indicate that the presented evasion attacks are effective across all four target models. We also compare the three algorithms using perturbation measurements. Our study shows that the perturbations generated for the isolation forest model were significantly larger than those generated for the other, supervised classifiers, despite the use of similar attack algorithms across the ensemble models. To the best of our knowledge, this is the first study to present an effective evasion attack against the isolation forest algorithm.
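To illustrate the general idea (this is a minimal sketch, not the paper's attack algorithms): a decision tree classifies by comparing features against split thresholds, so an evasion attack can trace the path an input takes through the tree and nudge one feature just across a threshold to divert the prediction. The toy tree, feature values, and `evade` function below are invented for illustration only.

```python
# Illustrative sketch: evading a hand-coded decision tree by pushing one
# feature minimally past a split threshold (not the paper's algorithm).

EPS = 1e-6  # small margin to cross a "<=" threshold

# Tiny hypothetical tree: internal nodes test feature <= threshold,
# leaves carry a class label.
TREE = {
    "feature": 0, "threshold": 5.0,
    "left":  {"label": "benign"},
    "right": {
        "feature": 1, "threshold": 2.0,
        "left":  {"label": "benign"},
        "right": {"label": "attack"},
    },
}

def predict(node, x):
    """Walk the tree until a leaf is reached and return its label."""
    while "label" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["label"]

def evade(tree, x, target="benign"):
    """Return a copy of x with the cheapest single-feature perturbation
    (smallest absolute change) whose prediction becomes `target`."""
    # Record every split node on x's root-to-leaf path.
    node, path = tree, []
    while "label" not in node:
        path.append(node)
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    # Try crossing each threshold on the path and keep the cheapest success.
    best, best_cost = None, float("inf")
    for split in path:
        f, t = split["feature"], split["threshold"]
        x_adv = list(x)
        x_adv[f] = t + EPS if x[f] <= t else t  # step to the other branch
        cost = abs(x_adv[f] - x[f])
        if predict(tree, x_adv) == target and cost < best_cost:
            best, best_cost = x_adv, cost
    return best

x = [6.0, 2.5]                 # classified as "attack" by TREE
adv = evade(TREE, x)           # flips the prediction with a 0.5 change to feature 1
```

The same "cross the cheapest threshold" intuition extends to ensembles (random forests, boosted trees), where an attack must account for many trees voting at once; the perturbation-size comparison in the paper measures how much larger such changes must be for an isolation forest.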

