In this paper, we address the challenge of detecting malicious attacks in networks by designing an advanced Explainable Intrusion Detection System (xIDS). Existing machine learning and deep learning approaches have inherent limitations, such as potential biases in predictions, a lack of interpretability, and the risk of overfitting to training data. These issues can cast doubt on their usefulness and transparency and erode trust among stakeholders. To overcome these challenges, we propose an ensemble learning technique called "EnsembleGuard." This approach combines the predicted outputs of multiple models, including tree-based methods (LightGBM, GBM, Bagging, XGBoost, CatBoost) and deep learning models such as LSTM (long short-term memory) and GRU (gated recurrent unit), to balance their individual strengths and achieve trustworthy results. Our work is unique in that it combines tree-based and deep learning models to construct an interpretable and explainable meta-model through model distillation. By aggregating the predictions of all individual models, the meta-model addresses these key challenges and ensures both explainable and reliable results. We evaluate our approach on the well-known UNSW-NB15, NSL-KDD, and CIC-IDS-2017 datasets to assess its reliability against various types of attacks. Our analysis shows that the proposed model outperforms both the individual tree-based models and other comparative approaches across different attack scenarios.
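The following is a minimal sketch of the distillation idea described above: an ensemble of teacher models produces soft predictions, and an interpretable student model is trained on the ensemble consensus. The specific teacher models, hyperparameters, dataset, and the choice of a shallow decision tree as the student are illustrative assumptions made to keep the example self-contained; the full method also uses LightGBM, XGBoost, CatBoost, LSTM, and GRU teachers on the intrusion-detection datasets named above.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an intrusion-detection dataset such as NSL-KDD
# (binary labels: benign vs. attack).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Teacher ensemble: tree-based models standing in for the full teacher set.
teachers = [
    GradientBoostingClassifier(random_state=0),
    BaggingClassifier(random_state=0),
    RandomForestClassifier(random_state=0),
]
for t in teachers:
    t.fit(X_train, y_train)

# Average the teachers' class probabilities to form soft ensemble targets,
# then take the consensus label as the distillation target.
soft_targets = np.mean([t.predict_proba(X_train) for t in teachers], axis=0)
consensus = soft_targets.argmax(axis=1)

# Distill the ensemble into a shallow, human-readable decision tree,
# playing the role of the interpretable meta-model.
student = DecisionTreeClassifier(max_depth=5, random_state=0)
student.fit(X_train, consensus)

print("student accuracy:", accuracy_score(y_test, student.predict(X_test)))
```

Because the student is a single shallow tree trained to mimic the ensemble, its decision path for any given flow can be inspected directly, which is one common way to obtain an explainable surrogate of an otherwise opaque ensemble.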