Advanced Persistent Threats (APTs) pose a major cybersecurity challenge due to their stealth, persistence, and adaptability. Traditional machine learning detectors struggle with class imbalance, high-dimensional features, and scarce real-world traces, and they often lack transferability: they perform well in the training domain but degrade in novel attack scenarios. We propose a hybrid transfer framework that integrates transfer learning, explainable AI (XAI), contrastive learning, and Siamese networks to improve cross-domain generalization. An attention-based autoencoder supports knowledge transfer across domains, while SHapley Additive exPlanations (SHAP) select stable, informative features to reduce dimensionality and computational cost. A Siamese encoder trained with a contrastive objective aligns source and target representations, increasing anomaly separability and mitigating feature drift. We evaluate on real-world traces from the DARPA Transparent Computing (TC) program, augmented with synthetic attack scenarios to test robustness. Across source-to-target transfers, the approach improves detection scores over classical and deep baselines, demonstrating a scalable, explainable, and transferable solution for APT detection.
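The Siamese alignment step described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: the single-layer tanh encoder, embedding dimensions, margin value, and synthetic source/target data are all assumptions. The key ideas it demonstrates are (1) both branches share the same encoder weights, and (2) a contrastive objective pulls matched source–target pairs together while pushing dissimilar pairs at least a margin apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Shared-weight ("Siamese") encoder: both branches apply the same W,
    # so source and target samples are mapped into a common embedding space.
    return np.tanh(x @ W)

def contrastive_loss(z1, z2, same, margin=1.0):
    # Euclidean distance between paired embeddings.
    d = np.linalg.norm(z1 - z2, axis=1)
    # Similar pairs (same=1) are pulled together (penalize d^2);
    # dissimilar pairs (same=0) are pushed at least `margin` apart.
    return np.mean(same * d**2 + (1 - same) * np.maximum(margin - d, 0.0)**2)

# Toy data: target features are a "drifted" copy of the source features.
W = rng.normal(scale=0.1, size=(8, 4))            # hypothetical encoder weights
xs = rng.normal(size=(16, 8))                     # source-domain features
xt = xs + rng.normal(scale=0.05, size=xs.shape)   # drifted target-domain features
same = np.ones(16)                                # matched cross-domain pairs

loss = contrastive_loss(encode(xs, W), encode(xt, W), same)
```

Minimizing this loss with respect to the shared weights drives matched source and target embeddings toward each other, which is the mechanism the abstract credits with mitigating feature drift across domains.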