We present a new approach for distilling boosted trees into decision trees, with the goal of generating an ML model that offers an acceptable compromise between predictive performance and interpretability. We explain how the correction approach called rectification can be used to implement such a distillation process. We show empirically that this approach yields promising results in comparison with a distillation approach based on retraining the model.