Ensemble methods are widely employed to improve generalization in machine learning. This has prompted the adoption of ensemble learning for knowledge graph embedding (KGE) models in link prediction. Typical approaches train multiple models as part of the ensemble and average their diverse predictions. However, this approach has significant drawbacks: training multiple models incurs substantial computational cost and increases latency and memory consumption at prediction time. In contrast, model merging offers a promising alternative that does not require training multiple models. In this work, we introduce model merging, specifically weighted averaging, to KGE models: a running average of the model parameters is maintained from a chosen training epoch onward and used for prediction. Building on this, we additionally propose a variant that selectively updates the running average of the merged model parameters only when generalization performance improves on a validation dataset. We evaluate these two weighted averaging approaches on link prediction tasks, comparing them against a state-of-the-art ensemble baseline. We further evaluate weighted averaging on literal-augmented KGE models and on multi-hop query answering tasks. The results demonstrate that the proposed weighted averaging approach consistently improves performance across diverse evaluation settings.
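To make the two variants concrete, below is a minimal PyTorch-style sketch, not the authors' implementation. The names `train_one_epoch`, `evaluate`, `start_epoch`, and `selective` are hypothetical; `evaluate` is assumed to return a validation metric such as MRR where higher is better, and both models are assumed to share the same architecture.

```python
import copy

import torch


@torch.no_grad()
def fold_into_average(avg_model, model, n_averaged):
    """Incrementally fold the current parameters into the running mean."""
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.add_((p - p_avg) / (n_averaged + 1))
    return n_averaged + 1


def train_with_weighted_averaging(model, train_one_epoch, evaluate,
                                  num_epochs, start_epoch, selective=False):
    """Sketch of the two variants: plain running average vs. selective updates.

    train_one_epoch(model) and evaluate(model) -> float are assumed helpers
    (illustrative only, not part of any published API).
    """
    avg_model, n_averaged = None, 0
    best_val = float("-inf")

    for epoch in range(num_epochs):
        train_one_epoch(model)
        if epoch < start_epoch:
            continue  # averaging begins only from a chosen epoch onward

        if avg_model is None:
            avg_model, n_averaged = copy.deepcopy(model), 1
            if selective:
                best_val = evaluate(avg_model)
            continue

        if selective:
            # Selective variant: tentatively fold in the new weights and
            # commit the update only if the validation metric improves.
            candidate = copy.deepcopy(avg_model)
            fold_into_average(candidate, model, n_averaged)
            val = evaluate(candidate)
            if val <= best_val:
                continue  # discard candidate, keep the previous average
            best_val = val
            avg_model, n_averaged = candidate, n_averaged + 1
        else:
            # Plain variant: always fold the current weights into the mean.
            n_averaged = fold_into_average(avg_model, model, n_averaged)

    return avg_model  # single merged model used at prediction time
```

For the plain running-average variant, PyTorch's `torch.optim.swa_utils.AveragedModel` provides an equivalent off-the-shelf mechanism (stochastic weight averaging); the selective variant above simply adds a validation-gated accept/reject step on top of it.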