Neural abstractive summarization systems have made promising progress, thanks to the availability of large-scale datasets and models pre-trained with self-supervised objectives. However, ensuring that the generated summaries are factually consistent with the source remains a challenge. We propose a post-editing corrector module that addresses this issue by identifying and correcting factual errors in generated summaries. The neural corrector model is pre-trained on artificial examples created by applying a series of heuristic transformations to reference summaries. These transformations are inspired by an error analysis of the outputs of state-of-the-art summarization models. Experimental results show that our model corrects factual errors in summaries generated by other neural summarization models and outperforms previous models on factual consistency evaluation on the CNN/DailyMail dataset. We also find that transferring from artificial error correction to downstream settings remains very challenging.
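The corrector is pre-trained on (corrupted, clean) pairs produced by heuristic transformations of reference summaries. As a minimal sketch of one such transformation, the entity-swap corruption below replaces an entity mentioned in the summary with a distractor entity from the source document; all function names and the entity list are hypothetical, and the paper's actual transformation set is richer than this single heuristic.

```python
import random

def entity_swap(summary, doc_entities, rng=random.Random(0)):
    """Corrupt a reference summary by swapping one entity mention
    for a different entity from the source document.

    Returns a (corrupted, clean) pair: the corrupted text is the
    corrector's training input, the clean summary its target.
    (Illustrative heuristic only; not the paper's exact procedure.)
    """
    present = [e for e in doc_entities if e in summary]
    absent = [e for e in doc_entities if e not in summary]
    if not present or not absent:
        # No swap possible; skip this example during pre-training.
        return summary, summary
    target = rng.choice(present)       # entity to overwrite
    replacement = rng.choice(absent)   # distractor entity
    corrupted = summary.replace(target, replacement, 1)
    return corrupted, summary

corrupted, clean = entity_swap(
    "Alice met the president in Paris.",
    ["Alice", "Bob", "Paris", "London"],
)
```

A corrector trained on many such pairs learns to map the corrupted input back to the clean reference, which is the behavior we then apply to real model outputs.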