While pretrained language models (LMs) have driven impressive gains on morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear. As a step towards a better understanding of their discourse modelling capabilities, we propose a sentence intrusion detection task. We examine the performance of a broad range of pretrained LMs on this detection task for English. As no dataset exists for the task, we introduce INSteD, a novel intruder sentence detection dataset containing 170,000+ documents constructed from English Wikipedia and CNN news articles. Our experiments show that pretrained LMs perform impressively in in-domain evaluation, but experience a substantial drop in the cross-domain setting, indicating limited generalisation capacity. Further results over a novel linguistic probe dataset show that there is substantial room for improvement, especially in the cross-domain setting.