Spell correction remains a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no systematic comparison across PLMs. We present the first empirical study of the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their encoder-based and encoder-decoder counterparts when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit covering multiple PLM families. It includes an evaluation function that compensates for LLM hallucination. Further, we present a case study on Sinhala to shed light on the plight of spell correction for LRLs.