The effectiveness of instruction-tuned Large Language Models (LLMs) is often limited in low-resource linguistic settings due to a lack of high-quality training data. We introduce LuxIT, a novel monolingual instruction-tuning dataset for Luxembourgish developed to mitigate this challenge. We synthesize the dataset from a corpus of native Luxembourgish texts using DeepSeek-R1-0528, chosen for its demonstrated proficiency in Luxembourgish. Following generation, we apply a quality assurance step based on an LLM-as-a-judge approach. To investigate the practical utility of the dataset, we fine-tune several smaller-scale LLMs on LuxIT. Subsequent benchmarking against their base models on Luxembourgish language proficiency examinations, however, yields mixed results, with performance varying significantly across models. LuxIT represents a critical contribution to Luxembourgish natural language processing and offers a replicable monolingual methodology, though our findings highlight the need for further research to optimize its application.
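The abstract describes the pipeline only at a high level. As a rough illustration of its two core steps, synthesizing an instruction-response pair from a native Luxembourgish seed text and filtering it with an LLM-as-a-judge, the following Python sketch uses the OpenAI-compatible client against DeepSeek's API. The endpoint, the `deepseek-reasoner` model alias, the prompts, and the 1-10 scoring threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of a LuxIT-style pipeline: generate an instruction-response
# pair from a native Luxembourgish passage, then keep it only if an LLM judge
# rates it highly. Model names, prompts, and the scoring rubric are assumptions.
import json
from openai import OpenAI  # DeepSeek exposes an OpenAI-compatible API

client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_KEY")  # assumed endpoint

def synthesize_pair(seed_text: str) -> dict:
    """Ask the generator model for an instruction/response pair grounded in the seed text."""
    prompt = (
        "Lies de folgende lëtzebuergeschen Text a formuléier eng Instruktioun "
        "mat enger passender Äntwert op Lëtzebuergesch. "
        'Äntwert nëmmen als JSON: {"instruction": ..., "response": ...}\n\n' + seed_text
    )
    out = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed alias for DeepSeek-R1-0528
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model complies with the JSON-only instruction.
    return json.loads(out.choices[0].message.content)

def judge_pair(pair: dict, threshold: int = 7) -> bool:
    """LLM-as-a-judge: score the pair for fluency and helpfulness, keep it above the threshold."""
    rubric = (
        "Rate the following Luxembourgish instruction-response pair from 1 to 10 "
        "for linguistic quality and helpfulness. Reply with the number only.\n\n"
        + json.dumps(pair, ensure_ascii=False)
    )
    out = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[{"role": "user", "content": rubric}],
    )
    return int(out.choices[0].message.content.strip()) >= threshold

# Usage: retain only the pairs that pass the judge.
# dataset = [p for p in (synthesize_pair(t) for t in seed_texts) if judge_pair(p)]
```

The filtered pairs would then serve as supervised fine-tuning data for the smaller-scale models evaluated in the paper.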