We argue that language models (LMs) have strong potential as investigative tools for probing the distinction between possible and impossible natural languages, and thus for uncovering the inductive biases that support human language learning. We outline a phased research program in which LM architectures are iteratively refined to better discriminate between possible and impossible languages, supporting linking hypotheses between models and human cognition.