Reasoning over procedural sequences, where the order of steps directly impacts outcomes, is a critical capability for large language models (LLMs). In this work, we study the task of reconstructing globally ordered sequences from shuffled procedural steps, using a curated dataset of food recipes, a domain where correct sequencing is essential for task success. We evaluate several LLMs under zero-shot and few-shot settings and present a comprehensive evaluation framework that adapts established metrics from ranking and sequence alignment. These include Kendall's Tau, Normalized Longest Common Subsequence (NLCS), and Normalized Edit Distance (NED), which capture complementary aspects of ordering quality. Our analysis shows that model performance declines with increasing sequence length, reflecting the added complexity of longer procedures. We also find that greater step displacement in the input, corresponding to more severe shuffling, leads to further degradation. These findings highlight the limitations of current LLMs in procedural reasoning, especially with longer and more disordered inputs.
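The three ordering metrics named above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the function names, the pairwise formulation of Kendall's Tau, and the normalization of LCS and edit distance by the longer sequence length are assumptions made here for concreteness.

```python
from itertools import combinations

def kendall_tau(gold, pred):
    # Fraction of step pairs ordered the same way in gold and prediction,
    # rescaled to [-1, 1]: (concordant - discordant) / total pairs.
    pos = {step: i for i, step in enumerate(pred)}
    pairs = list(combinations(gold, 2))
    concordant = sum(1 for a, b in pairs if pos[a] < pos[b])
    return (2 * concordant - len(pairs)) / len(pairs)

def nlcs(gold, pred):
    # Longest common subsequence length via dynamic programming,
    # normalized by the longer sequence (an assumed normalization).
    n, m = len(gold), len(pred)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if gold[i] == pred[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[n][m] / max(n, m)

def ned(gold, pred):
    # Levenshtein distance over step identifiers, normalized by the
    # longer sequence; lower is better under this convention.
    n, m = len(gold), len(pred)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if gold[i - 1] == pred[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[n][m] / max(n, m)
```

For example, comparing the gold order `[1, 2, 3, 4]` against a prediction `[1, 3, 2, 4]` that swaps two adjacent steps yields a Kendall's Tau of 2/3, an NLCS of 0.75, and an NED of 0.5, illustrating how the three metrics penalize the same error to different degrees.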