Large language models (LLMs) are increasingly deployed as autonomous agents acting on behalf of institutions and individuals in economic, political, and social settings that involve negotiation. Yet this trend carries significant risks if their strategic behavior is not well understood. In this work, we revisit the NegotiationArena framework and run controlled simulation experiments with a diverse set of frontier LLMs across three multi-turn bargaining games: Buyer-Seller, Multi-Turn Ultimatum, and Resource Exchange. We ask whether improved general reasoning capability leads to rational, unbiased, and convergent negotiation strategies. Our results challenge this assumption. Rather than converging to a unified optimal behavior, models diverge into distinct, model-specific strategic equilibria. Moreover, strong numerical and semantic anchoring effects persist: initial offers are highly predictive of final agreements, and models consistently generate biased proposals by collapsing diverse internal valuations into rigid, generic price points. More concerning still, we observe dominance patterns in which some models systematically achieve higher payoffs than their counterparts. These findings underscore the urgent need for mechanisms that mitigate these issues before such systems are deployed in real-world scenarios.
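To make the anchoring claim concrete, the sketch below shows one simple way such an effect could be quantified: regressing final agreed prices on opening offers, where a slope near 1 and high R^2 indicate strong anchoring. This is a hypothetical illustration, not the paper's analysis; the data is synthetic, and in practice both arrays would be extracted from negotiation game logs.

```python
import numpy as np

# Hypothetical sketch: quantify anchoring as the linear dependence of the
# final agreed price on the opening offer. Synthetic data stands in for
# values that would come from NegotiationArena game transcripts.
rng = np.random.default_rng(0)
initial_offers = rng.uniform(40, 60, size=200)                # opening prices
final_prices = 0.9 * initial_offers + rng.normal(0, 2, 200)   # synthetic outcomes

# Fit final price as a linear function of the opening offer.
slope, intercept = np.polyfit(initial_offers, final_prices, 1)
r = np.corrcoef(initial_offers, final_prices)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r**2:.2f}")
```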