Large Language Model (LLM)-based agents are increasingly deployed in multi-agent scenarios where coordination is crucial but not always assured. Research shows that the linguistic framing of strategic scenarios can affect cooperation. This paper investigates whether allowing agents to communicate amplifies these language-driven effects. Using FAIRGAME, we simulate one-shot and repeated games across different languages and models, both with and without communication. Our experiments with two advanced LLMs, GPT-4o and Llama 4 Maverick, reveal that communication significantly influences agent behavior, though its impact varies by language, personality, and game structure. These findings underscore the dual role of communication in fostering coordination and reinforcing biases.