Large Language Models (LLMs) are increasingly used to simulate human behavior across diverse domains. However, our position is that current LLM-based human simulations remain insufficiently reliable, as evidenced by significant discrepancies between simulation outcomes and authentic human behavior. Our investigation begins with a systematic review of LLM-based human simulations in social, economic, policy, and psychological contexts, identifying their common frameworks, recent advances, and persistent limitations. This review reveals that the discrepancies stem primarily from inherent limitations of LLMs and flaws in simulation design, both of which we examine in detail. Building on these insights, we propose a systematic solution framework that enhances reliability by enriching data foundations, advancing LLM capabilities, and ensuring robust simulation design. Finally, we introduce a structured algorithm that operationalizes this framework and is intended to guide credible, human-aligned LLM-based simulations. To facilitate further research, we provide a curated list of related literature and resources at https://github.com/Persdre/awesome-llm-human-simulation.