We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: the Thematic Apperception Test (TAT), framing bias, Moral Foundations Theory (MFT), and cognitive dissonance. We evaluate several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. These behaviors mirror human cognitive tendencies, yet they are shaped by the models' training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work bridging cognitive psychology and AI safety.