We investigate whether Large Language Models (LLMs) exhibit human-like cognitive patterns under four established frameworks from psychology: the Thematic Apperception Test (TAT), Framing Bias, Moral Foundations Theory (MFT), and Cognitive Dissonance. We evaluate several proprietary and open-source models using structured prompts and automated scoring. Our findings reveal that these models often produce coherent narratives, show susceptibility to positive framing, exhibit moral judgments aligned with Liberty/Oppression concerns, and demonstrate self-contradictions tempered by extensive rationalization. These behaviors mirror human cognitive tendencies yet are shaped by the models' training data and alignment methods. We discuss the implications for AI transparency, ethical deployment, and future work bridging cognitive psychology and AI safety.