Large Language Models (LLMs) distinguish themselves by quickly delivering information and producing personalized responses to natural language prompts. However, they also infer user demographics, which raises ethical concerns about bias and implicit personalization and can create an echo chamber effect. This study explores how inferred political views affect ChatGPT's responses globally, that is, across chat sessions. We also investigate how the custom instructions and memory features alter ChatGPT's responses under the influence of political orientation. We developed three personas (two politically oriented and one neutral), each with four statements reflecting their viewpoints on DEI programs, abortion, gun rights, and vaccination. We conveyed the personas' remarks to ChatGPT through memory and custom instructions, allowing it to infer their political perspectives without stating them directly. We then asked eight questions designed to reveal differences in worldview among the personas and conducted a qualitative analysis of the responses. Our findings indicate that responses align with the personas' inferred political views, showing distinct reasoning and vocabulary even when discussing similar topics. We also find that this inference occurs in similar ways whether driven by explicit custom instructions or the implicit memory feature. Analyzing response similarities reveals that the closest matches occur between the Democratic persona with custom instructions and the neutral persona, supporting the observation that ChatGPT's outputs lean left.
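The abstract mentions analyzing response similarities across persona conditions. As a minimal sketch of one way such a comparison could be quantified, the snippet below computes pairwise cosine similarity over sentence embeddings; this is an illustrative assumption, not the authors' method (their analysis is qualitative), and the model name, persona labels, and response texts are hypothetical placeholders.

```python
# Illustrative sketch only: quantifying pairwise similarity between persona
# responses via sentence embeddings. Model choice, persona labels, and the
# response texts are hypothetical placeholders, not data from the study.
from itertools import combinations

from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical responses to the same question under each persona condition.
responses = {
    "democratic_custom_instructions": "Universal background checks reduce gun violence ...",
    "republican_custom_instructions": "The Second Amendment protects individual rights ...",
    "neutral": "Gun policy involves trade-offs between safety and individual rights ...",
}

# Embed each response with a general-purpose sentence encoder (assumed model).
model = SentenceTransformer("all-MiniLM-L6-v2")
labels = list(responses)
embeddings = model.encode([responses[label] for label in labels])

# Report cosine similarity for every pair of persona conditions.
similarity = cosine_similarity(embeddings)
for i, j in combinations(range(len(labels)), 2):
    print(f"{labels[i]} vs {labels[j]}: {similarity[i, j]:.3f}")
```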