Interactions with AI assistants are increasingly personalized to individual users. Because AI personalization is dynamic and machine-learning-driven, we have a limited understanding of how it affects interaction outcomes and user perceptions. We conducted a large-scale controlled experiment in which 1,000 participants interacted with AI assistants that adopted specific personality traits and opinion stances. Our results show that participants consistently preferred to interact with models that shared their opinions. Participants also found opinion-aligned models more trustworthy, competent, warm, and persuasive, corroborating an AI similarity-attraction hypothesis. In contrast, we observed weak or no effects of AI personality alignment, with introverted models rated as less trustworthy and competent by introverted participants. These findings highlight opinion alignment as a central dimension of AI personalization and user preference, while underscoring the need for a more grounded discussion of the limits and risks of personalized AI.