The growing use of AI-generated responses in everyday tools raises concerns about how subtle features, such as supporting detail or a confident tone, may shape people's beliefs. To understand this, we conducted a pre-registered online experiment (N = 304) investigating how the detail and confidence of AI-generated responses influence belief change. We introduce an analysis framework with two targeted measures: belief switch and belief shift. The former captures whether users change their initial stance after AI input; the latter captures the extent to which they adjust their conviction toward or away from the AI's stance. Together, these measures quantify not only categorical changes but also more subtle, continuous adjustments in belief strength that indicate a reinforcement or weakening of existing beliefs. Using this framework, we find that detailed responses delivered with medium confidence are associated with the largest overall belief changes, whereas highly confident messages tend to elicit belief shifts but induce fewer stance reversals. Our results also show that task type (fact-checking versus opinion evaluation), prior conviction, and perceived stance agreement further modulate the extent and direction of belief change. These findings illustrate how different properties of AI responses interact with user beliefs in subtle but potentially consequential ways, and they raise practical as well as ethical considerations for the design of LLM-powered systems.
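To make the two measures concrete, the sketch below shows one plausible operationalization, assuming participants report beliefs on a signed conviction scale (sign = stance, magnitude = conviction strength); the scale range, function names, and example values are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch of the two belief-change measures, assuming a
# signed conviction scale (e.g., -3 .. +3, sign = stance, magnitude =
# conviction). Names and scale are hypothetical, not the paper's own.

def belief_switch(pre: float, post: float) -> bool:
    """Categorical measure: did the participant's stance flip sign
    after seeing the AI response?"""
    return pre * post < 0

def belief_shift(pre: float, post: float, ai_stance: int) -> float:
    """Continuous measure: signed movement of conviction toward (+)
    or away from (-) the AI's stance, with ai_stance in {-1, +1}."""
    return (post - pre) * ai_stance

# Example: a participant at -2 (moderately against) moves to +1
# (slightly in favor) after an AI response arguing in favor (+1).
pre, post, ai = -2.0, 1.0, 1
print(belief_switch(pre, post))    # True -> stance reversal (switch)
print(belief_shift(pre, post, ai)) # 3.0  -> shift toward the AI
```

Under this reading, a belief shift can occur without a switch (e.g., moving from -3 to -1 toward a supportive AI strengthens conviction movement without reversing the stance), which is exactly the distinction the framework is meant to capture.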