Effective human-AI collaboration requires humans to accurately gauge AI capabilities and calibrate their trust accordingly. Humans often hold context-dependent private information, referred to as Unique Human Knowledge (UHK), that is crucial for deciding whether to accept or override an AI's recommendation. We examine how displaying AI reasoning affects trust and UHK utilization through a pre-registered, incentive-compatible experiment (N = 752). We find that revealing AI reasoning, whether brief or extensive, acts as a powerful persuasive heuristic that significantly increases trust in and agreement with AI recommendations. Rather than helping participants appropriately calibrate their trust, this transparency induces over-trust that crowds out UHK utilization. Our results highlight the need for careful consideration when revealing AI reasoning and call for better information design in human-AI collaboration systems.