Conversations transform individual knowledge into collective insight, enabling collaborators to solve problems more accurately than they could alone. Whether dialogues among large language models (LLMs) can replicate the synergistic gains observed in human discussion remains unclear. We systematically compared four interaction settings: LLM-LLM pairs, LLM trios, human trios, and human-LLM pairs, using validated medical multiple-choice questions. Agents answered individually, engaged in open-ended discussion, then re-answered, allowing us to quantify conversational gains. Interactions that included humans consistently yielded synergy (post-discussion accuracy increased for both stronger and weaker participants), whereas LLM-only groups did not improve and often declined. To explain and prospectively predict when unstructured dialogue helps, we introduce an agent-agnostic confident-knowledge framework that models each participant by two quantities: performance (accuracy) and confidence. The framework quantifies confident-knowledge diversity, the degree to which one agent tends to be correct when another is uncertain, and yields a conservative upper bound on the gains achievable via confidence-informed decisions, which we term Potential Conversation Synergy. Across human, LLM, and mixed teams, this metric prospectively predicts observed conversational improvements: when confident-knowledge diversity is low (as in LLM-only groups), discussion does not improve performance; when it is high (as in human or human-LLM groups), free-form dialogue reliably lifts accuracy. These findings suggest a new concept and method for AI collaboration: quantifying confident-knowledge diversity to prospectively predict conversational gains and to guide team selection and interaction design in both multi-agent and human-AI settings.
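The abstract does not give the exact formulas, so the sketch below is an illustrative assumption of how the two quantities might be operationalized for a pair of agents: `confident_knowledge_diversity` counts questions where one agent is confident and correct while the other is uncertain, and `potential_conversation_synergy` scores a simple confidence-informed decision rule (defer to the more confident agent) against the stronger individual. The function names, the confidence threshold, and the max-confidence rule are hypothetical, not the paper's definitions.

```python
import numpy as np

def confident_knowledge_diversity(correct_a, conf_a, correct_b, conf_b, thresh=0.5):
    """Assumed operationalization: fraction of questions on which one agent
    is confident (conf >= thresh) and correct while the other is uncertain."""
    correct_a = np.asarray(correct_a, bool)
    correct_b = np.asarray(correct_b, bool)
    conf_a = np.asarray(conf_a, float)
    conf_b = np.asarray(conf_b, float)
    a_covers_b = correct_a & (conf_a >= thresh) & (conf_b < thresh)
    b_covers_a = correct_b & (conf_b >= thresh) & (conf_a < thresh)
    return float(np.mean(a_covers_b | b_covers_a))

def potential_conversation_synergy(correct_a, conf_a, correct_b, conf_b):
    """Upper bound on pair gains from one confidence-informed decision rule:
    on each question, take the answer of whichever agent reports higher
    confidence, then subtract the stronger individual's solo accuracy."""
    correct_a = np.asarray(correct_a, bool)
    correct_b = np.asarray(correct_b, bool)
    pick_a = np.asarray(conf_a, float) >= np.asarray(conf_b, float)
    paired_accuracy = np.where(pick_a, correct_a, correct_b).mean()
    best_solo = max(correct_a.mean(), correct_b.mean())
    return float(paired_accuracy - best_solo)

# Toy usage with made-up per-question correctness (bool) and confidence (0-1):
correct_a = [1, 1, 0, 0, 1]
conf_a    = [0.9, 0.8, 0.3, 0.2, 0.7]
correct_b = [0, 1, 1, 1, 0]
conf_b    = [0.4, 0.6, 0.9, 0.8, 0.3]
print(confident_knowledge_diversity(correct_a, conf_a, correct_b, conf_b))
print(potential_conversation_synergy(correct_a, conf_a, correct_b, conf_b))
```

Under this toy rule, a positive value of the second function indicates headroom for conversational gains, matching the abstract's claim that the bound is conservative: a real discussion can realize at most what an ideal confidence-informed selection would.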