As large language models (LLMs) take on greater roles in high-stakes decisions, alignment with human values is essential. Reliance on proprietary APIs limits reproducibility and broad participation. We study whether local open-source ensemble debates can improve alignment-oriented reasoning. Across 150 debates spanning 15 scenarios and five ensemble configurations, ensembles outperform single-model baselines on a 7-point rubric (overall: 3.48 vs. 3.13), with the largest gains in reasoning depth (+19.4%) and argument quality (+34.1%). Improvements are strongest for truthfulness (+1.25 points) and human enhancement (+0.80). We release code, prompts, and a debate dataset, providing an accessible and reproducible foundation for ensemble-based alignment evaluation.