Generative AI is increasingly positioned as a peer in collaborative learning, yet its effects on ethical deliberation remain unclear. We report a between-subjects experiment in which university students (N=217) discussed an autonomous-vehicle dilemma in triads under one of three conditions: a human-only control, a supportive AI teammate, or a contrarian AI teammate. Using moral foundations lexicons, argument coding based on the argumentative knowledge construction framework, semantic trajectory modelling with BERTopic and dynamic time warping, and epistemic network analysis, we traced how AI personas reshape moral discourse. Supportive AIs increased grounded/qualified claims relative to the control, consolidating integrative reasoning around care/fairness, whereas contrarian AIs modestly broadened moral framing and sustained value pluralism. Both AI conditions reduced thematic drift compared with human-only groups, indicating a more stable topical focus. Post-discussion justification complexity was only weakly predicted by moral framing and reasoning quality, and shifts in final moral decisions were driven primarily by participants' initial stance rather than by condition. Overall, AI teammates altered the process of moral deliberation (the distribution and connection of moral frames and the quality of argumentation) more than the outcome of moral choice, highlighting the potential of generative AI agents as teammates for eliciting reflective, pluralistic moral reasoning in collaborative learning.
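To make the "semantic trajectory modelling with BERTopic and dynamic time warping" step concrete, the following is a minimal sketch, not the authors' code: it assumes each group's discussion is available as an ordered list of utterances, represents each utterance by its BERTopic topic-probability vector, and compares group trajectories with multivariate DTW (`dtaidistance`). The `groups` input and function names are hypothetical.

```python
# Minimal sketch (illustrative, not the study's pipeline): topic trajectories + DTW.
from bertopic import BERTopic
from dtaidistance import dtw_ndim
import numpy as np

def topic_trajectories(groups):
    """groups: dict of group_id -> ordered list of utterance strings (hypothetical input)."""
    all_utts = [u for utts in groups.values() for u in utts]
    model = BERTopic(calculate_probabilities=True)
    _, probs = model.fit_transform(all_utts)  # per-utterance topic distributions
    trajs, i = {}, 0
    for gid, utts in groups.items():
        trajs[gid] = np.asarray(probs[i:i + len(utts)], dtype=np.double)
        i += len(utts)
    return trajs

def pairwise_dtw(trajs):
    """DTW distance between every pair of group trajectories (multivariate sequences)."""
    gids = list(trajs)
    return {(a, b): dtw_ndim.distance(trajs[a], trajs[b])
            for i, a in enumerate(gids) for b in gids[i + 1:]}
```

Under these assumptions, within-group thematic drift could likewise be summarized from the same trajectories (e.g., as the average distance between consecutive topic vectors), which is one plausible reading of the drift comparison reported above.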