Both humans and machine learning models learn from experience, a capability that is especially consequential in safety- and reliability-critical domains. While psychology seeks to understand human cognition, the field of Explainable AI (XAI) develops methods to interpret machine learning models. This study bridges these domains by applying computational tools from XAI to analyze human learning. We modeled human behavior during a complex real-world task -- tuning a particle accelerator -- by constructing graphs of operator subtasks. Applying techniques such as community detection and hierarchical clustering to archival operator data, we reveal how operators decompose the problem into simpler components and how these problem-solving structures evolve with expertise. Our findings illuminate how humans develop efficient strategies in the absence of globally optimal solutions and demonstrate the utility of XAI-based methods for quantitatively studying human cognition.
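To make the data flow concrete, the following is a minimal illustrative sketch, not the paper's exact pipeline: it assumes (hypothetically) that operator logs are available as ordered subtask sequences, and it stands in networkx's greedy modularity community detection and SciPy's average-linkage hierarchical clustering for whatever specific algorithms the study used. All variable and subtask names (e.g., subtask_sequences, set_quad) are invented for illustration.

```python
# Minimal illustrative sketch (not the paper's exact pipeline).
# Assumption: operator logs are ordered subtask sequences; the
# sequences and subtask names below are hypothetical.
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Hypothetical logs: each session is an ordered list of operator subtasks.
subtask_sequences = [
    ["set_quad", "check_beam", "set_quad", "adjust_rf", "check_beam"],
    ["adjust_rf", "check_beam", "set_quad", "check_beam"],
]

# Build a weighted subtask graph: nodes are subtasks, edge weights count
# how often two subtasks occur consecutively in the logs.
G = nx.Graph()
for seq in subtask_sequences:
    for a, b in zip(seq, seq[1:]):
        if a != b:
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)

# Community detection: groups of subtasks that frequently co-occur,
# read here as the components into which operators decompose the task.
communities = greedy_modularity_communities(G, weight="weight")
print("communities:", [sorted(c) for c in communities])

# Hierarchical clustering on graph distances exposes nested structure
# in how subtasks relate (coarse vs. fine problem decompositions).
nodes = sorted(G.nodes)
lengths = dict(nx.all_pairs_shortest_path_length(G))
dist = np.array(
    [[lengths[u].get(v, len(nodes)) for v in nodes] for u in nodes], dtype=float
)
Z = linkage(squareform(dist, checks=False), method="average")
print("cluster labels:", dict(zip(nodes, fcluster(Z, t=2, criterion="maxclust"))))
```

In the actual study, the edge definitions, distance measures, and clustering criteria would follow the paper's methodology; this sketch only fixes the overall shape of the analysis, from logged subtask sequences to graph structure to learned problem decompositions.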