Although large language models (LLMs) hold significant promise for psychotherapy, their direct application in patient-facing scenarios raises ethical and safety concerns. This work therefore shifts toward developing an LLM as a supervisor that trains real therapists. Beyond the privacy of clinical therapist-training data, a fundamental contradiction complicates the training of therapeutic behaviors: clear feedback standards are necessary to ensure a controlled training system, yet there is no absolute "gold standard" for appropriate therapeutic behavior in practice. In contrast, many common therapeutic mistakes are universal and identifiable, making them effective triggers for targeted feedback and a clearer form of evidence. Motivated by this, we propose a novel therapist-training paradigm: (1) guidelines for mistaken behaviors and targeted correction strategies are first established as standards; (2) a human-in-the-loop dialogue-feedback dataset is then constructed, in which a mistake-prone agent deliberately but naturally commits standard mistakes during interviews, while a supervisor agent locates and identifies these mistakes and provides targeted feedback; (3) after fine-tuning on this dataset, the resulting supervisor model is deployed for real therapist training. Detailed experimental results from automated, human, and downstream assessments demonstrate that models fine-tuned on our dataset, MATE, can provide high-quality feedback consistent with the clinical guidelines, showing significant potential for therapist-training scenarios.