Large Language Models (LLMs) have enabled the emergence of LLM agents, systems capable of pursuing under-specified goals and adapting after deployment. Evaluating such agents is challenging because their behavior is open-ended, probabilistic, and shaped by system-level interactions over time. Traditional evaluation methods, built around fixed benchmarks and static test suites, fail to capture emergent behaviors or support continuous adaptation across the lifecycle. To ground a more systematic approach, we conduct a multivocal literature review (MLR) synthesizing academic and industrial evaluation practices. The findings directly inform two empirically derived artifacts: a process model and a reference architecture that embed evaluation as a continuous, governing function rather than a terminal checkpoint. Together, they constitute the evaluation-driven development and operations (EDDOps) approach, which unifies offline (development-time) and online (runtime) evaluation within a closed feedback loop. By making evaluation evidence drive both runtime adaptation and governed redevelopment, EDDOps supports safer, more traceable evolution of LLM agents aligned with changing objectives, user needs, and governance constraints.