The integration of large language models (LLMs) into medical practice offers transformative potential, yet their real-world clinical applicability remains constrained by three alignment issues: (1) a misalignment between static evaluation benchmarks and the dynamic cognitive demands of clinical practice; (2) difficulty adapting to continuously evolving, multi-source medical standards; and (3) the limited capacity of conventional reward models to capture nuanced, multi-dimensional criteria of medical quality. To overcome these limitations, we introduce MR-RML (Multidimensional Rubric-oriented Reward Model Learning) with GPRC (Geometric Projection Reference Constraints), a novel alignment framework that structures medical standards into a multi-perspective matrix to guide both data generation and model optimization. Our approach rests on three key innovations: (1) a medical standard system that embeds domain-specific guidelines throughout the training pipeline; (2) an independent multi-dimensional reward model that decomposes evaluation criteria, moving from rule-based or LLM-based scoring to internalized reward modeling for improved evaluation performance; and (3) geometric projection reference constraints that translate clinical cognitive logic into mathematical regularization, aligning scoring gradients with clinical reasoning and facilitating training on synthetically generated data. Extensive evaluations on the authoritative medical benchmark HealthBench show that our method substantially improves the base Qwen-32B model, with gains of 45% on the full subset and 85% on the hard subset. It achieves state-of-the-art results among open-source LLMs, scoring 62.7 (full) and 44.7 (hard), and surpasses the majority of closed-source models.
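To make the two core technical components more concrete, the following is a minimal PyTorch sketch of what an independent multi-dimensional reward head and a projection-style geometric constraint could look like. Everything here is an illustrative assumption rather than the paper's implementation: the names MultiDimRewardHead and gprc_loss, the choice of rubric dimensions, and the reference weighting vector are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiDimRewardHead(nn.Module):
    """Hypothetical multi-dimensional reward head: maps a pooled hidden
    state to one score per rubric dimension (e.g. accuracy, safety,
    communication), decomposing a scalar reward into rubric criteria."""

    def __init__(self, hidden_size: int, num_dims: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_dims)

    def forward(self, pooled_hidden: torch.Tensor) -> torch.Tensor:
        # pooled_hidden: (batch, hidden_size) -> scores: (batch, num_dims)
        return self.proj(pooled_hidden)


def gprc_loss(scores: torch.Tensor, reference_dir: torch.Tensor) -> torch.Tensor:
    """Hypothetical geometric projection reference constraint: penalize
    the component of each per-dimension score vector that is orthogonal
    to a clinically derived reference direction, so scoring gradients
    stay aligned with an intended clinical priority ordering."""
    ref = F.normalize(reference_dir, dim=-1)            # unit reference direction
    proj = (scores * ref).sum(-1, keepdim=True) * ref   # projection onto reference
    residual = scores - proj                            # orthogonal component
    return residual.pow(2).sum(-1).mean()               # mean squared deviation


# Toy usage: 3 rubric dimensions, batch of 4 pooled hidden states.
head = MultiDimRewardHead(hidden_size=16, num_dims=3)
hidden = torch.randn(4, 16)
scores = head(hidden)
reference = torch.tensor([0.6, 0.3, 0.1])  # assumed clinical weighting, not from the paper
loss = gprc_loss(scores, reference)
loss.backward()
```

In this reading, the regularizer acts on the geometry of the score vector itself: the reward model remains free to scale its judgments, but deviations from the reference direction are penalized, which is one plausible way to "align scoring gradients with clinical reasoning" as the abstract describes.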