We present M^3-Bench, the first benchmark for evaluating multimodal tool use under the Model Context Protocol (MCP). The benchmark targets realistic multi-hop, multi-threaded workflows that require visual grounding, textual reasoning, cross-tool dependencies, and the persistence of intermediate resources across steps. We introduce a similarity-driven alignment that serializes each tool call, embeds the serialized signatures with a sentence encoder, and performs similarity-bucketed Hungarian matching to obtain auditable one-to-one correspondences. On top of this alignment, we report interpretable metrics that decouple semantic fidelity from workflow consistency. The benchmark spans 28 servers with 231 tools and provides standardized trajectories curated through an Executor & Judge pipeline with human verification; an auxiliary judge ensemble of four large language models (LLMs) reports end-task Task Completion and information grounding. Evaluations of representative state-of-the-art multimodal LLMs (MLLMs) reveal persistent gaps in multimodal MCP tool use, particularly in argument fidelity and structure consistency, underscoring the need for methods that jointly reason over images, text, and tool graphs. The benchmark's anonymous repository is available at https://github.com/EtaYang10th/Open-M3-Bench
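A minimal sketch of the alignment step described above, assuming sentence-transformers and SciPy are available; the tool-call format, the choice of encoder (all-MiniLM-L6-v2), and the helper names are illustrative, and the similarity-bucketing stage is omitted for brevity (plain Hungarian matching on the full similarity matrix is shown instead).

```python
# Hypothetical sketch: serialize tool calls, embed signatures, and run
# Hungarian matching on cosine similarities to obtain 1-to-1 correspondences.
from sentence_transformers import SentenceTransformer
from scipy.optimize import linear_sum_assignment


def serialize_call(call: dict) -> str:
    # Flatten a tool call (name + arguments) into a single string signature.
    args = ", ".join(f"{k}={v}" for k, v in sorted(call.get("arguments", {}).items()))
    return f'{call["tool"]}({args})'


def align_calls(predicted: list[dict], reference: list[dict],
                model_name: str = "all-MiniLM-L6-v2"):
    encoder = SentenceTransformer(model_name)
    pred_emb = encoder.encode([serialize_call(c) for c in predicted],
                              normalize_embeddings=True)
    ref_emb = encoder.encode([serialize_call(c) for c in reference],
                             normalize_embeddings=True)
    sim = pred_emb @ ref_emb.T                 # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)   # Hungarian matching (maximize similarity)
    return [(int(i), int(j), float(sim[i, j])) for i, j in zip(rows, cols)]


# Example: align a predicted trajectory against a reference trajectory.
pred = [{"tool": "search_images", "arguments": {"query": "red car"}}]
ref = [{"tool": "search_images", "arguments": {"query": "red sports car"}}]
print(align_calls(pred, ref))
```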