The automation of workflows in advanced microscopy is a key goal where foundation models such as Large Language Models (LLMs) and Vision-Language Models (VLMs) show great potential. However, adapting these general-purpose models to specialized scientific tasks is critical, and the optimal domain adaptation strategy is often unclear. To address this, we introduce PtychoBench, a new multi-modal, multi-task benchmark for ptychographic analysis. Using this benchmark, we systematically compare two specialization strategies: Supervised Fine-Tuning (SFT) and In-Context Learning (ICL). We evaluate these strategies on a visual artifact detection task with VLMs and a textual parameter recommendation task with LLMs in a data-scarce regime. Our findings reveal that the optimal specialization pathway is task-dependent. For the visual task, SFT and ICL are highly complementary, with a fine-tuned model guided by context-aware examples achieving the highest mean performance (Micro-F1 of 0.728). Conversely, for the textual task, ICL on a large base model is the superior strategy, reaching a peak Micro-F1 of 0.847 and outperforming a powerful "super-expert" SFT model (0-shot Micro-F1 of 0.839). We also confirm the superiority of context-aware prompting and identify a consistent contextual interference phenomenon in fine-tuned models. These results, benchmarked against strong baselines including GPT-4o and a DINOv3-based classifier, offer a key observation for AI in science: the optimal specialization path depends on the task modality, providing a clear framework for developing more effective scientific agentic systems.
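All results above are reported as Micro-F1. As a minimal sketch of how this metric is computed for a multi-label task such as artifact detection, the example below pools true positives, false positives, and false negatives across all labels before scoring; the artifact label names and predictions are illustrative, not drawn from PtychoBench.

```python
# Hedged sketch: micro-averaged F1 for multi-label predictions.
# Each sample's labels are represented as a set of strings.

def micro_f1(y_true, y_pred):
    """Micro-F1: aggregate TP/FP/FN over all samples and labels,
    then compute 2*TP / (2*TP + FP + FN)."""
    tp = fp = fn = 0
    for true_set, pred_set in zip(y_true, y_pred):
        tp += len(true_set & pred_set)   # correctly predicted labels
        fp += len(pred_set - true_set)   # predicted but not present
        fn += len(true_set - pred_set)   # present but missed
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Hypothetical artifact labels per diffraction reconstruction.
y_true = [{"ringing", "blur"}, {"grid"}, set()]
y_pred = [{"ringing"}, {"grid", "blur"}, set()]
print(round(micro_f1(y_true, y_pred), 3))  # → 0.667
```

Because micro-averaging pools counts across labels, frequent artifact classes dominate the score, which is why it is a natural headline metric when label frequencies are imbalanced.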