Generating realistic synthetic microscopy images is critical for training deep learning models in label-scarce settings such as cell counting, where each image can contain many cells. However, traditional domain adaptation methods often struggle to bridge the domain gap when synthetic images lack the complex textures and visual patterns of real samples. In this work, we adapt the Inversion-Based Style Transfer (InST) framework, originally designed for artistic style transfer, to biomedical microscopy images. Our method combines latent-space Adaptive Instance Normalization (AdaIN) with stochastic inversion in a diffusion model to transfer the style of real fluorescence microscopy images onto synthetic ones while loosely preserving content structure. We evaluate the effectiveness of our InST-based synthetic dataset for downstream cell counting by pre-training and fine-tuning EfficientNet-B0 models on various data sources, including real data, hard-coded synthetic data, and the public Cell200-s dataset. Models trained with our InST-synthesized images achieve up to 37\% lower Mean Absolute Error (MAE) than models trained on hard-coded synthetic data, and a 52\% lower MAE than models trained on Cell200-s (from 53.70 to 25.95 MAE). Notably, our approach also outperforms models trained on real data alone (25.95 vs. 27.74 MAE). Combining InST-synthesized data with lightweight domain adaptation techniques such as DACS with CutMix yields further improvements. These findings demonstrate that, among the approaches evaluated, InST-based style transfer most effectively reduces the domain gap between synthetic and real microscopy data, offering a scalable path to stronger cell counting performance with minimal manual labeling effort. The source code and resources are publicly available at: https://github.com/MohammadDehghan/InST-Microscopy.
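To make the latent-space AdaIN step concrete, the sketch below applies standard AdaIN statistic matching to diffusion latents: each channel of the synthetic (content) latents is normalized and then rescaled with the per-channel statistics of the real (style) latents. The `adain` name, the 4-D `(N, C, H, W)` tensor layout, and the epsilon value are illustrative assumptions, not an excerpt from the released code.

```python
import torch

def adain(content_latents: torch.Tensor, style_latents: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalization applied in latent space (sketch).

    Normalizes each channel of the content latents to zero mean / unit
    variance, then rescales with the per-channel mean and std of the
    style latents. Both tensors are assumed to be (N, C, H, W).
    """
    c_mean = content_latents.mean(dim=(2, 3), keepdim=True)
    c_std = content_latents.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_latents.mean(dim=(2, 3), keepdim=True)
    s_std = style_latents.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_latents - c_mean) / c_std + s_mean
```

In the InST pipeline this alignment would be followed by stochastic inversion and denoising in the diffusion model, so that the decoder resynthesizes the style-matched latents into an image.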
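For the lightweight adaptation step, a minimal sketch of CutMix-style mixing adapted to count regression is shown below: a random rectangle from one image is pasted onto another, and the count labels are mixed by the pasted-area ratio. This area-weighted count mixing and the `cutmix_counts` helper are our illustrative assumptions; DACS as originally formulated mixes segmentation labels rather than counts.

```python
from typing import Optional

import torch

def cutmix_counts(img_a: torch.Tensor, img_b: torch.Tensor,
                  count_a: float, count_b: float,
                  rng: Optional[torch.Generator] = None):
    """CutMix for count regression (illustrative sketch).

    Pastes a random rectangle from img_b onto img_a and mixes the count
    labels by the area ratio, assuming cells are roughly uniformly
    distributed. Images are (C, H, W).
    """
    _, h, w = img_a.shape
    lam = torch.rand(1, generator=rng).item()  # target fraction kept from img_a
    cut_h = int(h * (1 - lam) ** 0.5)
    cut_w = int(w * (1 - lam) ** 0.5)
    cy = torch.randint(0, h - cut_h + 1, (1,), generator=rng).item()
    cx = torch.randint(0, w - cut_w + 1, (1,), generator=rng).item()
    mixed = img_a.clone()
    mixed[:, cy:cy + cut_h, cx:cx + cut_w] = img_b[:, cy:cy + cut_h, cx:cx + cut_w]
    area = (cut_h * cut_w) / (h * w)  # actual pasted fraction
    mixed_count = (1 - area) * count_a + area * count_b
    return mixed, mixed_count
```

In a DACS-style setup, one operand would typically come from the labeled (synthetic) domain and the other from the unlabeled (real) domain with a pseudo-label standing in for its count.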