Multimodal autoregressive (AR) models, built on next-token prediction and the transformer architecture, have demonstrated remarkable capabilities across a range of multimodal tasks, including text-to-image (T2I) generation. Despite their strong performance on general T2I tasks, our research reveals that these models initially struggle with subject-driven image generation compared to dominant diffusion models. To address this limitation, we introduce Proxy-Tuning, which leverages diffusion models to enhance AR models' capabilities in subject-specific image generation. Our method uncovers a striking weak-to-strong phenomenon: fine-tuned AR models consistently outperform their diffusion-model supervisors in both subject fidelity and prompt adherence. We analyze this performance shift and identify scenarios where AR models excel, particularly in multi-subject compositions and contextual understanding. This work not only demonstrates impressive results in subject-driven AR image generation but also unveils the potential of weak-to-strong generalization in the image generation domain, contributing to a deeper understanding of the strengths and limitations of different architectures.
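To make the weak-to-strong setup concrete, below is a minimal, illustrative sketch of a proxy-tuning pipeline as described at a high level in this abstract: a diffusion "proxy" is fine-tuned on a few subject images, it then synthesizes supervision pairs, and the AR model is fine-tuned on that proxy-generated data. All class and function names here are hypothetical placeholders rather than the paper's actual API, and the real training loops are elided.

```python
# Illustrative sketch only: names are hypothetical, training logic is elided.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Image:
    """Placeholder for image data (e.g., a tensor or a file path)."""
    data: str


class DiffusionProxy:
    """Stands in for a diffusion model fine-tuned on a few subject images."""

    def finetune_on_subject(self, subject_images: List[Image]) -> None:
        # Subject-specific fine-tuning on a handful of reference images (elided).
        pass

    def generate(self, prompt: str) -> Image:
        # Sample a subject-specific image for the given prompt (elided).
        return Image(data=f"proxy_sample_for::{prompt}")


class ARImageModel:
    """Stands in for the multimodal autoregressive (next-token) generator."""

    def finetune(self, pairs: List[Tuple[str, Image]]) -> None:
        # Next-token-prediction fine-tuning on (prompt, image) pairs (elided).
        pass


def proxy_tune(subject_images: List[Image], prompts: List[str]) -> ARImageModel:
    # 1. Fine-tune the weaker diffusion "supervisor" on the subject images.
    proxy = DiffusionProxy()
    proxy.finetune_on_subject(subject_images)

    # 2. Use the proxy to synthesize (prompt, image) supervision pairs.
    pairs = [(p, proxy.generate(p)) for p in prompts]

    # 3. Fine-tune the AR model on the proxy-generated data; per the abstract,
    #    the tuned AR model then surpasses its diffusion supervisor.
    ar_model = ARImageModel()
    ar_model.finetune(pairs)
    return ar_model
```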