Recent text-to-image (T2I) models have made remarkable progress in generating visually realistic and semantically coherent images. However, they still suffer from randomness and inconsistency with the given prompts, particularly when textual descriptions are vague or underspecified. Existing approaches, such as prompt rewriting, best-of-N sampling, and self-refinement, can mitigate these issues but usually require additional modules and operate independently, hindering test-time scaling efficiency and increasing computational overhead. In this paper, we introduce ImAgent, a training-free unified multimodal agent that integrates reasoning, generation, and self-evaluation within a single framework for efficient test-time scaling. Guided by a policy controller, multiple generation actions dynamically interact and self-organize to enhance image fidelity and semantic alignment without relying on external models. Extensive experiments on image generation and editing tasks demonstrate that ImAgent consistently improves over the backbone and even surpasses other strong baselines where the backbone model fails, highlighting the potential of unified multimodal agents for adaptive and efficient image generation under test-time scaling.