We present UniGen-1.5, a unified multimodal large language model (MLLM) for advanced image understanding, generation, and editing. Building upon UniGen, we comprehensively enhance the model architecture and training pipeline to strengthen image understanding and generation while unlocking strong image editing capability. In particular, we propose a unified Reinforcement Learning (RL) strategy that jointly improves image generation and image editing via shared reward models. To further boost editing performance, we introduce a lightweight Edit Instruction Alignment stage that significantly improves comprehension of editing instructions, which is essential for the success of RL training. Experimental results show that UniGen-1.5 delivers competitive understanding and generation performance. Specifically, UniGen-1.5 achieves overall scores of 0.89 on GenEval and 4.31 on ImgEdit, surpassing state-of-the-art models such as BAGEL and reaching performance comparable to proprietary models such as GPT-Image-1.