Virtual try-on aims to synthesize a realistic image of a person wearing a target garment, but accurately modeling garment-body correspondence remains a persistent challenge, especially under pose and appearance variation. In this paper, we propose Voost, a unified and scalable framework that jointly learns virtual try-on and try-off with a single diffusion transformer. By modeling both tasks jointly, Voost enables each garment-person pair to supervise both directions and supports flexible conditioning on generation direction and garment category, enhancing garment-body relational reasoning without task-specific networks, auxiliary losses, or additional labels. In addition, we introduce two inference-time techniques: attention temperature scaling for robustness to resolution and mask variation, and self-corrective sampling that leverages the bidirectional consistency between the two tasks. Extensive experiments demonstrate that Voost achieves state-of-the-art results on both try-on and try-off benchmarks, consistently outperforming strong baselines in alignment accuracy, visual fidelity, and generalization.
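To make the attention temperature scaling concrete, the snippet below is a minimal PyTorch sketch under one common assumption: the temperature is derived from the ratio of inference-time to training-time token counts, a standard length-extrapolation heuristic that keeps attention entropy roughly stable when resolution changes. The function name, signature, and exact schedule here are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of inference-time attention temperature scaling.
# ASSUMPTION: temperature follows a log-ratio of test/train token counts;
# the schedule actually used by Voost may differ.
import math
import torch
import torch.nn.functional as F

def attention_with_temperature(q, k, v, n_train_tokens, n_test_tokens):
    """Scaled dot-product attention with a resolution-aware temperature.

    q, k, v: tensors of shape (batch, heads, tokens, head_dim).
    n_train_tokens / n_test_tokens: sequence lengths at training and inference.
    """
    d = q.size(-1)
    # Entropy-preserving temperature: > 1 when the inference sequence is
    # longer than training, sharpening logits to counteract softmax diffusion.
    tau = math.sqrt(math.log(n_test_tokens) / math.log(n_train_tokens))
    logits = (q @ k.transpose(-2, -1)) * (tau / math.sqrt(d))
    return F.softmax(logits, dim=-1) @ v
```

At the training resolution the scale reduces to the usual 1/sqrt(d), so the modification is a no-op there and only activates when token count (and hence resolution or mask extent) deviates from training conditions.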