We introduce Material Coating, a novel image editing task that simulates applying a thin material layer onto an object while preserving its underlying coarse and fine geometry. Material coating is fundamentally different from existing "material transfer" methods, which are designed to replace an object's intrinsic material and often overwrite fine details in the process. To address this new task, we construct DataCoat110K, a large-scale synthetic dataset of 110K images of 3D objects rendered with varied, physically-based coatings. We then propose CoatFusion, a novel architecture that enables this task by conditioning a diffusion model on both a 2D albedo texture and granular, PBR-style parametric controls, including roughness, metalness, transmission, and a key thickness parameter. Experiments and user studies show that CoatFusion produces realistic, controllable coatings and significantly outperforms existing material editing and transfer methods on this new task.
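The abstract does not specify how CoatFusion injects the scalar PBR controls into the diffusion model. A common pattern in diffusion architectures is to map each scalar parameter through a sinusoidal embedding (as done for timesteps) and concatenate the results into a conditioning vector. The sketch below illustrates that pattern only; the function names and embedding scheme are assumptions, not the paper's actual mechanism.

```python
import math

def embed_scalar(value, dim=8):
    """Sinusoidal embedding of a scalar in [0, 1], analogous to the
    timestep embeddings commonly used in diffusion models."""
    half = dim // 2
    freqs = [math.exp(-math.log(10000.0) * i / (half - 1)) for i in range(half)]
    # Interleave sin/cos responses at geometrically spaced frequencies.
    return [f(value * fr) for fr in freqs for f in (math.sin, math.cos)]

def coating_condition_vector(roughness, metalness, transmission, thickness, dim=8):
    """Hypothetical sketch: concatenate per-parameter embeddings into one
    conditioning vector that could be added to the model's global embedding."""
    vec = []
    for p in (roughness, metalness, transmission, thickness):
        vec.extend(embed_scalar(p, dim))
    return vec

# Example: a slightly rough, non-metallic, mostly opaque, medium-thickness coat.
cond = coating_condition_vector(0.3, 0.0, 0.1, 0.5)
print(len(cond))  # 4 parameters x dim = 32
```

In practice such a vector would be projected by a small MLP and fused with the diffusion backbone's timestep or cross-attention conditioning, but the exact fusion point is an implementation choice not stated in the abstract.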