All-in-one image restoration aims to handle diverse degradations (e.g., noise, blur, adverse weather) within a unified framework, yet existing methods increasingly rely on complex architectures (e.g., Mixture-of-Experts, diffusion models) and elaborate degradation-prompt strategies. In this work, we reveal a critical insight: well-crafted feature extraction inherently encodes degradation-carrying information, and a symmetric U-Net architecture is sufficient to exploit these cues effectively. By aligning feature scales across the encoder and decoder and enabling streamlined cross-scale propagation, our symmetric design robustly preserves intrinsic degradation signals, making simple additive fusion in skip connections sufficient for state-of-the-art performance. Our primary baseline, SymUNet, is built on this symmetric U-Net and outperforms existing approaches across benchmark datasets while reducing computational cost. We further propose a semantically enhanced variant, SE-SymUNet, which injects semantics directly from frozen CLIP features via simple cross-attention to explicitly amplify degradation priors. Extensive experiments on several benchmarks validate the superiority of our methods. Both SymUNet and SE-SymUNet establish simpler and stronger baselines for future advances in all-in-one image restoration. The source code is available at https://github.com/WenlongJiao/SymUNet.
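The core architectural claim above can be illustrated with a minimal NumPy sketch: when encoder and decoder features share the same shape at every scale, a skip connection reduces to elementwise addition, with no channel concatenation or projection layer. This is a toy stand-in, not the paper's implementation; the pooling/upsampling operators and function names are illustrative assumptions.

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling: downsample spatial dims by a factor of 2
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    # nearest-neighbour upsampling by a factor of 2
    return x.repeat(2, axis=1).repeat(2, axis=2)

def symmetric_unet_pass(img, depth=2):
    """Toy symmetric encoder-decoder with additive skip fusion.

    Because the symmetric design aligns feature scales across the
    encoder and decoder, each decoder stage can fuse its skip feature
    by simple addition (a hypothetical stand-in for SymUNet's blocks).
    """
    skips, x = [], img
    for _ in range(depth):               # encoder: store, then downsample
        skips.append(x)
        x = avg_pool2(x)
    for _ in range(depth):               # decoder: upsample, add skip
        x = upsample2(x) + skips.pop()   # additive skip fusion
    return x

out = symmetric_unet_pass(np.random.rand(3, 8, 8))
assert out.shape == (3, 8, 8)  # output matches input resolution
```

The point of the sketch is only the shape bookkeeping: symmetry guarantees every skip tensor matches its decoder counterpart, so the `+` is well defined at every scale.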