Our DiffRetouch utilizes the excellent distribution modeling ability of diffusion models to capture the complex fine-retouched distribution covering various visually pleasing styles.
Note: Regardless of how the coefficients are adjusted within the specified ranges, the final result remains a sample from the learned fine-retouched distribution.
Abstract
Image retouching aims to enhance the visual quality of photos. Considering the different aesthetic preferences of users, the target of retouching is subjective. However, current retouching methods mostly adopt deterministic models, which not only neglect the style diversity in the expert-retouched results and tend to learn an average style during training, but also lack sample diversity during inference. In this paper, we propose a diffusion-based method, named DiffRetouch. Thanks to the excellent distribution modeling ability of diffusion, our method can capture the complex fine-retouched distribution covering various visually pleasing styles in the training data. Moreover, four image attributes are made adjustable to provide a user-friendly editing mechanism. By adjusting these attributes within specified ranges, users can customize their preferred styles within the learned fine-retouched distribution. Additionally, the affine bilateral grid and a contrastive learning scheme are introduced to handle the problems of texture distortion and control insensitivity, respectively. Extensive experiments have demonstrated the superior performance of our method in terms of visual appeal and sample diversity.
Method
Pipeline of our DiffRetouch.
(a) We propose a diffusion-based retouching method that covers the fine-retouched distribution, along with four adjustable attributes for editing the final results (a conditioning sketch is given below).
(b) The affine bilateral grid is introduced into the diffusion model to overcome the texture distortion caused by the information loss in the encoding and decoding process (see the slicing sketch below).
(c) To address the control insensitivity, we design a contrastive learning scheme that makes the model more aware of the adjustment brought by each coefficient (see the loss sketch below).
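The attribute control in (a) can be thought of as conditioning the denoising network on the four coefficients. Below is a minimal PyTorch sketch of this idea, assuming the coefficients are normalized scalars injected through an embedding added to the timestep embedding; the module and parameter names are ours for illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeConditionedDenoiser(nn.Module):
    """Toy epsilon-predictor conditioned on a noisy latent, a timestep, and
    four attribute coefficients. All names and sizes here are illustrative."""

    def __init__(self, latent_ch=4, cond_dim=128):
        super().__init__()
        # Embed the four scalar coefficients into the same space as the timestep.
        self.coeff_embed = nn.Sequential(
            nn.Linear(4, cond_dim), nn.SiLU(), nn.Linear(cond_dim, cond_dim)
        )
        self.time_embed = nn.Sequential(
            nn.Linear(1, cond_dim), nn.SiLU(), nn.Linear(cond_dim, cond_dim)
        )
        self.inp = nn.Conv2d(latent_ch, cond_dim, 3, padding=1)
        self.out = nn.Conv2d(cond_dim, latent_ch, 3, padding=1)

    def forward(self, z_t, t, coeffs):
        # coeffs: (B, 4) user-chosen values inside the allowed range, e.g. [-1, 1].
        cond = self.time_embed(t[:, None].float()) + self.coeff_embed(coeffs)
        h = self.inp(z_t) + cond[:, :, None, None]  # broadcast the condition spatially
        return self.out(F.silu(h))


# Changing one coefficient steers the sampled style while the denoiser stays fixed.
model = AttributeConditionedDenoiser()
z_t = torch.randn(2, 4, 32, 32)
t = torch.randint(0, 1000, (2,))
coeffs = torch.tensor([[0.5, -0.2, 0.0, 1.0], [0.5, -0.2, 0.0, -1.0]])
eps = model(z_t, t, coeffs)  # (2, 4, 32, 32)
```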
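For the affine bilateral grid in (b), the key operation is slicing a low-resolution grid of 3x4 affine matrices with a full-resolution guidance map and applying the per-pixel affine transform directly to the input image, so fine textures never have to survive the latent encoder and decoder. The sketch below shows that slicing step under assumed tensor shapes; it is our own simplification, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def slice_and_apply(grid, guide, image):
    """Slice an affine bilateral grid and apply it to a full-resolution image.

    grid  : (B, 12, D, Hg, Wg) low-res grid; each cell stores a 3x4 affine matrix.
    guide : (B, 1, H, W) guidance map in [0, 1], used as the grid's depth coordinate.
    image : (B, 3, H, W) full-resolution input to be retouched.
    """
    B = grid.shape[0]
    H, W = guide.shape[-2:]
    # Normalized (x, y, z) sampling coordinates in [-1, 1] for grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=guide.device),
        torch.linspace(-1, 1, W, device=guide.device),
        indexing="ij",
    )
    xs = xs.expand(B, 1, H, W)
    ys = ys.expand(B, 1, H, W)
    zs = guide * 2 - 1
    coords = torch.stack([xs, ys, zs], dim=-1)          # (B, 1, H, W, 3)
    # Trilinear slicing: fetch a per-pixel 3x4 affine matrix from the grid.
    affine = F.grid_sample(grid, coords, align_corners=True)  # (B, 12, 1, H, W)
    affine = affine.view(B, 3, 4, H, W)
    # Apply the affine transform: out = A[:, :3] @ rgb + A[:, 3].
    rgb1 = torch.cat([image, torch.ones_like(image[:, :1])], dim=1)  # (B, 4, H, W)
    return torch.einsum("bcdhw,bdhw->bchw", affine, rgb1)


# Example with a hypothetical 16x16x8 grid predicted for a 256x256 input.
grid = torch.rand(1, 12, 8, 16, 16)
guide = torch.rand(1, 1, 256, 256)
image = torch.rand(1, 3, 256, 256)
retouched = slice_and_apply(grid, guide, image)  # (1, 3, 256, 256)
```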
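The contrastive scheme in (c) penalizes the model when a perturbed coefficient fails to move the output, which is what control insensitivity means here. The snippet below is a minimal triplet-style illustration of that idea rather than the paper's exact formulation; the margin value and the L1 distance are assumptions.

```python
import torch
import torch.nn.functional as F

def coefficient_contrastive_loss(pred_pos, pred_neg, target, margin=0.1):
    """Triplet-style contrastive term (illustrative, not the paper's exact loss).

    pred_pos : output predicted with the ground-truth attribute coefficients.
    pred_neg : output predicted with one coefficient deliberately perturbed.
    target   : the expert-retouched reference image.
    The positive prediction must be closer to the reference than the negative
    one by at least `margin`, so changing a coefficient must change the output.
    """
    d_pos = F.l1_loss(pred_pos, target)
    d_neg = F.l1_loss(pred_neg, target)
    return F.relu(d_pos - d_neg + margin)
```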
Results
BibTeX
@inproceedings{duan2024diffretouch,
title={DiffRetouch: Using Diffusion to Retouch on the Shoulder of Experts},
author={Duan, Zheng-Peng and Zhang, Jiawei and Lin, Zheng and Jin, Xin and Wang, Xundong and Zou, Dongqing and Guo, Chunle and Li, Chongyi},
booktitle={AAAI},
year={2025}
}
Contact
Feel free to contact us at adamduan0211[AT]gmail.com!