Official repository for UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior
Project Page | Paper | Video | Code
- June 2025: ✨ Source code has been released!
- June 2025: ✨ UniRestore was accepted to CVPR 2025 as a Highlight!
UniRestore leverages a diffusion prior to unify Perceptual Image Restoration (PIR) and Task-oriented Image Restoration (TIR), achieving both high visual fidelity and task utility.
- PIR enhances visual clarity, but its outputs may not benefit recognition tasks.
- TIR optimizes features for tasks like classification or segmentation, but often compromises visual appeal.
- Create a conda environment and activate it.
conda create -n unirestore python=3.11 -y
conda activate unirestore
- Clone the repository and enter its directory.
git clone https://github.com/unirestore/UniRestore.git
cd UniRestore
- Install remaining dependencies
pip install -r requirements.txt
- Download the pretrained UniRestore checkpoints and place them under ./UniRestore/logs/.
- Stage1 checkpoint
- Stage2 checkpoint
- You can also download them with the following commands:
cd UniRestore
wget --no-check-certificate 'https://drive.google.com/file/d/1a7c8zL8XXd7m3dDEQWZMpQagnkBpdGnK/view?usp=share_link' -O ./logs/unirestore_stage1.ckpt
wget --no-check-certificate 'https://drive.google.com/file/d/1m2a-8SUtVZaG5ovysJKWqc_mzgGCtDrx/view?usp=share_link' -O ./logs/unirestore_stage2.ckpt
We use a JSON file to collect all the degraded images, ground-truth images, and the corresponding annotations. Each entry in the JSON file is a row formatted as "path_to_lq path_to_hq annotation". Please download the dataset into the corresponding task folder, and then use the following command to create the data list:
cd UniRestore/dataset/[task]
python process_[dataset].py
where [task] refers to the task directory (e.g., 'PIR', 'Classification', 'Segmentation') and [dataset] specifies the dataset to be processed.
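For reference, below is a minimal sketch of what the resulting data list might look like, assuming it is stored as a JSON array of "path_to_lq path_to_hq annotation" strings; the file names and labels are placeholders, not actual dataset entries:

```json
[
  "lq/img_0001.png hq/img_0001.png 3",
  "lq/img_0002.png hq/img_0002.png 7"
]
```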
Our method consists of two main training stages:
Stage 1 focuses on restoring features by training the CFRM, Controller, and SC-Tuner modules.
python src/main.py fit --config ./configs/train_stage1.yaml
Stage 2 adapts the restored features and diffusion priors to specific downstream tasks by training the TFA module.
python src/main.py fit --config ./configs/train_stage2.yaml
💡 It is recommended to start from Stage 2 with the provided pretrained Stage 1 checkpoint (unirestore_stage1.ckpt) for better adaptation to each task.
To introduce a new task, simply define new task-specific prompts and fine-tune without modifying the main model or accessing previous training data.
- Add your custom task objective at: ./UniRestore/src/core/base
- Modify the `tedit: task` field in the configuration to add your new task-specific prompt words (see the sketch after the fine-tuning command below).
- Implement a new data loader at: ./UniRestore/src/data
- Run fine-tuning with:
python src/main.py fit --config ./configs/train_stage3.yaml
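As a purely illustrative sketch of that configuration change, the snippet below shows one way the `tedit: task` entry could list prompt words for a new task; the surrounding keys and the example task name are assumptions, not the actual layout of ./configs/train_stage3.yaml:

```yaml
# Illustrative sketch only: keys other than "tedit: task" are assumptions.
model:
  tedit:
    task: ["deraining"]   # add your new task-specific prompt word(s) here
```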
💡 This process enables flexible and efficient task extension without retraining the full model or accessing previous data.
- For the validation dataset:
python ./src/main.py validate --config ./configs/val.yaml --trainer.logger null
By default, results are saved to "./logs/unirestore/test". You can customize the inference details by modifying the configuration file.
If you find this work useful, please consider citing us!
@inproceedings{chen2025unirestore,
title={UniRestore: Unified Perceptual and Task-Oriented Image Restoration Model Using Diffusion Prior},
author={Chen, I and Chen, Wei-Ting and Liu, Yu-Wei and Chiang, Yuan-Chun and Kuo, Sy-Yen and Yang, Ming-Hsuan and others},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={17969--17979},
year={2025}
}