
Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution

Yong Liu, Hang Dong, Boyang Liang, Songwei Liu, Qingji Dong, Kai Chen, Fangmin Chen, Lean Fu, Fei Wang

Paper | arXiv | Poster | BibTeX

💖 If our DITN is helpful to your research or projects, please star this repository. Thanks! 🤗


Recent years have witnessed a few attempts at vision transformers for single image super-resolution (SISR). Since the high resolution of intermediate features in SISR models increases memory and computational requirements, efficient SISR transformers are favored. Based on popular transformer backbones, many methods have explored reasonable schemes to reduce the computational complexity of the self-attention module while achieving impressive performance. However, these methods only focus on performance on the training platform (e.g., PyTorch/TensorFlow) without further optimization for the deployment platform (e.g., TensorRT). Therefore, they inevitably contain redundant operators, posing challenges for subsequent deployment in real-world applications. In this paper, we propose a deployment-friendly transformer unit, namely UFONE (i.e., UnFolding ONce is Enough), to alleviate these problems. In each UFONE, we introduce an Inner-patch Transformer Layer (ITL) to efficiently reconstruct the local structural information from patches and a Spatial-Aware Layer (SAL) to exploit the long-range dependencies between patches. Based on UFONE, we propose a Deployment-friendly Inner-patch Transformer Network (DITN) for the SISR task, which can achieve favorable performance with low latency and memory usage on both training and deployment platforms. To further boost the deployment efficiency of DITN on TensorRT, we also provide an efficient substitution for layer normalization and propose a fusion optimization strategy for specific operators. Extensive experiments show that our models achieve competitive qualitative and quantitative results with high deployment efficiency.
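
As a rough illustration of the "unfolding once" idea, the sketch below unfolds a feature map into non-overlapping patches a single time, applies attention inside each patch (ITL-like) and then mixes information across patches (SAL-like). This is a minimal sketch, not the released DITN code: the module names, the depthwise-convolution stand-in for the Spatial-Aware Layer, and all hyper-parameters are assumptions for illustration only.

import torch
import torch.nn as nn

class InnerPatchAttention(nn.Module):
    """Self-attention applied independently inside each unfolded patch (ITL-like)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B * num_patches, patch_len, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out

class SpatialAwareMixing(nn.Module):
    """Cross-patch mixing (SAL-like); approximated here with a depthwise conv."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):  # x: (B, dim, H, W)
        return x + self.dwconv(x)

class UFONELikeBlock(nn.Module):
    """Unfold the feature map into patches once, run ITL then SAL, fold back."""
    def __init__(self, dim, patch_size=8):
        super().__init__()
        self.p = patch_size
        self.itl = InnerPatchAttention(dim)
        self.sal = SpatialAwareMixing(dim)

    def forward(self, x):  # x: (B, dim, H, W), H and W divisible by patch_size
        B, C, H, W = x.shape
        p = self.p
        # unfold once: (B, C, H, W) -> (B * num_patches, p * p, C)
        patches = (x.view(B, C, H // p, p, W // p, p)
                     .permute(0, 2, 4, 3, 5, 1)
                     .reshape(B * (H // p) * (W // p), p * p, C))
        patches = self.itl(patches)
        # fold back to the image layout, then mix information across patches
        x = (patches.view(B, H // p, W // p, p, p, C)
                    .permute(0, 5, 1, 3, 2, 4)
                    .reshape(B, C, H, W))
        return self.sal(x)

block = UFONELikeBlock(dim=32, patch_size=8)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])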

Update

  • 2023.07.06: Created this repository.

TODO

  • New project website
  • The training scripts
  • The model deployment guide
  • Releasing pretrained models
  • The inference scripts

Requirements

conda create -n ditn python=3.8
conda activate ditn
pip3 install -r requirements.txt
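
A quick sanity check (plain PyTorch, nothing DITN-specific, assuming torch is listed in requirements.txt) can confirm that the environment and GPU are visible before running inference:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"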

Applications

🏂 Demo on Base Evaluation Dataset

🐳 Demo on Real-world Image SR

Pretrained Models

Running Examples

  • Prepare your test images and run DITN/test.py with CUDA from the command line:

🚀 Bicubic Image Super-resolution

DITN/$ CUDA_VISIBLE_DEVICES=<GPU_ID> python test.py --scale [2|3|4] --indir [the path of LR images] --outdir [the path of HR results] --model_path [the path of the pretrained model]/DITN_[ |Tiny|Real]_[x2|x3|x4].pth

🏆 Real-world Image Super-resolution

DITN/$ CUDA_VISIBLE_DEVICES=<GPU_ID> python test.py --scale [2|3|4] --indir [the path of LR images] --outdir [the path of HR results] --model_path [the path of the pretrained model]/DITN_Real_GAN_[x2|x3|x4].pth
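
For example, to run 4x real-world super-resolution on GPU 0 (the input/output paths and checkpoint location below are placeholders; substitute your own):

DITN/$ CUDA_VISIBLE_DEVICES=0 python test.py --scale 4 --indir ./testsets/LR --outdir ./results/SR_x4 --model_path ./pretrained_models/DITN_Real_GAN_x4.pth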

How to Deploy in Realistic Scenarios

  • Coming soon...

Acknowledgement

This work was supported in part by the National Key Research and Development Program of China under Grant 2022YFB3303800, and in part by the National Major Science and Technology Projects of China under Grant 2019ZX01008101.

License

This project is released under the Apache 2.0 license. Redistribution and use should follow this license.

BibTeX

If you find this project useful for your research, please use the following BibTeX entry.

@inproceedings{liu2023unfolding,
  title={Unfolding Once is Enough: A Deployment-Friendly Transformer Unit for Super-Resolution},
  author={Liu, Yong and Dong, Hang and Liang, Boyang and Liu, Songwei and Dong, Qingji and Chen, Kai and Chen, Fangmin and Fu, Lean and Wang, Fei},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={7952--7960},
  year={2023}
}
