Awesome-Inverse-Rendering

This repo collects papers that use implicit representations for inverse rendering.

Part of this repo is adapted from Awesome-InverseRendering. Thanks to the author for the outstanding contribution.

Q: What type of papers should be included in this repository?

A: We require papers that explicitly include a rendering equation in their method. For example, even though methods like GS-DR involve parameters such as albedo, roughness, and environment maps, they do not evaluate a rendering equation and are therefore not considered strict inverse rendering techniques.

On the other hand, a paper like NeRO, which handles reflective scenes by incorporating the rendering equation (e.g., through the split-sum approximation), falls within the scope of this repository.
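
For reference, the reflected-radiance form of the rendering equation that such methods evaluate or approximate is:

```latex
L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot \mathbf{n})\, d\omega_i
```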

Table of contents

- What is (Neural) Inverse Rendering
- Dataset
- NeRF-based Inverse Rendering
- NeuS-based Inverse Rendering
- IDR-based Inverse Rendering
- DMTet-based Inverse Rendering
- DVGO-based Inverse Rendering
- TensoRF-based Inverse Rendering
- Instant-NGP-based Inverse Rendering
- Diffusion Prior
- Point-based Inverse Rendering

What is (Neural) Inverse Rendering

Inverse rendering often involves the use of neural networks to approximate the mapping from images to the underlying 3D scene properties. This can include:

  1. Geometry Estimation: Reconstructing the 3D shape or surface of the objects in the scene.
  2. Material and Texture Estimation: Determining surface properties of the objects, such as albedo, roughness, and metallic values.
  3. Lighting Estimation: Inferring the lighting conditions, including the positions and intensities of light sources that illuminate the scene.
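
To make this decomposition concrete, below is a deliberately minimal sketch (not taken from any paper in this list) of the optimization that inverse rendering performs: a toy Lambertian forward model whose material, geometry, and lighting parameters are fitted to an observed image by gradient descent. PyTorch and all variable names are illustrative assumptions; real methods replace these free parameters with neural fields and a full rendering equation.

```python
import torch
import torch.nn.functional as F

# Unknowns to recover (real methods parameterize these with neural fields):
albedo = torch.nn.Parameter(torch.rand(64, 64, 3))      # material
normals = torch.nn.Parameter(torch.randn(64, 64, 3))    # geometry proxy
light_dir = torch.nn.Parameter(torch.randn(3))           # lighting direction
light_rgb = torch.nn.Parameter(torch.ones(3))            # lighting intensity

def render(albedo, normals, light_dir, light_rgb):
    """Lambertian shading: a drastically simplified rendering equation."""
    n = F.normalize(normals, dim=-1)
    l = F.normalize(light_dir, dim=-1)
    cosine = (n * l).sum(-1, keepdim=True).clamp(min=0.0)   # (n · ω_i)+
    return albedo * light_rgb * cosine                       # outgoing radiance

target = torch.rand(64, 64, 3)   # placeholder for an observed image
opt = torch.optim.Adam([albedo, normals, light_dir, light_rgb], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = F.mse_loss(render(albedo, normals, light_dir, light_rgb), target)
    loss.backward()
    opt.step()
```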

Dataset

1. OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects

Authors: Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su

Publication: NeurIPS 2023 Datasets and Benchmarks

📄 Paper | 🌐 Project Page | 💻 Code

2. Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark

Authors: Zhengfei Kuang, Yunzhi Zhang, Hong-Xing Yu, Samir Agarwala, Shangzhe Wu, Jiajun Wu

Publication: NeurIPS 2023 Datasets and Benchmarks

📄 Paper | 🌐 Project Page | 💻 Code


NeRF-based Inverse Rendering

1. NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis

Authors: Pratul P. Srinivasan, Boyang Deng, Xiuming Zhang, Matthew Tancik, Ben Mildenhall, Jonathan T. Barron

Publication: CVPR 2021

Note: Assumes known environment illumination. Normal estimation from the density field originates from this paper.

📄 Paper | 🌐 Project Page | 💻 Code (not yet)
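
A minimal sketch of the normals-from-density idea (assuming a differentiable density function `sigma_fn`, a placeholder name rather than an API from the paper's code): the surface normal is taken as the negative, normalized gradient of the volume density.

```python
import torch
import torch.nn.functional as F

def density_normals(sigma_fn, x):
    """n(x) = -∇σ(x) / ||∇σ(x)||, computed with autograd.

    sigma_fn: callable mapping points (N, 3) to densities (N,).
    """
    x = x.detach().requires_grad_(True)
    sigma = sigma_fn(x)
    grad, = torch.autograd.grad(sigma.sum(), x, create_graph=True)
    return -F.normalize(grad, dim=-1)
```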

2. NeRD: Neural Reflectance Decomposition from Image Collections

Authors: Mark Boss, Raphael Braun, Varun Jampani, Jonathan T. Barron, Ce Liu, Hendrik P.A. Lensch

Publication: ICCV 2021

📄 Paper | 🌐 Project Page | 💻 Code

3. Neural Ray-Tracing: Learning Surfaces and Reflectance for Relighting and View Synthesis

Authors: Julian Knodt, Joe Bartusek, Seung-Hwan Baek, Felix Heide

Publication: arXiv 2021

📄 Paper | 💻 Code

4. NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination

Authors: Xiuming Zhang, Pratul P. Srinivasan, Boyang Deng, Paul Debevec, William T. Freeman, Jonathan T. Barron

Publication: SIGGRAPH Asia 2021

Note: Undoubtedly an awesome piece of work; the "normal smoothness" regularizer originates from this paper. My only regret is that the authors seem to have been aware that NeRF-based inverse rendering struggles to decouple shadows from materials, yet this is not mentioned in the Limitations section. Their choice to reduce light intensity in the synthetic scenes (unlike vanilla NeRF scenes, which show pronounced shadows and indirect illumination) influenced many subsequent works that focus on novel view synthesis rather than on the accuracy of material decoupling, which is essential for artifact-free relighting.

📄 Paper | 🌐 Project Page | 💻 Code
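
A minimal sketch of a normal-smoothness regularizer in the spirit described above (an assumption about the common formulation, not the authors' exact loss): penalize the difference between the predicted normal at a point and at a slightly jittered neighbor.

```python
import torch

def normal_smoothness_loss(normal_fn, x, eps=1e-2):
    """normal_fn: callable mapping 3D points (N, 3) to unit normals (N, 3)."""
    jitter = torch.randn_like(x) * eps
    return (normal_fn(x) - normal_fn(x + jitter)).abs().mean()
```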

5. Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition

Authors: Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan T. Barron, Hendrik P.A. Lensch

Publication: NeurIPS 2021

📄 Paper | 🌐 Project Page | 💻 Code

6. PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo

Authors: Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong

Publication: ECCV 2022

📄 Paper | 🌐 Project Page | 💻 Code

7. NeILF: Neural Incident Light Field for Physically-based Material Estimation

Authors: Yao Yao, Jingyang Zhang, Jingbo Liu, Yihang Qu, Tian Fang, David McKinnon, Yanghai Tsin, Long Quan

Publication: ECCV 2022

📄 Paper | 💻 Code

8. L-Tracing: Fast Light Visibility Estimation on Neural Surfaces by Sphere Tracing

Authors: Ziyu Chen, Chenjing Ding, Jianfei Guo, Dongliang Wang, Yikang Li, Xuan Xiao, Wei Wu, Li Song

Publication: ECCV 2022

Note: Visibility estimation without training.

📄 Paper | 💻 Code
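
A minimal sketch of light-visibility via sphere tracing (my own illustration assuming a callable signed distance function `sdf`, not the authors' code): march from a surface point toward the light, stepping by the SDF value, and report occlusion if the distance drops to zero before the light is reached.

```python
import numpy as np

def visible_to_light(sdf, x, light_pos, eps=1e-3, max_steps=64):
    """Sphere trace from surface point x toward light_pos.

    Returns False if an occluder is hit before reaching the light.
    sdf: callable mapping a 3D point (shape (3,)) to a signed distance.
    """
    to_light = light_pos - x
    dist_to_light = np.linalg.norm(to_light)
    ray = to_light / dist_to_light
    t = 10.0 * eps                      # small offset to step off the starting surface
    for _ in range(max_steps):
        h = sdf(x + t * ray)            # SDF value is a safe step size
        if h < eps:                     # hit geometry: occluded
            return False
        t += h
        if t >= dist_to_light:          # reached the light unobstructed
            return True
    return True                          # ran out of steps; treat as visible (sketch only)
```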

9. SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections

Authors: Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani

Publication: NeurIPS 2022

📄 Paper | 🌐 Project Page | 💻 Code

10. Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering

Authors: Benjamin Attal, Dor Verbin, Ben Mildenhall, Peter Hedman, Jonathan T. Barron, Matthew O'Toole, Pratul P. Srinivasan

Publication: ECCV 2024 (Oral)

📄 Paper | 🌐 Project Page | 💻 Code


NeuS-based Inverse Rendering

1. IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

Authors: Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely

Publication: CVPR 2022 (Oral)

Note: Based on NeuS.

📄 Paper | 🌐 Project Page | 💻 Code

2. NeRO: Neural Geometry and BRDF Reconstruction of Reflective Objects from Multiview Images

Authors: Yuan Liu, Peng Wang, Cheng Lin, Xiaoxiao Long, Jiepeng Wang, Lingjie Liu, Taku Komura, Wenping Wang

Publication: SIGGRAPH 2023

Note: Applies the split-sum approximation to constrain the learning of reflective scenes.

📄 Paper | 🌐 Project Page | 💻 Code
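
For reference, the split-sum approximation (borrowed from real-time rendering) roughly factors the specular part of the rendering equation into a pre-filtered environment-light term and a pre-integrated BRDF term:

```latex
\int_{\Omega} L_i(\omega_i)\, f_r(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i
\;\approx\;
\underbrace{\frac{\int_{\Omega} L_i(\omega_i)\, D(\omega_i)\, d\omega_i}{\int_{\Omega} D(\omega_i)\, d\omega_i}}_{\text{pre-filtered environment light}}
\cdot
\underbrace{\int_{\Omega} f_r(\omega_i, \omega_o)\, (\mathbf{n}\cdot\omega_i)\, d\omega_i}_{\text{pre-integrated BRDF}}
```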

3. SIRe-IR: Inverse Rendering for BRDF Reconstruction with Shadow and Illumination Removal in High-Illuminance Scenes

Authors: Ziyi Yang, Yanzhen Chen, Xinyu Gao, Yazhen Yuan, Yu Wu, Xiaowei Zhou, Xiaogang Jin

Publication: It will never be accepted, lol

Note: This is the first paper to directly address the issue that shadows and materials cannot be decoupled under strong lighting. The non-linear ACES mapping is crucial for removing shadows and indirect illumination.

📄 Paper | 💻 Code
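
For context, a minimal sketch of a non-linear ACES mapping, using the widely cited Narkowicz fit of the ACES filmic curve (an assumption for illustration; the paper may use a different ACES variant):

```python
import numpy as np

def aces_tonemap(x):
    """Map linear HDR radiance to [0, 1] with the Narkowicz ACES filmic fit."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return np.clip((x * (a * x + b)) / (x * (c * x + d) + e), 0.0, 1.0)
```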

4. Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields (NeP)

Authors: Haoyuan Wang, Wenbo Hu, Lei Zhu, Rynson W.H. Lau

Publication: CVPR 2024

📄 Paper | 🌐 Project Page | 💻 Code

5. PBIR-NIE: Glossy Object Capture under Non-Distant Lighting

Authors: Guangyan Cai, Fujun Luan, MiloΕ‘ HaΕ‘an, Kai Zhang, Sai Bi, Zexiang Xu, Iliyan Georgiev, Shuang Zhao

Publication: arXiv 2024

📄 Paper


IDR-based Inverse Rendering

1. PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting

Authors: Kai Zhang*, Fujun Luan*, Qianqian Wang, Kavita Bala, Noah Snavely

Publication: CVPR 2021

Note: A successful application of spherical Gaussians (SG) to inverse rendering. Unfortunately, the isotropic SG model of environment lighting limits its ability to handle anisotropic appearance. Regardless, this is a very awesome work.

📄 Paper | 🌐 Project Page | 💻 Code
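
For reference, a single spherical Gaussian lobe and the SG mixture commonly used to represent environment lighting can be written as (notation is illustrative, not necessarily the paper's):

```latex
G(\omega;\, \xi, \lambda, \mu) = \mu\, e^{\lambda (\omega \cdot \xi - 1)},
\qquad
L_i(\omega) \approx \sum_{k=1}^{K} G(\omega;\, \xi_k, \lambda_k, \mu_k)
```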

2. Modeling Indirect Illumination for Inverse Rendering (InvRender)

Authors: Yuanqing Zhang, Jiaming Sun, Xingyi He, Huan Fu, Rongfei Jia, Xiaowei Zhou

Publication: CVPR 2022

Note: A nice constraint on indirect illumination.

📄 Paper | 🌐 Project Page | 💻 Code


DMTet-based Inverse Rendering

1. Extracting Triangular 3D Models, Materials, and Lighting From Images (NVDiffrec)

Authors: Jacob Munkberg, Jon Hasselgren, Tianchang Shen, Jun Gao, Wenzheng Chen, Alex Evans, Thomas MΓΌller, Sanja Fidler

Publication: CVPR 2022 (Oral)

📄 Paper | 💻 Code

2. Shape, Light, and Material Decomposition from Images using Monte Carlo Rendering and Denoising (NVDiffrecMC)

Authors: Jon Hasselgren, Nikolai Hofmann, Jacob Munkberg

Publication: NeurIPS 2022

📄 Paper | 🌐 Project Page | 💻 Code


DVGO-based Inverse Rendering

1. Neural-PBIR Reconstruction of Shape, Material, and Illumination

Authors: Cheng Sun, Guangyan Cai, Zhengqin Li, Kai Yan, Cheng Zhang, Carl Marshall, Jia-Bin Huang, Shuang Zhao, Zhao Dong

Publication: ICCV 2023

📄 Paper | 🌐 Project Page | 💻 Code (coming soon)


TensoRF-based Inverse Rendering

1. TensoIR: Tensorial Inverse Rendering

Authors: Haian Jin, Isabella Liu, Peijia Xu, Xiaoshuai Zhang, Songfang Han, Sai Bi, Xiaowei Zhou, Zexiang Xu, Hao Su

Publication: CVPR 2023

📄 Paper | 🌐 Project Page | 💻 Code

2. TensoSDF: Roughness-aware Tensorial Representation for Robust Geometry and Material Reconstruction

Authors: Jia Li, Lu Wang, Lei Zhang, Beibei Wang

Publication: SIGGRAPH 2024

📄 Paper | 🌐 Project Page | 💻 Code


Instant-NGP-based Inverse Rendering

1. MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling

Authors: Yuxin Dai, Qi Wang, Jingsen Zhu, Dianbing Xi, Yuchi Huo, Chen Qian, Ying He

Publication: arXiv 2024

📄 Paper | 🌐 Project Page | 💻 Code


Diffusion Prior

1. Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering (DPI)

Authors: Linjie Lyu, Ayush Tewari, Marc Habermann, Shunsuke Saito, Michael ZollhΓΆfer, Thomas LeimkΓΌhler, Christian Theobalt

Publication: SIGGRAPH Asia 2023

📄 Paper | 🌐 Project Page | 💻 Code

2. IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination

Authors: Xi Chen, Sida Peng, Dongchen Yang, Yuan Liu, Bowen Pan, Chengfei Lv, Xiaowei Zhou

Publication: ECCV 2024

📄 Paper | 🌐 Project Page | 💻 Code


Point-based Inverse Rendering

From my perspective, high-quality inverse rendering from a collection of images cannot be achieved with 3D-GS alone, because 3D-GS lacks robust geometry, which is fatal for IR: it directly affects visibility estimation and limits the ability to decouple shadows from materials.

1. Relightable 3D Gaussian: Real-time Point Cloud Relighting with BRDF Decomposition and Ray Tracing

Authors: Jian Gao, Chun Gu, Youtian Lin, Hao Zhu, Xun Cao, Li Zhang, Yao Yao

Publication: ECCV 2024

📄 Paper | 🌐 Project Page | 💻 Code

2. GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces

Authors: Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, Yuexin Ma

Publication: CVPR 2024

📄 Paper | 🌐 Project Page | 💻 Code

3. GS-IR: 3D Gaussian Splatting for Inverse Rendering

Authors: Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, Kui Jia

Publication: CVPR 2024

📄 Paper | 🌐 Project Page | 💻 Code

4. GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization

Authors: Yahao Shi, Yanmin Wu, Chenming Wu, Xing Liu, Chen Zhao, Haocheng Feng, Jingtuo Liu, Liangjun Zhang, Jian Zhang, Bin Zhou, Errui Ding, Jingdong Wang

Publication: arXiv 2023

📄 Paper | 🌐 Project Page | 💻 Code (not yet)

5. DeferredGS: Decoupled and Editable Gaussian Splatting with Deferred Shading

Authors: Tong Wu, Jia-Mu Sun, Yu-Kun Lai, Yuewen Ma, Leif Kobbelt, Lin Gao

Publication: arXiv 2024

📄 Paper

6. Differentiable Point-based Inverse Rendering

Authors: Hoon-Gyu Chung, Seokjun Choi, Seung-Hwan Baek

Publication: CVPR 2024

📄 Paper | 🌐 Project Page | 💻 Code

7. GS-ROR: 3D Gaussian Splatting for Reflective Object Relighting via SDF Priors

Authors: Zuoliang Zhu, Beibei Wang, Jian Yang

Publication: arXiv 2024

📄 Paper

8. PRTGS: Precomputed Radiance Transfer of Gaussian Splats for Real-Time High-Quality Relighting

Authors: Yijia Guo, Yuanxi Bai, Liwen Hu, Ziyi Guo, Mianzhi Liu, Yu Cai, Tiejun Huang, Lei Ma

Publication: arXiv 2024

📄 Paper

9. Progressive Radiance Distillation for Inverse Rendering with Gaussian Splatting

Authors: Keyang Ye, Qiming Hou, Kun Zhou

Publication: arXiv 2024

📄 Paper

10. Subsurface Scattering for 3D Gaussian Splatting

Authors: Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P.A. Lensch

Publication: arXiv 2024

📄 Paper | 🌐 Project Page | 💻 Code

11. Efficient Relighting with Triple Gaussian Splatting

Authors: Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu

Publication: SIGGRAPH Asia 2024

📄 Paper | 🌐 Project Page | 💻 Code
