
## Set up the Python environment

Initialize the Python environment by running:

```bash
conda create -n instant-nvr python=3.9
conda activate instant-nvr
```

Then, install `pytorch3d=0.7.2` according to the instructions here.
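For reference, here is a minimal sketch of one common route, assuming a CUDA-enabled PyTorch supported by pytorch3d 0.7.2 is already installed in the `instant-nvr` environment (the linked instructions also cover prebuilt packages and other options):

```bash
# Sketch: build pytorch3d 0.7.2 from its GitHub tag with pip.
# Assumes a compatible CUDA-enabled PyTorch is already installed;
# prefer the official instructions if a prebuilt package matches your setup.
pip install "git+https://github.com/facebookresearch/pytorch3d.git@v0.7.2"
```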

Finally, install other packages by running:

```bash
pip install -r requirements.txt
```
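As an optional sanity check (our suggestion, not part of the original instructions), you can confirm that the two key dependencies import and that CUDA is visible:

```bash
# Optional: print torch and pytorch3d versions and CUDA availability.
python -c "import torch, pytorch3d; print(torch.__version__, pytorch3d.__version__, torch.cuda.is_available())"
```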

## Set up datasets

For both datasets, we refine the camera parameters. See below for further details.

### ZJU-MoCap dataset

Since the dataset is licensed, we require users to agree to certain policies before obtaining it. Anyone who wants access to the dataset can fill out this form to obtain download instructions. Alternatively, you can fill in this agreement and email it to Chen Geng, with a cc to Sida Peng and Xiaowei Zhou, to obtain access.

Please note that even if you have previously downloaded the ZJU-MoCap dataset from our previous work, it is essential to re-download it: we have refined the dataset with more accurate camera parameters and added auxiliary files that are crucial for running our code.

If you have sent an email and not received a response within three days, your email may have been overlooked; we kindly ask you to resend it as a reminder.

After acquiring the download link, set up the dataset by running:

```bash
ROOT=/path/to/instant-nvr
mkdir -p $ROOT/data
cd $ROOT/data
ln -s /path/to/my-zjumocap zju-mocap
```
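To confirm the link resolves to the downloaded data, here is a quick optional check (our suggestion, not part of the original instructions):

```bash
# Follow the symlink and list the dataset root; you should see the
# per-subject folders (exact names depend on the sequences you obtained).
ls -lL $ROOT/data/zju-mocap
```

The same check works for the MonoCap link below (`ls -lL $ROOT/data/monocap`).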

### MonoCap dataset

Following animatable_nerf, the dataset is composed of sequences from DeepCap and DynaCap, whose licenses forbid further distribution.

Please download the raw data here and email Chen Geng, with a cc to Sida Peng, for instructions on how to process this dataset.

After successfully obtaining the dataset, set it up by running:

```bash
ROOT=/path/to/instant-nvr
mkdir -p $ROOT/data
cd $ROOT/data
ln -s /path/to/monocap monocap
```

### Custom dataset

We have recently uploaded our instructions for processing custom datasets here. They are still at an early stage and have not been fully tested yet. We need your feedback! Please try them and let us know if you have any questions.