This folder contains the implementation of DGMN2 for object detection with Sparse R-CNN.
| Method | Backbone | Lr schd | AP | Config | Download |
|---|---|---|---|---|---|
| Sparse R-CNN | DGMN2-Small | 3x | 48.2 | config | model |
Clone the repository locally:

```shell
git clone https://github.com/fudan-zvg/DGMN2
```
a. Install Detectron2 following the official instructions. Here we use Detectron2 0.4.
b. Install PyTorch Image Models. Here we use PyTorch Image Models 0.4.5.
```shell
pip install timm==0.4.5
```
c. Build the extension.
```shell
cd dcn
python setup.py build_ext --inplace
```
First, prepare COCO dataset according to the guidelines in Detectron2.
Then, download the weights pretrained on ImageNet, and put them in a folder `pretrained/`.
To train DGMN2-Small + Sparse R-CNN using 300 learnable proposals on COCO train2017 on a single node with 8 GPUs for 36 epochs run:
```shell
python train_net.py --num-gpus 8 --config-file configs/sparsercnn.dgmn2small.300pro.3x.yaml
```
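If you train with fewer than 8 GPUs, the total batch size shrinks, and the common linear-scaling heuristic used with Detectron2 configs adjusts the base learning rate in proportion. A minimal sketch of that arithmetic (the helper name and the example values are illustrative, not taken from this repo's configs):

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear-scaling heuristic: scale the learning rate with total batch size."""
    return base_lr * new_batch / base_batch

# e.g. a config tuned for 16 images/batch run at 8 images/batch
# (both numbers are placeholders; read the real values from the yaml)
print(scaled_lr(2.5e-5, 16, 8))  # → 1.25e-05
```

The scaled value can then be passed as a command-line config override (e.g. `SOLVER.BASE_LR`), which Detectron2 accepts after the flags.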
To evaluate DGMN2-Small + Sparse R-CNN on COCO val2017 on a single node with 8 GPUs run:

```shell
python train_net.py --num-gpus 8 --config-file configs/sparsercnn.dgmn2small.300pro.3x.yaml --eval-only MODEL.WEIGHTS path/to/checkpoint_file
```
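With `--eval-only`, Detectron2's COCO evaluator also writes the raw detections as a COCO-format JSON list under the output directory (typically `inference/coco_instances_results.json` inside `OUTPUT_DIR`; the exact path depends on your config). A small sketch of post-processing that file, here just counting confident detections (function name and threshold are ours, not part of the repo):

```python
import json

def count_confident(results_path: str, thresh: float = 0.5) -> int:
    """Count detections scoring above `thresh` in a COCO-format results file.

    The file is a JSON list of dicts with "image_id", "category_id",
    "bbox" and "score" keys, as written by Detectron2's COCO evaluator.
    """
    with open(results_path) as f:
        dets = json.load(f)
    return sum(1 for d in dets if d["score"] > thresh)
```

Usage, assuming the default output location: `count_confident("output/inference/coco_instances_results.json")`.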