## Summary

| Model | Acc@1 | Acc@5 | #Params | FLOPs | Inf. Time* |
|:---|:---:|:---:|:---:|:---:|:---:|
| ResNet-50 | 76.40 | 93.15 | 25.6M | 4.12G | 1.41ms |
| LIP-ResNet-50 | 78.19 | 93.96 | 23.9M | 5.33G | 1.88ms |
| | +1.79 | +0.81 | -6.6% | +29.4% | +33% |
| ResNet-101 | 77.98 | 93.98 | 44.5M | 7.85G | 2.29ms |
| LIP-ResNet-101 | 79.33 | 94.60 | 42.9M | 9.06G | 2.77ms |
| | +1.46 | +0.62 | -3.6% | +15.4% | +21% |
| DenseNet-121 | 75.62 | 92.56 | 8.0M | 2.88G | 1.49ms |
| LIP-DenseNet-121 | 76.64 | 93.16 | 8.7M | 4.13G | 1.80ms |
| | +1.02 | +0.60 | +8.8% | +43.4% | +21% |

\* Average inference time per image, measured by repeated inference with batch size 32 on a single Titan Xp card.

\*\* The LIP models here denote the full LIP architectures with Bottleneck-128 logit modules.
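
For reference, timing of this kind is usually done along the following lines. This is a minimal sketch, not the authors' benchmark script; torchvision's `resnet50` stands in for the LIP models.

```python
import time
import torch
import torchvision.models as models

# Minimal timing sketch: average per-image inference time at batch size 32.
# torchvision's resnet50 is a stand-in here for the LIP models.
model = models.resnet50().cuda().eval()
batch = torch.randn(32, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):               # warm-up to exclude one-time CUDA costs
        model(batch)
    torch.cuda.synchronize()          # wait for pending GPU work
    start = time.time()
    repeats = 100
    for _ in range(repeats):
        model(batch)
    torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"{elapsed / (repeats * 32) * 1000:.2f} ms per image")
```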

## Getting Started

Our training and testing code `main.py` is modified from the official PyTorch ImageNet example. You can refer to that example for preparing the ImageNet dataset and installing dependencies; its data pipeline looks roughly like the sketch below.
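
A minimal sketch of the official example's training pipeline, assuming the standard ImageNet directory layout; the dataset path is a placeholder.

```python
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# ImageNet is expected under <root>/train and <root>/val, with one
# subfolder per class. The path below is a placeholder.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

train_dataset = datasets.ImageFolder(
    "/path/to/imagenet/train",
    transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ]))

train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=256, shuffle=True,
    num_workers=8, pin_memory=True)
```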

## Training

The code `main.py` is specialized for single-node, multi-GPU training for faster speed. You can configure the settings under the `train-lip_resnet50` target in the `Makefile` and then train LIP-ResNet-50 by simply running

```
make train-lip_resnet50
```

If the command above fails, you can fall back to the official PyTorch example. In that case, you need to modify the learning rate schedule to be consistent with the paper, i.e., the learning rate decays 10x at epochs 30, 60, and 80, as in the sketch below.
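
In PyTorch this schedule can be expressed with `MultiStepLR`; a minimal sketch, assuming SGD with the official example's default hyperparameters:

```python
import torch

# Sketch of the paper's schedule: lr decays 10x at epochs 30, 60 and 80.
# The linear layer is a stand-in for the actual network.
model = torch.nn.Linear(224 * 224 * 3, 1000)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 80], gamma=0.1)

for epoch in range(90):
    # ... train for one epoch (optimizer.step() per batch) ...
    scheduler.step()              # decay once per epoch

print(scheduler.get_last_lr())    # ~[1e-4] after the final decay
```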

## Evaluating

Likewise, you can evaluate the models with

```
make val-lip_resnet50
make val-lip_resnet101
make val-lip_densenet121
```

You can place our pretrained models in this directory and evaluate them. The results should be:

```
LIP-ResNet-50
Epoch [0] * Acc@1 78.186 Acc@5 93.964

LIP-ResNet-101
Epoch [0] * Acc@1 79.330 Acc@5 94.602

LIP-DenseNet-121
Epoch [0] * Acc@1 76.636 Acc@5 93.156
```