EfficientDNNs

A collection of recent methods on DNN compression and acceleration. Methods for efficient DNNs fall mainly into five categories:

  • neural architecture re-design or search
    • maintain accuracy at lower cost (e.g., #Params, #FLOPs): MobileNet, ShuffleNet, etc.
    • maintain cost with higher accuracy: Inception, ResNeXt, Xception, etc.
  • pruning (structured and unstructured; a minimal magnitude-pruning sketch follows this list)
  • quantization
  • matrix decomposition
  • knowledge distillation
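
Unstructured pruning, for example, typically removes individual low-magnitude weights. Below is a minimal sketch (not from any paper in this collection) of global magnitude pruning in PyTorch; the `magnitude_prune` helper and its `sparsity` parameter are illustrative names, not an established API.

```python
# Illustrative sketch of global unstructured magnitude pruning.
# Assumption: weights with the smallest absolute values contribute least,
# so the bottom `sparsity` fraction is zeroed out across all weight matrices.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the `sparsity` fraction of smallest-magnitude weights, in place."""
    weights = torch.cat([p.detach().abs().flatten()
                         for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(weights, sparsity)   # global magnitude cutoff
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:                         # skip biases / norm parameters
                p.mul_((p.abs() >= threshold).float())

# Usage: zero out 80% of the weights of a small MLP.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.8)
```

Structured pruning would instead remove whole channels or filters, so the resulting network stays dense and needs no sparse-kernel support for speedups.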

Abbreviations: in the lists below, o stands for oral, w for workshop, s for spotlight, and b for best paper.

Papers

NAS (Neural Architecture Search)

Papers-Adversarial Attacks

Papers-Visualization and Interpretability

Papers-Knowledge Distillation

People (in alphabetical order)

People in NAS (in alphabetical order)

Venues

Lightweight DNN Engines/APIs

Related Repos and Websites

News
