[ECCV'24] Official Implementation of "Augmented Neural Fine-Tuning for Efficient Backdoor Purification"


If you like our project, please give us a star ⭐ on GitHub for the latest updates.

arXiv License: MIT

😮 Highlights

💡 Simple, Efficient Backdoor Purification

  • Neural Mask Fine-tuning instead of direct weight fine-tuning
  • A universal data augmentation, MixUp, for augmenting the small validation dataset used for fine-tuning
  • A clean-accuracy-preserving regularizer for better clean test accuracy after purification
  • Extensive evaluation on different benchmarks

🚩 Updates

Welcome to watch 👀 this repository for the latest updates.

[2024.07.07] : Code for NFT is released

[2024.07.01] : NFT is accepted to ECCV'2024

🛠️ Methodology

  • We propose Neural mask Fine-tuning (NFT), which aims to optimally re-organize neuron activities so that the effect of the backdoor is removed.

  • Utilizing a simple data augmentation like MixUp, NFT relaxes the trigger synthesis process and eliminates the adversarial search module required by previous SOTA methods.

  • Our study further reveals that direct weight fine-tuning under limited validation data results in poor post-purification clean test accuracy, primarily due to overfitting. To overcome this, we propose to fine-tune neural masks instead of model weights.

  • In addition, a mask regularizer has been devised to further mitigate the model drift during the purification process.

  • The distinct characteristics of NFT render it highly efficient in both runtime and sample usage, as it can remove the backdoor even when only a single sample is available from each class. A minimal sketch of the masking scheme is given below.
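
A minimal PyTorch sketch of the core idea, for intuition only: per-channel masks are attached to the convolution layers, optimized on MixUp-augmented clean validation data, and pulled toward 1 by a regularizer so clean accuracy is preserved. The mask placement, hyperparameters, and function names here are illustrative assumptions, not the repo's exact implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def attach_masks(model):
        """Attach a learnable per-channel mask (init 1.0) to every conv layer."""
        masks = []
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                m = nn.Parameter(torch.ones(module.out_channels, device=module.weight.device))
                masks.append(m)
                # Returning a value from a forward hook replaces the module output,
                # so each output channel gets scaled by its mask.
                module.register_forward_hook(lambda mod, inp, out, m=m: out * m.view(1, -1, 1, 1))
        return masks

    def purify(model, val_loader, epochs=100, lr=0.01, alpha=1.0, reg_wt=1e-3, device="cuda"):
        model.to(device).eval()                    # weights and BN statistics stay frozen
        for p in model.parameters():
            p.requires_grad_(False)
        masks = attach_masks(model)                # only the masks are optimized
        opt = torch.optim.SGD(masks, lr=lr, momentum=0.9)
        for _ in range(epochs):
            for x, y in val_loader:                # the small clean validation set
                x, y = x.to(device), y.to(device)
                # MixUp: convex combination of two samples and of their labels
                lam = float(torch.distributions.Beta(alpha, alpha).sample())
                idx = torch.randperm(x.size(0), device=device)
                out = model(lam * x + (1 - lam) * x[idx])
                loss = lam * F.cross_entropy(out, y) + (1 - lam) * F.cross_entropy(out, y[idx])
                reg = sum((m - 1).abs().sum() for m in masks)   # keep masks near 1
                opt.zero_grad()
                (loss + reg_wt * reg).backward()
                opt.step()
                with torch.no_grad():
                    for m in masks:
                        m.clamp_(0.0, 1.0)
        return model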

PyTorch Implementation

Create Conda Environment

  • Install Anaconda and create an environment

     conda create -n fip-env python=3.10
     conda activate fip-env
  • After creating a virtual environment, run

     pip install -r requirements.txt

Download the Datasets

Create Benign and Backdoor Models

For CIFAR10
  • To train a benign model

     python train_backdoor_cifar.py --poison-type benign --output-dir /folder/to/save --gpuid 0
  • To train a backdoor model with the "blend" attack and a poison ratio of 10%

     python train_backdoor_cifar.py --poison-type blend --poison-rate 0.10 --output-dir /folder/to/save --gpuid 0
For GTSRB, Tiny-ImageNet, ImageNet
  • Follow the same training pipeline as CIFAR10, changing the trigger size, poison rate, and data transformations according to the dataset.

  • For ImageNet, you can first download pre-trained ResNet50 model weights from PyTorch, then train this benign model on clean and backdoor training data for 20 epochs to insert the backdoor; a sketch is given below.
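
A hedged sketch of that ImageNet recipe, assuming torchvision >= 0.13; `mixed_dataset` is a placeholder for your own ImageFolder-style dataset in which a fraction of the images carry the trigger and are relabeled to the attack target, and the optimizer settings are illustrative:

    import torch
    import torch.nn.functional as F
    import torchvision

    # Start from torchvision's ImageNet-pre-trained ResNet50 weights
    model = torchvision.models.resnet50(
        weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V2).cuda()

    # Placeholder loader: yields both clean and trigger-stamped samples
    loader = torch.utils.data.DataLoader(mixed_dataset, batch_size=256,
                                         shuffle=True, num_workers=8)

    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9, weight_decay=1e-4)
    model.train()
    for epoch in range(20):                        # 20 epochs, as noted above
        for images, labels in loader:
            images, labels = images.cuda(), labels.cuda()
            opt.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            opt.step()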

For Action Recognition
  • Follow this link to create the backdoor model.
For Object Detection
  • Follow this link to create the backdoor model.
For 3D Point Cloud Classifier
  • Follow this link to create the backdoor model.
For Language Generation
  • Follow this link to create the backdoor model.

Backdoor Purification using NFT

  • For CIFAR10, to remove the backdoor with 1% clean validation data:

     python Remove_Backdoor.py --poison-type blend --val-frac 0.01 --checkpoint "path/to/backdoor/model" --gpuid 0 
  • Please change the dataloader and data transformations according to the dataset.

  • The algorithm is the same for all tasks; only the MixUp technique may differ slightly from task to task (a generic image-classification version is sketched after this list). For example,

    • You can follow this paper to apply MixUp in the Action Recognition task

    • You can follow this paper for MixUp in Object Detection

    • For Language Generation, follow this paper to apply MixUp.
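
For reference, the plain image-classification form of MixUp (Zhang et al., 2018), as used for datasets like CIFAR10 and GTSRB, looks roughly like this; the task-specific variants in the papers above replace this batch-level mixing:

    import numpy as np
    import torch
    import torch.nn.functional as F

    def mixup_batch(x, y, alpha=1.0):
        """Mix a batch with a shuffled copy of itself; return both label sets."""
        lam = np.random.beta(alpha, alpha)
        idx = torch.randperm(x.size(0), device=x.device)
        return lam * x + (1 - lam) * x[idx], y, y[idx], lam

    # Loss on a mixed batch: the same convex combination of the two losses
    def mixup_loss(logits, y_a, y_b, lam):
        return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)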

🚀 Purification Results

Analysis

✏️ Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and a citation 📝.

@article{karim2024augmented,
  title={Augmented Neural Fine-Tuning for Efficient Backdoor Purification},
  author={Karim, Nazmul and Arafat, Abdullah Al and Khalid, Umar and Guo, Zhishan and Rahnavard, Nazanin},
  journal={arXiv preprint arXiv:2407.10052},
  year={2024}
}
