This repository contains the implementation for the paper:
Knowledge Distillation Inspired Fine-Tuning of Tucker Decomposed CNNs and Adversarial Robustness Analysis
Ranajoy Sadhukhan, Avinab Saha, Jayanta Mukhopadhyay, Amit Patra
IEEE International Conference on Image Processing (ICIP), 2020
- Download the train and validation sets of ImageNet ILSVRC12 for the ImageNet experiments.
- Update the data path in the run script.
- Place the CIFAR-10 and CIFAR-100 archives, cifar-10-python.tar.gz and cifar-100-python.tar.gz, in the same folder, named cifar.
The training/testing environment can be initialized using conda as:
```bash
conda env update -n tensordecomp -f environment.yml
source activate tensordecomp
pip install -e .
```
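After activation, a quick sanity check (assuming environment.yml provides PyTorch, which the training code requires):

```python
# Verify the interpreter sees PyTorch and a usable GPU.
import torch
print(torch.__version__, 'CUDA available:', torch.cuda.is_available())
```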
There are two tensor decomposition methods implemented here (a minimal sketch of the Tucker case follows the list):

- CP Decomposition
- Tucker Decomposition
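The following is a minimal NumPy sketch of the Tucker-2 (HOSVD-style) case for a convolution kernel; it is not the repository's implementation, and the toy shapes and ranks are placeholders for whatever the config selects:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker2(weight, r_out, r_in):
    """Tucker-2 decompose a (C_out, C_in, kH, kW) kernel along modes 0 and 1."""
    # Leading left singular vectors of the mode-0 and mode-1 unfoldings.
    U = np.linalg.svd(unfold(weight, 0), full_matrices=False)[0][:, :r_out]  # (C_out, r_out)
    V = np.linalg.svd(unfold(weight, 1), full_matrices=False)[0][:, :r_in]   # (C_in, r_in)
    # Core tensor: weight contracted with U^T (mode 0) and V^T (mode 1).
    core = np.einsum('oihw,or,is->rshw', weight, U, V)
    return U, core, V

W = np.random.randn(64, 32, 3, 3)            # toy conv kernel
U, core, V = tucker2(W, r_out=16, r_in=8)
W_hat = np.einsum('rshw,or,is->oihw', core, U, V)  # rank-limited reconstruction
print('relative error:', np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

In the decomposed network, each such layer is typically replaced by three convolutions: a 1x1 projection (V^T), the kH x kW core convolution, and a 1x1 expansion (U).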
The original paper addresses only the DeepFool attack; this repository extends the experiments to two more adversarial attacks. We use Foolbox for the implementations of these attacks.
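A hedged example of running one such attack with Foolbox 3.x (the pretrained torchvision model below is only a stand-in for a decomposed checkpoint, and the API may differ in other Foolbox versions):

```python
import torchvision
import foolbox as fb

# Stand-in model; in practice, load a (decomposed) checkpoint instead.
model = torchvision.models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A small batch of sample images shipped with Foolbox.
images, labels = fb.utils.samples(fmodel, dataset='imagenet', batchsize=4)

attack = fb.attacks.LinfDeepFoolAttack()
_, advs, success = attack(fmodel, images, labels, epsilons=8 / 255)
print('robust accuracy:', 1 - success.float().mean().item())
```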
Update the model and dataset information in TensorDecomp/config/default.py accordingly.
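The file appears to be a yacs-style config; the sketch below is only a guess at its shape. _C.SOLVER.LOSS is the one field named in this README, while the MODEL and DATASET keys here are purely illustrative:

```python
from yacs.config import CfgNode as CN

_C = CN()

_C.MODEL = CN()
_C.MODEL.ARCH = 'resnet18'    # hypothetical key: architecture name

_C.DATASET = CN()
_C.DATASET.NAME = 'cifar10'   # hypothetical key: cifar10 / cifar100 / imagenet

_C.SOLVER = CN()
_C.SOLVER.LOSS = 'L2'         # 'L2', 'L1' or 'KD' (field named in this README)
```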
Then launch the run script:

```bash
cd TensorDecomp
chmod +x run.sh
./run.sh
```
To decompose the network, start from a pretrained undecomposed checkpoint and update the run.sh script as follows:

```bash
python main.py --pretrained --decompose --gpu <device_id>
```
Update the architecture name and the type of loss function in the config file TensorDecomp/config/default.py.
To use the logits loss or the KL divergence loss for Knowledge Distillation, update the run.sh script as follows:

```bash
python main.py --pretrained --decompose --teacher --gpu <device_id>
```
Set _C.SOLVER.LOSS to 'L2', 'L1' or 'KD' in TensorDecomp/config/default.py to select the corresponding loss function.
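As a rough sketch of what these three options typically correspond to (the helper name, temperature, and reductions below are illustrative assumptions, not the repository's code):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, kind='KD', T=4.0):
    """Hypothetical helper mirroring the 'L1' / 'L2' / 'KD' options above."""
    if kind == 'L1':   # mean absolute error on the raw logits
        return F.l1_loss(student_logits, teacher_logits)
    if kind == 'L2':   # mean squared error on the raw logits
        return F.mse_loss(student_logits, teacher_logits)
    if kind == 'KD':   # temperature-softened KL divergence (Hinton et al., 2015)
        log_p = F.log_softmax(student_logits / T, dim=1)
        q = F.softmax(teacher_logits / T, dim=1)
        return F.kl_div(log_p, q, reduction='batchmean') * T * T
    raise ValueError(f'unknown loss: {kind}')
```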