Implementing Network Pruning (Weight and Unit Pruning) from Scratch at different percent sparsities of the network.

The weights are trained on the MNIST digit dataset. The ReLU activation function is used within the layers, with hidden layers of sizes 1000, 1000, 500, and 200. Weight and unit pruning were applied iteratively at sparsity percentages of [0, 25, 50, 60, 70, 80, 90, 95, 97, 99].
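The two pruning schemes above can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual code: it assumes weight pruning means zeroing the smallest-magnitude individual weights of a layer matrix, and unit pruning means zeroing whole columns (output units) with the smallest L2 norm. The function names `weight_prune` and `unit_prune` are hypothetical.

```python
import numpy as np

def weight_prune(w, sparsity):
    """Zero out the `sparsity`-percent smallest-magnitude weights of w."""
    k = int(w.size * sparsity / 100)
    if k == 0:
        return w.copy()
    # Threshold at the k-th smallest absolute value; keep strictly larger ones.
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return w * (np.abs(w) > threshold)

def unit_prune(w, sparsity):
    """Zero out the `sparsity`-percent of columns (units) with smallest L2 norm."""
    k = int(w.shape[1] * sparsity / 100)
    pruned = w.copy()
    if k == 0:
        return pruned
    norms = np.linalg.norm(w, axis=0)       # one norm per output unit
    pruned[:, np.argsort(norms)[:k]] = 0.0  # drop the weakest units
    return pruned
```

Applied layer by layer at each percentage in the list above, these give the weight-pruned and unit-pruned networks whose accuracies can then be compared on MNIST.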