Many previous works show that quantization becomes more difficult once batch normalization is fused into the preceding convolution, because the folded weights span a much wider range.
So, do you have any comments on quantizing a model with fused batch normalization and convolution when only a smarter min/max range is chosen for activation quantization?
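For context, a minimal NumPy sketch of the standard BN-folding arithmetic, showing how widely varying per-channel BN scales stretch the folded weight range (the function name and toy values here are illustrative, not from any particular framework):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding conv's weights/bias.

    W: conv weights, shape (out_channels, in_channels, kH, kW)
    b: conv bias, shape (out_channels,)
    gamma, beta, mean, var: per-channel BN parameters, shape (out_channels,)
    """
    scale = gamma / np.sqrt(var + eps)        # per-output-channel scale
    W_fold = W * scale[:, None, None, None]   # scale each output channel
    b_fold = (b - mean) * scale + beta
    return W_fold, b_fold

# Toy example: per-channel BN scales that differ by orders of magnitude
# stretch the folded weight range, which is what makes a single
# per-tensor min/max quantization grid coarse for most channels.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(4, 3, 3, 3))
b = np.zeros(4)
gamma = np.array([0.1, 1.0, 5.0, 20.0])      # widely varying BN scales
beta = np.zeros(4); mean = np.zeros(4); var = np.ones(4)

W_fold, _ = fold_bn_into_conv(W, b, gamma, beta, mean, var)
print("weight range before folding:", W.min(), W.max())
print("weight range after folding: ", W_fold.min(), W_fold.max())
```

Since the widened range lives in the weights rather than the activations, a smarter activation min/max alone may not recover the lost weight precision; per-channel weight quantization is the usual remedy for this specific issue.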