TorchFusion-Utils Alpha Release
Alpha release of TorchFusion Utils
What's New!
- Mixed Precision Training
Train your PyTorch models faster and with reduced memory usage. With a few lines of code, you can take advantage of NVIDIA Tensor Cores for accelerated training of deep learning models.
Example
```python
from torchfusion_utils.fp16 import convertToFP16

# convert your model and optimizer to mixed precision mode
model, optim = convertToFP16(model, optim)

# in your batch loop, replace loss.backward() with optim.backward(loss)
optim.backward(loss)
```
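Underneath an FP16 backward step like the one above sits loss scaling: small FP16 gradients can underflow to zero, so the loss is scaled up before differentiation and the gradients scaled back down afterward. A minimal pure-Python sketch of that idea (all names here are illustrative, not the TorchFusion-Utils API):

```python
# Illustrative sketch of static loss scaling, the core technique behind
# FP16 training. Nothing here is TorchFusion-Utils API.

LOSS_SCALE = 1024.0  # a typical static scale factor

def backward_with_scaling(loss, compute_grads):
    """Scale the loss, differentiate, then unscale the gradients.

    compute_grads(scaled_loss) stands in for autograd: it returns the
    gradients of scaled_loss with respect to the parameters.
    """
    scaled_loss = loss * LOSS_SCALE
    scaled_grads = compute_grads(scaled_loss)
    # unscale so the optimizer sees gradients of the original loss
    return [g / LOSS_SCALE for g in scaled_grads]

# toy example: loss = 3 * w with w = 2, so dloss/dw = 3 either way;
# the scaled gradient is L / w, and unscaling recovers 3.0 exactly
grads = backward_with_scaling(3.0 * 2.0, lambda L: [L / 2.0])
```

Scaling and unscaling cancel exactly, so the optimizer update is unchanged; the benefit is purely that the intermediate gradients stay within FP16's representable range.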
- Initializers
Easily initialize your model with state-of-the-art initializers, with fine-grained control over layers and parameters.
Example
```python
import torch.nn as nn
from torchfusion_utils.initializers import *

# initialize the convolution layers with kaiming_normal
kaiming_normal_init(model, types=[nn.Conv2d])

# initialize the linear layers with normal
normal_init(model, types=[nn.Linear])

# initialize batchnorm weights with ones
ones_init(model, types=[nn.BatchNorm2d], category="weight")

# initialize batchnorm bias with zeros
zeros_init(model, types=[nn.BatchNorm2d], category="bias")
```
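For context, Kaiming (He) normal initialization draws weights from a normal distribution with standard deviation sqrt(2 / fan_in), which keeps activation variance stable in ReLU networks. A minimal pure-Python sketch of that computation (the helpers below are illustrative, not the library's implementation):

```python
import math
import random

def kaiming_normal_std(fan_in):
    """Standard deviation used by Kaiming (He) normal initialization
    for ReLU networks: sqrt(2 / fan_in)."""
    return math.sqrt(2.0 / fan_in)

def kaiming_normal_weights(fan_out, fan_in, seed=0):
    """Draw a fan_out x fan_in weight matrix from N(0, std).
    Illustrative only; real layers use tensors, not nested lists."""
    rng = random.Random(seed)
    std = kaiming_normal_std(fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)]
            for _ in range(fan_out)]

# a hypothetical 4x8 linear layer: fan_in = 8, so std = sqrt(2/8) = 0.5
w = kaiming_normal_weights(4, 8)
```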
- Metrics
Support for popular metrics and a simple API for creating your own custom metrics.
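The release notes don't spell out the custom-metric API here, but metric objects in training libraries commonly follow an update/value/reset shape. A hypothetical sketch of a custom metric in that style (not the actual TorchFusion-Utils interface):

```python
class RunningAccuracy:
    """Hypothetical custom metric in the common update/value/reset style;
    the actual TorchFusion-Utils metric API may differ."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, predictions, targets):
        # accumulate correct predictions across batches
        for p, t in zip(predictions, targets):
            if p == t:
                self.correct += 1
            self.total += 1

    def value(self):
        # accuracy over everything seen since the last reset
        return self.correct / self.total if self.total else 0.0

    def reset(self):
        self.correct = 0
        self.total = 0

# usage: call update() once per batch, value() at epoch end
metric = RunningAccuracy()
metric.update([1, 0, 1], [1, 1, 1])
acc = metric.value()
```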
- Model Utils
Functions to reliably load and save models, plus a summary function for analyzing the parameters, computational cost, and structure of deep learning models.
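A model summary of the kind described typically reports per-layer and total parameter counts. A minimal sketch of that bookkeeping over (name, shape) pairs (illustrative only, not the library's summary function, which also covers computational cost and structure):

```python
from functools import reduce
from operator import mul

def count_parameters(layers):
    """Sum parameter counts over (layer_name, shape) pairs.

    layers: list of (name, shape) tuples, where shape holds the
    dimensions of that parameter tensor. Returns (per_layer, total).
    """
    per_layer = {name: reduce(mul, shape, 1) for name, shape in layers}
    return per_layer, sum(per_layer.values())

# a tiny hypothetical conv + linear model description
layers = [
    ("conv1.weight", (16, 3, 3, 3)),  # 16 filters, 3 channels, 3x3 kernel
    ("conv1.bias", (16,)),
    ("fc.weight", (10, 16)),
    ("fc.bias", (10,)),
]
per_layer, total = count_parameters(layers)
```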
Documentation
Comprehensive documentation is available at utils.torchfusion.org