This library implements `scipy.sparse.linalg.LinearOperator`s for deep learning matrices, such as
- the Hessian
- the Fisher/generalized Gauss-Newton (GGN)
- the Monte-Carlo approximated Fisher
- the Fisher/GGN's KFAC approximation (Kronecker-Factored Approximate Curvature)
- the uncentered gradient covariance (aka empirical Fisher)
- the output-parameter Jacobian of a neural net and its transpose
Matrix-vector products are carried out in PyTorch, i.e. potentially on a GPU.
The library supports defining these matrices not only on a mini-batch, but also on entire data sets (by looping over batches during a matvec operation).
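As a rough sketch of what this looks like in code (the constructor signature `HessianLinearOperator(model, loss_function, params, data)` and multiplying with a NumPy vector directly are assumptions based on the documentation; depending on the installed version, an explicit conversion to a SciPy operator may be needed first):

```python
from numpy import random
from torch import nn, rand

from curvlinops import HessianLinearOperator

# toy model, loss function, and a "data set" consisting of two mini-batches
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
loss_function = nn.MSELoss()
data = [(rand(8, 10), rand(8, 2)), (rand(8, 10), rand(8, 2))]
params = [p for p in model.parameters() if p.requires_grad]

# linear operator for the Hessian of the loss, accumulated over both batches
# (constructor signature assumed from the documentation)
H = HessianLinearOperator(model, loss_function, params, data)

# matrix-vector product; the matvec runs in PyTorch (on GPU if model and data live there)
D = sum(p.numel() for p in params)
v = random.rand(D)
Hv = H @ v
```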
You can plug these linear operators into scipy, while carrying out the heavy lifting (matrix-vector multiplies) in PyTorch on GPU. My favorite example for such a routine is `scipy.sparse.linalg.eigsh`, which lets you compute a subset of eigenpairs.
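A minimal sketch of computing the leading Hessian eigenpairs this way (same assumptions as above; the operator is passed to `eigsh` directly, which newer versions may only support after converting to a SciPy operator):

```python
from scipy.sparse.linalg import eigsh
from torch import nn, rand

from curvlinops import HessianLinearOperator

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
loss_function = nn.MSELoss()
data = [(rand(8, 10), rand(8, 2))]
params = [p for p in model.parameters() if p.requires_grad]
H = HessianLinearOperator(model, loss_function, params, data)

# 3 eigenpairs with largest-magnitude eigenvalues, without ever
# materializing the Hessian as a dense matrix
eigenvalues, eigenvectors = eigsh(H, k=3, which="LM")
```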
The library also provides linear operator transformations, like taking the inverse (inverse matrix-vector product via conjugate gradients) or slicing out sub-matrices.
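A sketch of how these transformations might be used; the class names `GGNLinearOperator`, `CGInverseLinearOperator`, and `SubmatrixLinearOperator` and their signatures are taken from the documentation and should be treated as assumptions here:

```python
from numpy import random
from torch import nn, rand

from curvlinops import (
    CGInverseLinearOperator,
    GGNLinearOperator,
    SubmatrixLinearOperator,
)

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
loss_function = nn.MSELoss()
data = [(rand(8, 10), rand(8, 2))]
params = [p for p in model.parameters() if p.requires_grad]

GGN = GGNLinearOperator(model, loss_function, params, data)

# inverse matrix-vector product: each matvec with GGN_inv runs CG internally
# (in practice, one would usually add damping to keep the system well-posed)
GGN_inv = CGInverseLinearOperator(GGN)
v = random.rand(GGN.shape[0])
GGN_inv_v = GGN_inv @ v

# slice out the sub-matrix for the first 10 parameter entries
# (row/column index arguments assumed from the documentation)
GGN_sub = SubmatrixLinearOperator(GGN, list(range(10)), list(range(10)))
```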
Finally, it offers functionality to probe properties of the represented matrices, like their spectral density, trace, or diagonal.
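The documentation lists dedicated estimators for these quantities; as a library-agnostic illustration, the trace can already be estimated with nothing but matrix-vector products via Hutchinson's method (setup as in the sketches above, with the same caveats):

```python
from numpy import mean
from numpy.random import default_rng
from torch import nn, rand

from curvlinops import HessianLinearOperator

model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
loss_function = nn.MSELoss()
data = [(rand(8, 10), rand(8, 2))]
params = [p for p in model.parameters() if p.requires_grad]
H = HessianLinearOperator(model, loss_function, params, data)

# Hutchinson trace estimation: tr(H) is the expectation of v^T H v over
# Rademacher vectors v, so averaging a few samples gives an estimate
rng = default_rng(0)
estimates = []
for _ in range(100):
    v = rng.choice([-1.0, 1.0], size=H.shape[0])  # Rademacher probe vector
    estimates.append(v @ (H @ v))
trace_estimate = mean(estimates)
```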
- Documentation: https://curvlinops.readthedocs.io/en/latest/
- Bug reports & feature requests: https://github.com/f-dangel/curvlinops/issues
Installation: `pip install curvlinops-for-pytorch`
Other features that could be supported in the future include:
- other matrices:
  - the centered gradient covariance
  - terms of the hierarchical GGN decomposition
Logo credits:
- SciPy logo: Unknown, CC BY-SA 4.0, via Wikimedia Commons
- PyTorch logo: https://github.com/soumith, CC BY-SA 4.0, via Wikimedia Commons