from scipy.linalg import toeplitz
from primate.trace import hutch
Performance
import numpy as np
from scipy.linalg import eigvalsh_tridiagonal
a, b = lanczos(T, deg=499, orth=150)
np.sum(np.abs(eigvalsh_tridiagonal(a, b)))
import timeit
timeit.timeit(lambda: hutch(A, maxiter=20, deg=5, fun="log", quad="fttr"), number=1000)
timeit.timeit(lambda: np.sum(np.log(np.linalg.eigvalsh(A))), number=1000)
# a, b = 0.8, 2
# x = np.random.uniform(low=0, high=10, size=40)
# eps = np.random.normal(loc=0, scale=1.0, size=40)
primate usage - quickstart
Below is a quick introduction to primate; for more introductory material, see the theory sections of the docs.
To do trace estimation, use functions in the trace module:
import primate.trace as TR
from primate.random import symmetric
A = symmetric(150)  ## random positive-definite matrix
print(f"XTrace: {TR.xtrace(A):6f}") ## Epperly's algorithm
Actual trace: 75.697397
Girard-Hutch: 75.284468
XTrace: 75.697398
For matrix functions, you can either construct a LinearOperator directly via the matrix_function API, or supply a string to the parameter fun describing the spectral function to apply. For example, one might compute the log-determinant as follows:
from primate.operator import matrix_function
M = matrix_function(A, fun="log")
## M = matrix_function(A, fun=np.log)
logdet(A): -148.321844
GR approx: -148.272202
XTrace: -148.315937
Note that in the above example you can supply to fun either a string describing a built-in spectral function or an arbitrary Callable. The former is preferred when possible, as function evaluations will generally be faster and hutch can also be parallelized. Multi-threaded execution of e.g. hutch with arbitrary functions is not currently allowed due to the GIL, though there are options available; see the integration docs for more details.
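The estimator behind hutch is the Girard–Hutchinson identity E[z^T B z] = tr(B) for isotropic random z. A minimal NumPy sketch of the idea (illustrative only: it forms f(A) explicitly via an eigendecomposition, where primate instead approximates z -> f(A)z with Lanczos) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

## Random symmetric positive-definite test matrix
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = Q @ np.diag(rng.uniform(0.1, 1.0, n)) @ Q.T

## f(A) formed explicitly here, with f = log
ew, ev = np.linalg.eigh(A)
fA = ev @ np.diag(np.log(ew)) @ ev.T

## Girard-Hutchinson: average z^T f(A) z over Rademacher vectors z
m = 2000
Z = rng.choice([-1.0, 1.0], size=(n, m))
est = np.mean(np.einsum("ij,ij->j", Z, fA @ Z))

exact = np.sum(np.log(ew))  ## = tr(log(A))
```

With a few thousand samples the estimate lands within a fraction of a percent of the exact trace; the variance decreases like 1/m.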
For ‘plain’ operators, XTrace should recover the exact trace (up to roundoff error). For matrix functions f(A), there will be some inherent inaccuracy, as the underlying matrix-vector multiplication is approximated with the Lanczos method.
In general, the accuracy depends both on the Lanczos parameters and on the type of matrix function: spectral functions that are difficult or impossible to approximate via low-degree polynomials may suffer more from inaccuracy than others. For example, consider the example below that computes the rank:
## Make a rank-deficient operator
ew = np.sort(ew)
ew[:30] = 0.0
print(f"XTrace: {TR.xtrace(M)}")
Rank: 120
GR approx: 145.36611938476562
XTrace: 143.97018151807674
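To see why the estimates drift here, a quick NumPy experiment (illustrative only, not primate code) fits a low-degree least-squares polynomial to sign(x) and checks its error just above the discontinuity:

```python
import numpy as np

## Least-squares degree-5 polynomial fit of sign(x) on [-1, 1]
x = np.linspace(-1, 1, 2001)
p = np.polynomial.Polynomial.fit(x, np.sign(x), deg=5)

## Near the discontinuity, sign(x) = 1 but the fit falls far short
err_near_zero = abs(1.0 - p(0.05))
```

The pointwise error near 0 is large (well above 0.4 for this degree), so any degree-5 polynomial proxy for sign badly misclassifies small eigenvalues.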
This is not so much a fault of hutch or xtrace as it is the choice of approximation and Lanczos parameters. The sign function has a discontinuity at 0, is not smooth, and is difficult to approximate with low-degree polynomials. One workaround for this issue is to relax the sign function with a low-degree “soft-sign” function: \mathcal{S}_\lambda(x) = \sum\limits_{i=0}^q \left( x(1 - x^2)^i \prod_{j=1}^i \frac{2j - 1}{2j} \right)
Visually, the soft-sign function looks like this:
from primate.special import soft_sign, figure_fun
"smoothstep")) show(figure_fun(
-
+
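For reference, the series above can be transcribed directly into NumPy (a sketch only; primate.special.soft_sign is the library's own implementation, and the helper name here is hypothetical):

```python
import numpy as np

def soft_sign_series(x, q=10):
    """Evaluate S(x) = sum_{i=0}^q x (1 - x^2)^i * prod_{j=1}^i (2j-1)/(2j)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    coeff = 1.0  ## running product prod_{j=1}^i (2j-1)/(2j); empty product at i=0
    for i in range(q + 1):
        out += x * (1.0 - x**2) ** i * coeff
        coeff *= (2 * (i + 1) - 1) / (2 * (i + 1))
    return out
```

As q grows, the partial sums converge to sign(x) on [-1, 1] (exactly ±1 at the endpoints, 0 at the origin), while each partial sum remains a smooth odd polynomial that Lanczos-based quadrature can handle.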