IQPopt is a package designed for fast optimization of parameterized instantaneous quantum polynomial (IQP) circuits using JAX.
Install with
pip install .
or in editable mode with
pip install -e .
The package can be used to optimize parameterized IQP circuits. These are circuits composed of gates of the form $\exp(i\theta_j X_j)$, where each generator $X_j$ is a tensor product of Pauli X operators acting on a subset of the qubits and $\theta_j$ is a trainable parameter.
To define such a circuit (with input state $|0\rangle^{\otimes n}$), we specify the number of qubits and the gates:
import iqpopt as iqp
from iqpopt.utils import local_gates
n_qubits = 2
gates = local_gates(n_qubits, 2)
circuit = iqp.IqpSimulator(n_qubits, gates)
Each element of `gates` corresponds to a trainable parameter and is given by a list of lists that specifies the generators of that parameter. For example, in the above the function `local_gates` returns `gates = [[[0]],[[1]],[[0,1]]]`, which specifies three trainable parameters with gate generators $X_0$, $X_1$ and $X_0 X_1$.
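Since each element of `gates` is itself a list of generators, several generators can in principle share a single trainable parameter. As a sketch (the shared-parameter grouping below is an assumption based on the format described above):

gates_shared = [[[0], [1]], [[0, 1]]]  # parameter 0 multiplies both X_0 and X_1; parameter 1 multiplies X_0 X_1
circuit_shared = iqp.IqpSimulator(n_qubits, gates_shared)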
One can also specify some gates to have fixed, non-trainable parameters. For example,
circuit = iqp.IqpSimulator(n_qubits, [[[0]],[[1]]], init_gates=[[[0,1]]], init_coefs=[0.5])
defines a circuit with two trainable gates with generators $X_0$ and $X_1$, and a fixed gate with generator $X_0 X_1$ whose parameter is set to 0.5.
Note: For very large problems it can be useful to initialize the circuit with the option `sparse=True`. This uses scipy sparse matrix multiplication in place of JAX and can be significantly more memory efficient.
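For example:

circuit_sparse = iqp.IqpSimulator(n_qubits, gates, sparse=True)  # memory-efficient variant for large problems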
IQPopt has been designed for fast evaluation of expectation values of Pauli Z tensors. To estimate the expectation value of a Pauli Z tensor, we represent the operator as a binary string. The estimation uses a Monte Carlo method whose precision is controlled by `n_samples`.
import jax
import jax.numpy as jnp
op = jnp.array([0, 1])  # binary string representing Z_1: a 1 marks a qubit acted on by Z
params = jnp.ones(len(circuit.gates))
n_samples = 1000
key = jax.random.PRNGKey(42)
expval, std = circuit.op_expval(params, op, n_samples, key)
The final line returns an estimate of the expectation value $\langle Z_1 \rangle$ along with the standard deviation of the estimate.
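The returned standard deviation can be used to report the estimate with its uncertainty:

print(f"<Z_1> = {float(expval):.4f} +/- {float(std):.4f}")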
The package also allows for fast batch evaluation of expectation values. If we specify a batch of Z operators by an array
ops = jnp.array([[0,1],[1,0],[1,1]])  # Z_1, Z_0, Z_0 Z_1
we can also batch evaluate the expectation values in parallel:
expvals, stds = circuit.op_expval(params, ops, n_samples, key)
Note: The estimation of each expectation value in the batch is unbiased; however, the estimators may be correlated. This effect can be reduced by increasing `n_samples` in order to reduce the variance of each estimator, or by using the option `indep_estimates=True` to return uncorrelated estimates (at the cost of longer runtime).
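For example, assuming `indep_estimates` is passed as a keyword argument:

expvals, stds = circuit.op_expval(params, ops, n_samples, key, indep_estimates=True)  # uncorrelated, but slower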
We can train our circuit with built-in methods. We first define a loss function:
def loss(params, circuit, ops, n_samples, key):
    expvals = circuit.op_expval(params, ops, n_samples, key)[0]
    return jnp.sum(expvals)
The first argument must be named `params` and corresponds to the trainable parameters. We can then train the circuit as follows:
import numpy as np
import matplotlib.pyplot as plt
optimizer = "Adam"
stepsize = 0.001
n_iters = 1000
params_init = np.random.normal(0, 1/np.sqrt(n_qubits), len(circuit.gates))
ops = np.array([[1,1], [1,0], [0,1]])
n_samples = 1000
loss_kwargs = {
    "params": params_init,
    "circuit": circuit,
    "ops": ops,
    "n_samples": n_samples,
}
trainer = iqp.Trainer(optimizer, loss, stepsize)
trainer.train(n_iters, loss_kwargs)
params = trainer.final_params
plt.plot(trainer.losses)
plt.show()
Automatic stopping of training is possible using the `convergence_interval` option of `train`; see the docstring for more info.
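For example, a sketch assuming `convergence_interval` is passed directly to `train` (the docstring has the exact stopping criterion):

trainer.train(n_iters, loss_kwargs, convergence_interval=100)  # stop once the loss converges over a 100-iteration window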
One can replace the quantum circuit by an analogous bitflipping model described in arXiv:XXXX by initializing the circuit with the `bitflip=True` option:
circuit = iqp.IqpSimulator(n_qubits, gates, bitflip=True)
This can be useful to judge whether the IQP model is making use of interference. Since the bitflipping model is classical, one can also sample from it for large values of `n_qubits`.
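For example, a sketch assuming the bitflip model exposes the same `sample` method used later in this README:

n_qubits_bf = 50
gates_bf = local_gates(n_qubits_bf, 2)
bitflip_circuit = iqp.IqpSimulator(n_qubits_bf, gates_bf, bitflip=True)
params_bf = np.random.normal(0, 1/np.sqrt(n_qubits_bf), len(gates_bf))
samples_bf = bitflip_circuit.sample(params_bf, shots=100)  # tractable since the model is classical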
We can also view the circuit as a generative model and train it using the maximum mean discrepancy (MMD) distance as a loss function.
import iqpopt.gen_qml as gen
from iqpopt.gen_qml.utils import median_heuristic
n_qubits = 10
# toy dataset of low-weight bitstrings
X_train = np.random.binomial(1, 0.5, size=(1000, n_qubits))
X_train = X_train[np.where(X_train.sum(axis=1) < 5)]
gates = local_gates(n_qubits, 2)
circuit = iqp.IqpSimulator(n_qubits, gates)
params_init = np.random.normal(0, 1/np.sqrt(n_qubits), len(gates))
loss = gen.mmd_loss_iqp  # MMD loss
sigma = median_heuristic(X_train)  # kernel bandwidth for the MMD
loss_kwargs = {
    "params": params_init,
    "iqp_circuit": circuit,
    "ground_truth": X_train,
    "sigma": sigma,
    "n_ops": 1000,
    "n_samples": 1000,
}
trainer = iqp.Trainer("Adam", loss, stepsize=0.01)
trainer.train(n_iters=500, loss_kwargs=loss_kwargs)
params = trainer.final_params
plt.plot(trainer.losses)
plt.show()
The MMD loss is estimated using a Monte Carlo method; larger values of `n_ops` and `n_samples` result in more precise estimates. For small circuits, we can generate new samples:
samples = circuit.sample(params, shots=100)
For large circuits this is not tractable due to the complexity of sampling from IQP distributions.
To evaluate the model, we can use the MMD distance to a test set, or the Kernel Generalized Empirical Likelihood (KGEL); see Suman Ravuri et al., *Understanding Deep Generative Models with Generalized Empirical Likelihoods*. For the MMD distance, a minimal sketch (assuming `gen.mmd_loss_iqp` accepts an explicit `key`, as the custom loss above does):
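# held-out data from the same distribution
X_eval = np.random.binomial(1, 0.5, size=(1000, n_qubits))
X_eval = X_eval[np.where(X_eval.sum(axis=1) < 5)]
test_mmd = gen.mmd_loss_iqp(params=params, iqp_circuit=circuit, ground_truth=X_eval,
                            sigma=sigma, n_ops=1000, n_samples=1000,
                            key=jax.random.PRNGKey(0))

For the KGEL: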
# test points from the same distribution
X_test = np.random.binomial(1, 0.5, size=(1000, n_qubits))
X_test = X_test[np.where(X_test.sum(axis=1) < 5)]
n_witness = 10
witness_points = X_test[-n_witness:]  # witness points for KGEL
test_data = X_test[:-n_witness]  # test data for KGEL
kgel, p_kgel = gen.kgel_opt_iqp(circuit, params, witness_points, test_data,
sigma, n_ops=1000, n_samples=1000, key=jax.random.PRNGKey(42))
The parameter `repeats` increases the precision of the estimation of the mean embeddings used in the definition of the KGEL. In practice, one often needs high precision in order for the KGEL optimization to succeed.
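For example, assuming `repeats` is a keyword argument of `kgel_opt_iqp`:

kgel, p_kgel = gen.kgel_opt_iqp(circuit, params, witness_points, test_data,
                                sigma, n_ops=1000, n_samples=1000,
                                key=jax.random.PRNGKey(42), repeats=10)  # more repeats: more precise mean embeddings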
`p_kgel` is the optimal probability distribution returned by the convex optimization.