Releases: pasqal-io/qadence
v1.2.1
What's Changed
- [Engines, Backends] Add JAX Engine and Horqrux Backend by @dominikandreasseitz in #111
- Bump actions/setup-python from 4 to 5 by @dependabot in #248
- [Fix] Add mitigation tolerance by @dominikandreasseitz in #251
- [Fix] Handling transformation of multi-dimensional input in Transform… by @smitchaudhary in #254
- Include observable feature params in `model.inputs` by @nmheim in #258
New Contributors
- @smitchaudhary made their first contribution in #254
Full Changelog: v1.2.0...v1.2.1
---------- IMPORTANT ----------
This release adds a `DifferentiableBackend` in JAX (https://jax.readthedocs.io/en/latest/) along with the `horqrux` backend (https://github.com/pasqal-io/horqrux), a differentiable state vector simulator written in JAX. It supports the differentiation modes `AD` and `GPSR`. At the moment the `horqrux` backend can only be used via the low-level API, which means there is no support for `QuantumModel`s and `ml_tools` yet. You can, however, easily train qadence `QuantumCircuit`s using horqrux/JAX. See the example below (taken from https://github.com/pasqal-io/qadence/blob/main/examples/backends/low_level/horqrux_backend.py):
```python
from __future__ import annotations

from typing import Callable

import jax.numpy as jnp
import optax
from jax import Array, jit, value_and_grad
from numpy.typing import ArrayLike

from qadence.backends import backend_factory
from qadence.blocks.utils import chain
from qadence.circuit import QuantumCircuit
from qadence.constructors import feature_map, hea, total_magnetization
from qadence.types import BackendName, DiffMode

backend = BackendName.HORQRUX

num_epochs = 10
n_qubits = 4
depth = 1

fm = feature_map(n_qubits)
circ = QuantumCircuit(n_qubits, chain(fm, hea(n_qubits, depth=depth)))
obs = total_magnetization(n_qubits)

for diff_mode in [DiffMode.AD, DiffMode.GPSR]:
    bknd = backend_factory(backend, diff_mode)
    conv_circ, conv_obs, embedding_fn, vparams = bknd.convert(circ, obs)
    init_params = vparams.copy()
    optimizer = optax.adam(learning_rate=0.001)
    opt_state = optimizer.init(vparams)

    loss: Array
    grads: dict[str, Array]  # 'grads' is the same datatype as 'params'
    inputs: dict[str, Array] = {"phi": jnp.array(1.0)}

    def optimize_step(params: dict[str, Array], opt_state: Array, grads: dict[str, Array]) -> tuple:
        updates, opt_state = optimizer.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state

    def exp_fn(params: dict[str, Array], inputs: dict[str, Array] = inputs) -> ArrayLike:
        return bknd.expectation(conv_circ, conv_obs, embedding_fn(params, inputs))

    init_pred = exp_fn(vparams)

    def mse_loss(params: dict[str, Array], y_true: Array) -> Array:
        expval = exp_fn(params)
        return (expval - y_true) ** 2

    @jit
    def train_step(
        params: dict,
        opt_state: Array,
        y_true: Array = jnp.array(1.0, dtype=jnp.float64),
        loss_fn: Callable = mse_loss,
    ) -> tuple:
        loss, grads = value_and_grad(loss_fn)(params, y_true)
        params, opt_state = optimize_step(params, opt_state, grads)
        return loss, params, opt_state

    for epoch in range(num_epochs):
        loss, vparams, opt_state = train_step(vparams, opt_state)
        print(f"epoch {epoch} loss: {loss}")

    final_pred = exp_fn(vparams)

    print(
        f"diff_mode '{diff_mode}': Initial prediction: {init_pred}, initial vparams: {init_params}"
    )
    print(f"Final prediction: {final_pred}, final vparams: {vparams}")
    print("----------")
```
v1.2.0
What's Changed
- [Tests] Extra test for local analog rotations by @jpmoutinho in #223
- [Feature] Projector blocks by @Roland-djee in #208
- [Feature] Add semi-local addressing by @vytautas-a in #184
- [Infra] Run pipeline on windows and mac by @awennersteen in #222
- [Refac] Undeprecate total_magnetization by @dominikandreasseitz in #232
- [Docs] Unpin mkdocs by @dominikandreasseitz in #235
- [Testing] Fix addressing flaky test and speed up other by @jpmoutinho in #237
- [Fix] Validate state and parameter batch sizes by @dominikandreasseitz in #212
- [Refactoring, Feature] More modular add interaction and qubit device prototype by @jpmoutinho in #176
- [Fix] Dagger Methods in ControlBlocks and Scaleblock by @vincentelfving in #219
- [BugFix] Fix parameters arithmetic. by @Roland-djee in #243
- [Feature] Allow turning off the semi-local addressing pattern in pyqtorch backend by @jpmoutinho in #244
- [Feature] Adding MLE implementation by @rajaiitp in #233
- [Feature] Analog feature maps by @madagra in #234
Breaking
The `QNN` constructor now requires an `inputs` argument if the number of feature parameters in the given circuit is greater than 1:
```python
import torch
import qadence as qd
from qadence import QNN

fm = qd.kron(
    qd.feature_map(2, support=(0, 1), param="x"),
    qd.feature_map(2, support=(2, 3), param="y"),
)

ufa = QNN(
    qd.QuantumCircuit(4, fm, qd.hea(4, 2)),
    observable=qd.total_magnetization(4),
    inputs=["x", "y"],  # if this is not provided, the constructor will error
)

xs = torch.rand(5, 2)
ufa(xs)
```
If the circuit has one feature parameter or fewer, things work as before. The `inputs` argument is necessary to guarantee the ordering of the variables in the tensors passed to the model: given input tensors `xs = torch.rand(batch_size, input_size:=2)`, a QNN with `inputs=("t", "x")` will assign `t, x = xs[:,0], xs[:,1]`.
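As a minimal illustration of that ordering, continuing from the `ufa` model above (a sketch; nothing here beyond the column-to-parameter mapping is asserted):

```python
import torch

xs = torch.rand(5, 2)  # batch of 5 samples, 2 features per sample

# ufa was built with inputs=["x", "y"], so:
#   column 0 of xs feeds the feature parameter "x"
#   column 1 of xs feeds the feature parameter "y"
x_vals, y_vals = xs[:, 0], xs[:, 1]

out = ufa(xs)  # one expectation value per input sample
```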
Features
Analog feature maps
Constructors for creating feature maps with analog blocks or semi-local addressing patterns.
```python
import qadence as qd

# analog feature map with RX rotation and Chebyshev basis
# number of qubits is not specified since the qubit support is global
fm_analog = qd.analog_feature_map(fm_type=qd.BasisSet.CHEBYSHEV, op=qd.AnalogRX)

# feature map with semi-local addressing and custom weights on each qubit
n_qubits = 4
fm_semilocal = qd.rydberg_feature_map(n_qubits, weights=[0.25, 0.25, 0.5, 0.0])
```
Semi-local addressing patterns
Semi-local addressing patterns can be created either by specifying fixed values for the weights of the addressed qubits or by defining them as trainable parameters that can be optimized later in a training loop.
```python
import torch

from qadence import Register
from qadence.analog import AddressingPattern, IdealDevice  # assumed import path for IdealDevice

n_qubits = 3

# constant weights
w_det = {0: 0.9, 1: 0.5, 2: 1.0}
w_amp = {0: 0.1, 1: 0.4, 2: 0.8}
det = 9.0
amp = 6.5

# trainable weights
w_amp_tr = {i: f"w_amp{i}" for i in range(n_qubits)}
w_det_tr = {i: f"w_det{i}" for i in range(n_qubits)}
amp_tr = "max_amp"
det_tr = "max_det"

# creating pattern
pattern = AddressingPattern(
    n_qubits=n_qubits,
    det=det,
    amp=amp,
    weights_det=w_det,
    weights_amp=w_amp,
)

# pattern specified when creating device instance
device_specs = IdealDevice(pattern=pattern)

reg = Register.line(
    n_qubits,
    spacing=8.0,
    device_specs=device_specs,
)
```
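For the trainable variant, the string identifiers defined above can be passed in place of the fixed values. A minimal sketch, assuming `AddressingPattern` accepts parameter names wherever it accepts floats (as the weight definitions above suggest):

```python
# same constructor, but the weights and global amplitude/detuning
# become trainable parameters identified by their string names
pattern_trainable = AddressingPattern(
    n_qubits=n_qubits,
    det=det_tr,
    amp=amp_tr,
    weights_det=w_det_tr,
    weights_amp=w_amp_tr,
)

device_specs_trainable = IdealDevice(pattern=pattern_trainable)
```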
New Contributors
- @vincentelfving made their first contribution in #219
- @rajaiitp made their first contribution in #233
Full Changelog: v1.1.1...v1.2.0
v1.1.1
v1.1.0
What's Changed
- [Feature] Error mitigation structure + ZNE for Pulser backend by @Roland-djee in #105
- [Feature] Add coords scaling and distance calculation directly to Register by @jpmoutinho in #186
- [Fix] Control breaking changes in pulser backend spacing config by @jpmoutinho in #191
- [Feature] Identity-initialized QNN by @n-toscano in #157
- [Feature] Pyqtorch - First Order Adjoint Differentiation by @dominikandreasseitz in #155
- [Fix] Export Toffoli gate in operations.py by @madagra in #195
- [Refactoring] Improve mitigation protocol by @Roland-djee in #189
New Contributors
- @n-toscano made their first contribution in #157
Full Changelog: v1.0.6...v1.1.0
v1.0.6
What's Changed
- [Fix] Braket CPHASE always having fixed parameters by @dominikandreasseitz in #162
- [Fix] Emulated analog results and add unit tests by @jpmoutinho in #137
- [Docs] Add pydocstringformatter to precommits by @Roland-djee in #167
- [Docs] Add simplified train loop example to docs by @nmheim in #164
- [Refac] Cleanup imports by @dominikandreasseitz in #168
- [Docs] Update emulated analog docs by @jpmoutinho in #166
- [Feature] Added Rydberg HEA by @madagra in #158
- [Refac] Cleaning up deprecated methods and avoid direct torch imports by @dominikandreasseitz in #171
- [Feature] Implementation of noise protocol. by @Roland-djee and @gvelikova in #128
- [Backends] Bump to pyqtorch v1.0.1 by @dominikandreasseitz in #172
- [Fix] Display QuantumModel by @nmheim in #177
- [FIX] Enable Python 3.11 by fixing dataclass defaults by @awennersteen in #173
New Contributors
- @awennersteen made their first contribution in #173
Full Changelog: v1.0.5...v1.0.6
v1.0.5
What's Changed
- [Refactor] Train loop; make sure qadence runs on GPUs correctly by @dominikandreasseitz in #135
Full Changelog: v1.0.4...v1.0.5
v1.0.4
What's Changed
- [Bug fix] Fix pulser initial state passing by @jpmoutinho in #147
- [Docs] Fix spurious link. by @Roland-djee in #151
- [Refac] Move to pyqtorch v1.0.0 by @dominikandreasseitz in #130
- [Fix] Printing of nqubit blocks by @dominikandreasseitz in #156
- [Feat] Add transpilation passes to the backend configuration by @madagra in #115
- [Feat] Cloud interface implementation in pulser backend by @vytautas-a in #117
- [Fix] Pass diffmode in execution by @dominikandreasseitz in #159
Full Changelog: v1.0.3...v1.0.4
v1.0.3
What's Changed
- [Docs] Feature maps and QML reorganization by @jpmoutinho in #92
- Fixes on the documentation + tiny code simplifications by @CdeTerra in #113
- [Feat] Add CSWAP to braket backend by @dominikandreasseitz in #118
- Add local RZ gate for pulser backend by @vytautas-a in #114
- [Docs] Fix visualization by @jpmoutinho in #126
- [Feat] Adding a verbosity option for the display of training metrics by @gvelikova in #129
- Release v1.0.3 by @dominikandreasseitz in #136
New Contributors
- @CdeTerra made their first contribution in #113
- @vytautas-a made their first contribution in #114
- @gvelikova made their first contribution in #129
Full Changelog: v1.0.2...v1.0.3
v1.0.2
What's Changed
- [Docs] Fixed and improved QCL example by @madagra in #91
- [Docs] Docs improvements by @Roland-djee in #82
- Bump actions/checkout from 3 to 4 by @dependabot in #95
- [Infra] Fix docs deployment on new tag by @nmheim in #98
- [Fix] TransformedModule serialization by @dominikandreasseitz in #103
- [Feat] Add dagger method to QuantumCircuit by @dominikandreasseitz in #78
- [Docs] Added install from source paragraph by @madagra in #106
- [Fix] corner case of `chain_single_qubit_ops` by @nmheim in #107
- [Feat] Generalizing and updating the feature_map constructor function by @madagra in #46
- [Docs] Improve README by @dominikandreasseitz in #109
- [Release] v0.1.2 by @dominikandreasseitz in #112
New Contributors
- @dependabot made their first contribution in #95
Full Changelog: v1.0.1...v1.0.2
v1.0.1
What's Changed
- Display correct pip install command in README and index.md by @dominikandreasseitz in #69
- [Testing] Use pytest-xdist by @dominikandreasseitz in #70
- [Docs] Fix links to tutorials in README.md by @madagra in #75
- [Documentation] Add CONTRIBUTING and CODE_OF_CONDUCT by @dominikandreasseitz in #73
- [Infra] Add pypi and license badges by @dominikandreasseitz in #76
- [Docs] Minor docs fixes by @nmheim in #77
- [Testing] Remove test_notebooks by @dominikandreasseitz in #80
- [Bugfix] Use loss.item() in train_with_grad without dataloader by @dominikandreasseitz in #88
- [Release] v1.0.1 by @dominikandreasseitz in #89
Full Changelog: v1.0.0...v1.0.1