Releases: zama-ai/concrete-ml
v0.1.0
Summary
First release of the Concrete-ML package
Links
Docker Image: zamafhe/concrete-ml:v0.1.0
pip: https://pypi.org/project/concrete-ml/0.1.0
Documentation: https://docs.zama.ai/concrete-ml/0.1.0
Feature
- Add tests for more torch functions that are supported, mention them in the docs (0478854)
- Add FHE in xgboost notebook (1367d4e)
- Make all classifier demos run in FHE for the datasets and in VL for the domain grid (d95af58)
- Remove workaround reshape and remaining 3dmatmul (28ea1eb)
- Change predict to predict_proba for average_precision (a057881)
- Allow FHE on xgboost (7b5c118)
- Add CNN notebook (4acca2f)
- Optimize QuantizedAdd to use TLUs when one of the inputs is a constant (1ffcdfb)
- Different n_bits for weights/activations/outputs (321d151)
- Add virtual lib management to SklearnLinearModelMixin (596d16e)
- Add quantized CNN (1a78593)
- Start refactoring tree-based models (8e62cf8)
- Set symmetric quantization by default in PTQ (8fcd307)
- Add random forest + benchmark (5630f17)
- Allow base_score with xgboost (17d5cc4)
- Add predict_proba to logistic regression (9aaeec5)
- Add xgboost (699603d)
- Add NN regression benchmarks (9de2ba4)
- Add symmetric quantization (needed for tree output) (4a173ee)
- Implement LinearSVC (d048077)
- Implement LinearSVRegression (36df77e)
- Remove identity nodes from ONNX models (9719c08)
- Add binary + multiclass logistic regression (85c25df)
- Improve r2 test for low variance targets (44ec0b3)
- Add sklearn linear regression model (060a4c6)
- Add virtual lib basic class (ad32509)
- Improve NN benchmarks (ae8313e)
- Add NN benchmarks and sklearn wrapper for FHE NNs (e73a514)
- More efficient numpy_gemm, since traced (609f1df)
- Integrate Hummingbird (01c3a4a)
- Add ONNX quantized implementation for MatMul and Add (716fc43)
- Allow multiple inputs for a QuantizedModule (1fa530d)
- Allow QuantizedModule to handle complicated NN topologies (da91e40)
- Allow (alpha, beta) == (1, 0) in Gemm (4b9927a)
- Manage constant folding in PTQ (a0c56d7)
- Replace numpy.isclose with r2 score (65f0a6e)
- Replace the torch quantization functions with ones usable with ONNX (ecdeb50)
- Add test when input is float to quantized module (d58910d)
- Let the user choose the error type (e5d7440)
- Post-training quantization for ONNX repr (8b051df)
- Add more activations and numpy functions (73d885c)
- Add relu and relu6 (f64c3bf)
- Add quantized tanh (ca9c6e5)
- Add classification benchmarks, fix bugs in DecisionTreeClassifier (d66d7bf)
- Provide quantized versions of ONNX ops (b63eca2)
- Add darglint as a plugin of flake8 (bb568e2)
- Use ONNX as intermediate format to convert torch models to numpy (072bd63)
- Add decision trees + update notebook (db163f5)
- Restore quantized model benchmarks (d1cfc4e)
- Port quantization and torch from concrete-numpy (a525e8b)
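Two of the entries above add symmetric quantization and make it the default for post-training quantization (PTQ). As a library-independent sketch of what symmetric n-bit quantization computes, here is a minimal NumPy illustration; the function names and the per-tensor scale choice are illustrative assumptions, not Concrete-ML's actual implementation:

```python
import numpy as np

def symmetric_quantize(values: np.ndarray, n_bits: int):
    """Symmetric n-bit quantization: the zero-point is fixed at 0 and a
    single scale is derived from the largest absolute value in the tensor."""
    q_max = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.abs(values).max() / q_max     # one scale for the whole tensor
    quantized = np.clip(np.round(values / scale), -q_max - 1, q_max)
    return quantized.astype(np.int32), scale

def dequantize(quantized: np.ndarray, scale: float) -> np.ndarray:
    """Map integer codes back to approximate real values."""
    return quantized.astype(np.float64) * scale

weights = np.array([-1.0, -0.5, 0.0, 0.25, 1.0])
q, s = symmetric_quantize(weights, n_bits=8)

# The round-trip error is bounded by half a quantization step per element.
assert np.all(np.abs(dequantize(q, s) - weights) <= s / 2 + 1e-12)
```

Keeping the zero-point at 0 is what makes this scheme convenient for tree outputs and integer-only arithmetic: dequantization is a single multiply, with no offset term to carry through encrypted computations.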
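One entry above switches from predict to predict_proba when computing average precision. The reason is general to scikit-learn, not specific to this library: average_precision_score ranks samples by a continuous score, so passing hard 0/1 labels collapses the precision-recall curve. A plain scikit-learn illustration (the dataset and model here are arbitrary examples, not Concrete-ML code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression().fit(X, y)

# Hard labels discard the ranking information the metric needs.
ap_from_labels = average_precision_score(y, clf.predict(X))

# Probabilities of the positive class preserve the full ranking.
ap_from_scores = average_precision_score(y, clf.predict_proba(X)[:, 1])

assert 0.0 <= ap_from_labels <= 1.0 and 0.0 <= ap_from_scores <= 1.0
```

With hard labels, every positive prediction is tied at the same score, so the metric degenerates toward precision at a single operating point instead of summarizing the whole curve.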
Fix
- Remove fixmes, add HardSigmoid (847db99)
- Docs (8096acc)
- Safer default parameter for ensemble methods (8da0988)
- Increase n_bits for clear vs quantized comparison for decision tree (b9f1206)
- Fix notebook on macOS + some warnings (ab2a821)
- XGBoost: handle the edge case where n_estimators = 1 (3673584)
- Issues in Classifier Comparison notebook (3053085)
- One more bug about convergence (c6cee4e)
- Fix convergence issues in tests (7b92bd8)
- Remove metric evaluation for n_bits < 16 (7c4bd0e)
- Wrong xgboost init (2ed49b6)
- Workaround while #518 is being investigated (7f521f9)
- Looks like a mistake (69e9b15)
- Speed up qnn tests (9d07f5c)
- Workaround for segfaults on macOS (798662f)
- Remove check_r2_score with argmax predictions (7d52750)
- Review (82abb12)
- Fully connected notebook ([1f7b92e](1f7b92e2623ebf45...