Merge pull request #426 from gizatechxyz/develop
Merge Develop into Main
raphaelDkhn authored Nov 5, 2023
2 parents e67dc3d + 3215cc8 commit bc80875
Showing 297 changed files with 9,780 additions and 174 deletions.
9 changes: 9 additions & 0 deletions .all-contributorsrc
@@ -188,6 +188,15 @@
"contributions": [
"code"
]
},
{
"login": "andresmayorca",
"name": "Andres",
"avatar_url": "https://avatars.githubusercontent.com/u/70079260?v=4",
"profile": "http://andresmayorca.github.io",
"contributions": [
"code"
]
}
],
"contributorsPerLine": 7,
2 changes: 2 additions & 0 deletions .gitignore
@@ -6,3 +6,5 @@ target
__pycache__

neural_network/.venv

Scarb.lock
3 changes: 2 additions & 1 deletion README.md
@@ -23,7 +23,7 @@

# Orion: An Open-source Framework for Validity and ZK ML ✨
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-20-orange.svg?style=flat-square)](#contributors-)
[![All Contributors](https://img.shields.io/badge/all_contributors-21-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->

Orion is an open-source, community-driven framework dedicated to Provable Machine Learning. It provides essential components and a new ONNX runtime for building verifiable Machine Learning models using [STARKs](https://starkware.co/stark/).
@@ -93,6 +93,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<td align="center" valign="top" width="14.28%"><a href="https://github.com/0xfulanito"><img src="https://avatars.githubusercontent.com/u/145947367?v=4?s=100" width="100px;" alt="0xfulanito"/><br /><sub><b>0xfulanito</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=0xfulanito" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/0x73e"><img src="https://avatars.githubusercontent.com/u/132935850?v=4?s=100" width="100px;" alt="0x73e"/><br /><sub><b>0x73e</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=0x73e" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://brilliantblocks.io"><img src="https://avatars.githubusercontent.com/u/25390947?v=4?s=100" width="100px;" alt="Thomas S. Bauer"/><br /><sub><b>Thomas S. Bauer</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=TsBauer" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://andresmayorca.github.io"><img src="https://avatars.githubusercontent.com/u/70079260?v=4?s=100" width="100px;" alt="Andres"/><br /><sub><b>Andres</b></sub></a><br /><a href="https://github.com/gizatechxyz/orion/commits?author=andresmayorca" title="Code">💻</a></td>
</tr>
</tbody>
</table>
20 changes: 0 additions & 20 deletions Scarb.lock

This file was deleted.

6 changes: 6 additions & 0 deletions docs/CHANGELOG.md
@@ -4,6 +4,12 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).


## [Unreleased] - 2023-11-03

## Added
- Scatter Elements Operator.

## [Unreleased] - 2023-09-27

## Added
5 changes: 5 additions & 0 deletions docs/SUMMARY.md
@@ -40,7 +40,9 @@
* [Tensor](framework/operators/tensor/README.md)
* [tensor.new](framework/operators/tensor/tensor.new.md)
* [tensor.at](framework/operators/tensor/tensor.at.md)
* [tensor.min_in_tensor](framework/operators/tensor/tensor.min_in_tensor.md)
* [tensor.min](framework/operators/tensor/tensor.min.md)
* [tensor.max\_in\_tensor](framework/operators/tensor/tensor.max\_in\_tensor.md)
* [tensor.max](framework/operators/tensor/tensor.max.md)
* [tensor.stride](framework/operators/tensor/tensor.stride.md)
* [tensor.ravel\_index](framework/operators/tensor/tensor.ravel\_index.md)
@@ -82,6 +84,7 @@
* [tensor.gather](framework/operators/tensor/tensor.gather.md)
* [tensor.quantize\_linear](framework/operators/tensor/tensor.quantize\_linear.md)
* [tensor.dequantize\_linear](framework/operators/tensor/tensor.dequantize\_linear.md)
* [tensor.qlinear\_matmul](framework/operators/tensor/tensor.qlinear\_matmul.md)
* [tensor.nonzero](framework/operators/tensor/tensor.nonzero.md)
* [tensor.squeeze](framework/operators/tensor/tensor.squeeze.md)
* [tensor.unsqueeze](framework/operators/tensor/tensor.unsqueeze.md)
@@ -90,6 +93,8 @@
* [tensor.identity](framework/operators/tensor/tensor.identity.md)
* [tensor.and](framework/operators/tensor/tensor.and.md)
* [tensor.where](framework/operators/tensor/tensor.where.md)
* [tensor.round](framework/operators/tensor/tensor.round.md)
* [tensor.scatter](framework/operators/tensor/tensor.scatter.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
9 changes: 8 additions & 1 deletion docs/framework/compatibility.md
@@ -56,6 +56,7 @@ You can see below the list of current supported ONNX Operators:
| [Gather](operators/tensor/tensor.gather.md) | :white\_check\_mark: |
| [QuantizeLinear](operators/tensor/tensor.quantize\_linear.md) | :white\_check\_mark: |
| [DequantizeLinear](operators/tensor/tensor.dequantize\_linear.md) | :white\_check\_mark: |
| [QLinearMatmul](operators/tensor/tensor.qlinear\_matmul.md) | :white\_check\_mark: |
| [Nonzero](operators/tensor/tensor.nonzero.md) | :white\_check\_mark: |
| [Squeeze](operators/tensor/tensor.squeeze.md) | :white\_check\_mark: |
| [Unsqueeze](operators/tensor/tensor.unsqueeze.md) | :white\_check\_mark: |
@@ -66,6 +67,12 @@ You can see below the list of current supported ONNX Operators:
| [Xor](operators/tensor/tensor.xor.md) | :white\_check\_mark: |
| [Or](operators/tensor/tensor.or.md) | :white\_check\_mark: |
| [Gemm](operators/neural-network/nn.gemm.md) | :white\_check\_mark: |
| [MinInTensor](operators/tensor/tensor.min\_in\_tensor.md) | :white\_check\_mark: |
| [Min](operators/tensor/tensor.min.md) | :white\_check\_mark: |
| [Where](operators/tensor/tensor.where.md) | :white\_check\_mark: |
| [Round](operators/tensor/tensor.round.md) | :white\_check\_mark: |
| [MaxInTensor](operators/tensor/tensor.max\_in\_tensor.md) | :white\_check\_mark: |
| [Max](operators/tensor/tensor.max.md) | :white\_check\_mark: |
| [Scatter](operators/tensor/tensor.scatter.md) | :white\_check\_mark: |

Current Operators support: **61/156 (39%)**
Current Operators support: **68/156 (43%)**
5 changes: 5 additions & 0 deletions docs/framework/operators/tensor/README.md
@@ -50,6 +50,8 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.xor`](tensor.xor.md) | Computes the logical XOR of two tensors element-wise. |
| [`tensor.stride`](tensor.stride.md) | Computes the stride of each dimension in the tensor. |
| [`tensor.onehot`](tensor.onehot.md) | Produces one-hot tensor based on input. |
| [`tensor.max_in_tensor`](tensor.max\_in\_tensor.md) | Returns the maximum value in the tensor. |
| [`tensor.min_in_tensor`](tensor.min\_in\_tensor.md) | Returns the minimum value in the tensor. |
| [`tensor.min`](tensor.min.md) | Returns the element-wise minimum values from a list of input tensors. |
| [`tensor.max`](tensor.max.md) | Returns the element-wise maximum values from a list of input tensors. |
| [`tensor.reduce_sum`](tensor.reduce\_sum.md) | Reduces a tensor by summing its elements along a specified axis. |
@@ -77,6 +79,7 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.concat`](tensor.concat.md) | Concatenate a list of tensors into a single tensor. |
| [`tensor.quantize_linear`](tensor.quantize\_linear.md) | Quantizes a Tensor to i8 using linear quantization. |
| [`tensor.dequantize_linear`](tensor.dequantize\_linear.md) | Dequantizes an i8 Tensor using linear dequantization. |
| [`tensor.qlinear_matmul`](tensor.qlinear\_matmul.md) | Performs the product of two quantized i8 Tensors. |
| [`tensor.gather`](tensor.gather.md) | Gather entries of the axis dimension of data. |
| [`tensor.nonzero`](tensor.nonzero.md) | Produces indices of the elements that are non-zero (in row-major order - by dimension). |
| [`tensor.squeeze`](tensor.squeeze.md) | Removes dimensions of size 1 from the shape of a tensor. |
@@ -86,6 +89,8 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.and`](tensor.and.md) | Computes the logical AND of two tensors element-wise. |
| [`tensor.identity`](tensor.identity.md) | Return a Tensor with the same shape and contents as input. |
| [`tensor.where`](tensor.where.md) | Return elements chosen from x or y depending on condition. |
| [`tensor.round`](tensor.round.md) | Computes the round value of all elements in the input tensor. |
| [`tensor.scatter`](tensor.scatter.md) | Produces a copy of the input data, and updates its values at the index positions specified by indices with the values given in updates. |
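
The `max`/`min` and `max_in_tensor`/`min_in_tensor` entries above are easy to confuse: the `*_in_tensor` variants reduce a single tensor to one scalar, while `max`/`min` combine several tensors element-wise. A minimal sketch of the difference, assuming `u32` tensors and the signatures documented in the operator pages further down (the function name is illustrative):

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn max_variants_example() -> (u32, Tensor<u32>) {
    let a = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let b = TensorTrait::new(shape: array![2, 2].span(), data: array![3, 0, 1, 2].span(),);

    // Reduction: the single largest value held by `a` -> 3.
    let scalar_max = a.max_in_tensor();

    // Element-wise: the largest value at each position across `a` and `b` -> [3, 1, 2, 3].
    let tensor_max = TensorTrait::max(tensors: array![a, b].span());

    return (scalar_max, tensor_max);
}
```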

## Arithmetic Operations

56 changes: 43 additions & 13 deletions docs/framework/operators/tensor/tensor.max.md
@@ -1,33 +1,63 @@
# tensor.max

```rust
fn max(tensors: Span<Tensor<T>>) -> Tensor<T>;
```

Returns the element-wise maximum values from a list of input tensors.
The input tensors must have either:
* Exactly the same shape, or
* The same number of dimensions, where the length of each dimension is either a common length or 1.

## Args

* `tensors`(`Span<Tensor<T>>`) - Array of the input tensors.

## Returns

A new `Tensor<T>` containing the element-wise maximum values.

## Panics

* Panics if the tensor array is empty.
* Panics if the shapes are not equal or broadcastable.

## Examples

Case 1: Process tensors with the same shape

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn max_example() -> Tensor<u32> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 3, 1, 2].span(),);
    let result = TensorTrait::max(tensors: array![tensor1, tensor2].span());
    return result;
}
>>> [0, 3, 2, 3]

result.shape
>>> (2, 2)
```

Case 2: Process tensors with different shapes

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn max_example() -> Tensor<u32> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let tensor2 = TensorTrait::new(shape: array![1, 2].span(), data: array![1, 4].span(),);
    let result = TensorTrait::max(tensors: array![tensor1, tensor2].span());
    return result;
}
>>> [1, 4, 2, 4]

result.shape
>>> (2, 2)
```
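
Since `max` takes a `Span<Tensor<T>>`, it is not limited to pairs of tensors. A short sketch with three same-shape inputs, assuming the span may hold any number of tensors (as the signature and the empty-array panic above imply); the function name is illustrative:

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn max_of_three_example() -> Tensor<u32> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 3, 1, 2].span(),);
    let tensor3 = TensorTrait::new(shape: array![2, 2].span(), data: array![5, 0, 0, 0].span(),);

    // Element-wise maximum across all three inputs.
    return TensorTrait::max(tensors: array![tensor1, tensor2, tensor3].span());
}
>>> [5, 3, 2, 3]
```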
33 changes: 33 additions & 0 deletions docs/framework/operators/tensor/tensor.max_in_tensor.md
@@ -0,0 +1,33 @@
# tensor.max_in_tensor

```rust
fn max_in_tensor(self: @Tensor<T>) -> T;
```

Returns the maximum value in the tensor.

## Args

* `self`(`@Tensor<T>`) - The input tensor.

## Returns

The maximum `T` value in the tensor.

## Examples

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn max_in_tensor_example() -> u32 {
    let tensor = TensorTrait::new(
        shape: array![2, 2, 2].span(), data: array![0, 1, 2, 3, 4, 5, 6, 7].span(),
    );

    // We can call the `max_in_tensor` function as follows.
    return tensor.max_in_tensor();
}
>>> 7
```
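
These single-tensor reductions compose naturally. A sketch, assuming a `u32` tensor, that pairs `max_in_tensor` with `min_in_tensor` (documented below) to compute the spread of values in a tensor; the function name is illustrative:

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn value_range_example() -> u32 {
    let tensor = TensorTrait::new(
        shape: array![2, 2, 2].span(), data: array![0, 1, 2, 3, 4, 5, 6, 7].span(),
    );

    // Largest minus smallest element: 7 - 0 = 7.
    return tensor.max_in_tensor() - tensor.min_in_tensor();
}
>>> 7
```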
55 changes: 42 additions & 13 deletions docs/framework/operators/tensor/tensor.min.md
@@ -1,34 +1,63 @@
# tensor.min

```rust
fn min(tensors: Span<Tensor<T>>) -> Tensor<T>;
```

Returns the element-wise minimum values from a list of input tensors.
The input tensors must have either:
* Exactly the same shape, or
* The same number of dimensions, where the length of each dimension is either a common length or 1.

## Args

* `tensors`(`Span<Tensor<T>>`) - Array of the input tensors.

## Returns

A new `Tensor<T>` containing the element-wise minimum values.

## Panics

* Panics if the tensor array is empty.
* Panics if the shapes are not equal or broadcastable.

## Examples

Case 1: Process tensors with the same shape

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn min_example() -> Tensor<u32> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let tensor2 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 3, 1, 2].span(),);
    let result = TensorTrait::min(tensors: array![tensor1, tensor2].span());
    return result;
}
>>> [0, 1, 1, 2]

result.shape
>>> (2, 2)
```

Case 2: Process tensors with different shapes

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn min_example() -> Tensor<u32> {
    let tensor1 = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 1, 2, 3].span(),);
    let tensor2 = TensorTrait::new(shape: array![1, 2].span(), data: array![1, 4].span(),);
    let result = TensorTrait::min(tensors: array![tensor1, tensor2].span());
    return result;
}
>>> [0, 1, 1, 4]

result.shape
>>> (2, 2)
```
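
A common use of the broadcasting behaviour above is capping every element against a single threshold. A sketch, assuming a 1x1 threshold tensor broadcasts against a 2x2 input under the rules listed earlier; the function name is illustrative:

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn cap_values_example() -> Tensor<u32> {
    let values = TensorTrait::new(shape: array![2, 2].span(), data: array![0, 5, 2, 9].span(),);
    // A 1x1 tensor broadcasts against the 2x2 shape under the rules above.
    let cap = TensorTrait::new(shape: array![1, 1].span(), data: array![3].span(),);

    // Every element larger than 3 is replaced by 3.
    return TensorTrait::min(tensors: array![values, cap].span());
}
>>> [0, 3, 2, 3]
```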
34 changes: 34 additions & 0 deletions docs/framework/operators/tensor/tensor.min_in_tensor.md
@@ -0,0 +1,34 @@
# tensor.min_in_tensor

```rust
fn min_in_tensor(self: @Tensor<T>) -> T;
```

Returns the minimum value in the tensor.

## Args

* `self`(`@Tensor<T>`) - The input tensor.

## Returns

The minimum `T` value in the tensor.

## Examples

```rust
use array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, U32Tensor};

fn min_in_tensor_example() -> u32 {
    let tensor = TensorTrait::new(
        shape: array![2, 2, 2].span(),
        data: array![0, 1, 2, 3, 4, 5, 6, 7].span(),
    );

    // We can call the `min_in_tensor` function as follows.
    return tensor.min_in_tensor();
}
>>> 0
```