Merge pull request #497 from gizatechxyz/develop
Merge Develop into Main
raphaelDkhn authored Dec 10, 2023
2 parents 524ff0b + 64eea8a commit 93c797b
Showing 116 changed files with 5,732 additions and 54 deletions.
12 changes: 11 additions & 1 deletion docs/CHANGELOG.md
@@ -4,11 +4,21 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased] - 2023-12-01

## Added
- Reduce LogSum Operator

## [Unreleased] - 2023-12-05

## Added
- Erf Operator

## [Unreleased] - 2023-11-27

## Added
- Reduce Prod Operator

## [Unreleased] - 2023-11-20

## Added
2 changes: 2 additions & 0 deletions docs/SUMMARY.md
@@ -123,6 +123,8 @@
* [tensor.is\_nan](framework/operators/tensor/tensor.is\_nan.md)
* [tensor.is_inf](framework/operators/tensor/tensor.is\_inf.md)
* [tensor.not](framework/operators/tensor/tensor.not.md)
* [tensor.erf](framework/operators/tensor/tensor.erf.md)
* [tensor.reduce_log_sum](framework/operators/tensor/tensor.reduce_log_sum.md)
* [Neural Network](framework/operators/neural-network/README.md)
* [nn.relu](framework/operators/neural-network/nn.relu.md)
* [nn.leaky\_relu](framework/operators/neural-network/nn.leaky\_relu.md)
4 changes: 3 additions & 1 deletion docs/framework/compatibility.md
@@ -101,6 +101,8 @@ You can see below the list of current supported ONNX Operators:
| [IsNaN](operators/tensor/tensor.is\_nan.md) | :white\_check\_mark: |
| [IsInf](operators/tensor/tensor.is\_inf.md) | :white\_check\_mark: |
| [Not](operators/tensor/tensor.not.md) | :white\_check\_mark: |
| [ReduceLogSum](operators/tensor/tensor.reduce\_log\_sum.md) | :white\_check\_mark: |
| [Erf](operators/tensor/tensor.erf.md) | :white\_check\_mark: |


Current Operators support: **95/156 (60%)**
Current Operators support: **96/156 (62%)**
1 change: 1 addition & 0 deletions docs/framework/numbers/fixed-point/README.md
@@ -69,6 +69,7 @@ use orion::numbers::fixed_point::core::FixedTrait;
| [`fp.sinh`](fp.sinh.md) | Returns the value of the hyperbolic sine of the fixed point number. |
| [`fp.tanh`](fp.tanh.md) | Returns the value of the hyperbolic tangent of the fixed point number. |
| [`fp.sign`](fp.sign.md) | Returns the element-wise indication of the sign of the input fixed point number. |
| [`fp.erf`](fp.erf.md) | Returns the error function of the input fixed point number computed element-wise. |

### Arithmetic & Comparison operators

30 changes: 30 additions & 0 deletions docs/framework/numbers/fixed-point/fp.erf.md
@@ -0,0 +1,30 @@
# fp.erf

```rust
fn erf(self: T) -> T;
```

Returns the error function of the input fixed point number computed element-wise.

## Args

* `self`(`T`) - The input fixed point

## Returns

The error function of the input fixed point number computed element-wise.

## Examples

```rust
use orion::numbers::{FP16x16, FP16x16Impl, FixedTrait};

fn erf_fp_example() -> FP16x16 {
    // We instantiate the fixed point number 1.0 here (raw FP16x16 magnitude 65536).
    let fp = FixedTrait::new(65536, false);

    // We can call the `erf` function as follows.
    fp.erf()
}
>>> {mag: 55227, sign: false} // ≈ 0.8427 = erf(1.0)
```
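The returned magnitude follows from erf(1.0) ≈ 0.8427, since 0.8427 × 2^16 ≈ 55227 in the FP16x16 representation.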
22 changes: 11 additions & 11 deletions docs/framework/operators/neural-network/README.md
@@ -23,15 +23,15 @@ Orion supports currently these `NN` types.

| function | description |
| --- | --- |
| [`nn.relu`](nn.relu.md) | Applies the rectified linear unit function element-wise. |
| [`nn.leaky_relu`](nn.leaky\_relu.md) | Applies the leaky rectified linear unit (Leaky ReLU) activation function element-wise. |
| [`nn.sigmoid`](nn.sigmoid.md) | Applies the Sigmoid function to an n-dimensional input tensor. |
| [`nn.softmax`](nn.softmax.md) | Computes softmax activations. |
| [`nn.logsoftmax`](nn.logsoftmax.md) | Applies the natural log to Softmax function to an n-dimensional input Tensor. |
| [`nn.softsign`](nn.softsign.md) | Applies the Softsign function element-wise. |
| [`nn.softplus`](nn.softplus.md) | Applies the Softplus function element-wise. |
| [`nn.linear`](nn.linear.md) | Performs a linear transformation of the input tensor using the provided weights and bias. |
| [`nn.hard_sigmoid`](nn.hard\_sigmoid.md) | Applies the Hard Sigmoid function to an n-dimensional input tensor. |
| [`nn.thresholded_relu`](nn.thresholded\_relu.md) | Performs the thresholded relu activation function element-wise. |
| [`nn.gemm`](nn.gemm.md) | Performs General Matrix multiplication. |

2 changes: 2 additions & 0 deletions docs/framework/operators/tensor/README.md
@@ -122,6 +122,8 @@ use orion::operators::tensor::TensorTrait;
| [`tensor.is_nan`](tensor.is\_nan.md) | Returns which elements of the input are NaN. |
| [`tensor.is_inf`](tensor.is\_inf.md) | Maps infinity to true and other values to false. |
| [`tensor.not`](tensor.not.md) | Computes the logical negation of all elements in the input tensor. |
| [`tensor.reduce_log_sum`](tensor.reduce\_log\_sum.md) | Computes the log sum of the input tensor's elements along the provided axes. |
| [`tensor.erf`](tensor.erf.md) | Computes the error function of the given input tensor element-wise. |

## Arithmetic Operations

48 changes: 48 additions & 0 deletions docs/framework/operators/tensor/tensor.erf.md
@@ -0,0 +1,48 @@
## tensor.erf

```rust
fn erf(self: @Tensor<T>) -> Tensor<T>;
```

Computes the error function of the given input tensor element-wise.

## Args

* `self`(`@Tensor<T>`) - The input tensor.

## Returns

A new `Tensor<T>` of the same shape as the input tensor with
the error function of the input tensor computed element-wise.

## Type Constraints

Constrain input and output types to fixed point tensors.

## Examples

```rust
use core::array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, FP16x16Tensor};
use orion::numbers::{FixedTrait, FP16x16};

fn erf_example() -> Tensor<FP16x16> {
    // The erf input is [1.0, 0.134, 0.520, 2.0, 3.5, 5.164], given as raw FP16x16 magnitudes.
    let tensor = TensorTrait::<FP16x16>::new(
        shape: array![6].span(),
        data: array![
            FixedTrait::new(65536, false),
            FixedTrait::new(8832, false),
            FixedTrait::new(34079, false),
            FixedTrait::new(131072, false),
            FixedTrait::new(229376, false),
            FixedTrait::new(338428, false),
        ]
            .span(),
    );

    return tensor.erf();
}
>>> [55227,9560,35252,65229,65536,65536]
```
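Dividing the output magnitudes by 2^16 recovers the erf values, e.g. 55227 / 2^16 ≈ 0.8427 = erf(1.0) and 65229 / 2^16 ≈ 0.9953 = erf(2.0); for inputs of 3.5 and above the result saturates at 65536, i.e. 1.0.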
45 changes: 45 additions & 0 deletions docs/framework/operators/tensor/tensor.reduce_log_sum.md
@@ -0,0 +1,45 @@
## tensor.reduce_log_sum

```rust
fn reduce_log_sum(self: @Tensor<T>, axis: usize, keepdims: bool) -> Tensor<T>;
```

Computes the log sum of the input tensor's elements along the provided axes.

## Args

* `self`(`@Tensor<T>`) - The input tensor.
* `axis`(`usize`) - The dimension to reduce.
* `keepdims`(`bool`) - If true, retains reduced dimensions with length 1.

## Panics

* Panics if axis is not in the range of the input tensor's dimensions.

## Returns

A new `Tensor<T>` instance with the specified axis reduced, holding the natural log of the sum of the elements along that axis.

## Examples

```rust
use core::array::{ArrayTrait, SpanTrait};

use orion::operators::tensor::{TensorTrait, Tensor, FP16x16Tensor};
use orion::numbers::{FixedTrait, FP16x16};

fn reduce_log_sum() -> Tensor<FP16x16> {
    let mut sizes = ArrayTrait::new();
    sizes.append(2);
    sizes.append(2);
    sizes.append(2);

    let mut data = ArrayTrait::new();
    data.append(FixedTrait::new_unscaled(1, false));
    data.append(FixedTrait::new_unscaled(2, false));
    data.append(FixedTrait::new_unscaled(3, false));
    data.append(FixedTrait::new_unscaled(4, false));
    data.append(FixedTrait::new_unscaled(5, false));
    data.append(FixedTrait::new_unscaled(6, false));
    data.append(FixedTrait::new_unscaled(7, false));
    data.append(FixedTrait::new_unscaled(8, false));

    let tensor = TensorTrait::<FP16x16>::new(sizes.span(), data.span());

    // We can call `reduce_log_sum` function as follows.
    return tensor.reduce_log_sum(axis: 2, keepdims: false);
}
>>> [[0x11938, 0x1f203], [0x265d9, 0x2b540]]
```
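In FP16x16 these magnitudes decode to roughly ln(1+2) ≈ 1.0986, ln(3+4) ≈ 1.9459, ln(5+6) ≈ 2.3979 and ln(7+8) ≈ 2.7081, i.e. the natural log of the sum taken along the last axis.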
34 changes: 34 additions & 0 deletions nodegen/node/erf.py
@@ -0,0 +1,34 @@
import numpy as np
from math import erf
from nodegen.node import RunAll
from ..helpers import make_test, to_fp, Tensor, Dtype, FixedImpl


class Erf(RunAll):

@staticmethod
def erf_fp8x23():
x = np.asarray([0.12, -1.66, 3.4, 4.8, 2.7]).astype(np.float64).reshape(1,5)
y = np.asarray([erf(value) for value in x[0]]).astype(np.float64).reshape(1,5)

x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))

name = "erf_fp8x23"
make_test([x], y, "input_0.erf()", name)


@staticmethod
def erf_fp16x16():
x = np.asarray([0.12, -1.66, 3.4, 4.8, 2.7]).astype(np.float64).reshape(1,5)
y = np.asarray([erf(value) for value in x[0]]).astype(np.float64).reshape(1,5)

x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))

name = "erf_fp16x16"
make_test([x], y, "input_0.erf()", name)
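For intuition, here is a minimal NumPy-only sketch of the reference values the `erf_fp16x16` case encodes; the `* 2**16` scaling is an assumption about how `to_fp` maps floats to `FixedImpl.FP16x16` magnitudes, not the helper itself:

```python
import numpy as np
from math import erf

# Inputs used by the erf_fp16x16 case above.
x = np.asarray([0.12, -1.66, 3.4, 4.8, 2.7], dtype=np.float64)

# Reference erf values, then scaled to FP16x16 magnitudes (value * 2**16).
y = np.asarray([erf(v) for v in x])
print(np.round(y * 2**16).astype(np.int64))  # negative entries map to sign=true fixed points
```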
117 changes: 117 additions & 0 deletions nodegen/node/reduce_log_sum.py
@@ -0,0 +1,117 @@
import numpy as np
from nodegen.node import RunAll
from ..helpers import make_test, to_fp, Tensor, Dtype, FixedImpl


class Reduce_log_sum(RunAll):
@staticmethod
def reduce_log_sum_fp8x23():
def reduce_log_sum_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=False))

x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))

name = "reduce_log_sum_fp8x23_export_do_not_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(2, false)", name)

def reduce_log_sum_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=True))

x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))

name = "reduce_log_sum_fp8x23_export_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(2, true)", name)

def reduce_log_sum_axis_0():
shape = [3, 3, 3]
axes = np.array([0], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1), shape)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=True))

x = Tensor(Dtype.FP8x23, x.shape, to_fp(
x.flatten(), FixedImpl.FP8x23))
y = Tensor(Dtype.FP8x23, y.shape, to_fp(
y.flatten(), FixedImpl.FP8x23))

name = "reduce_log_sum_fp8x23_export_negative_axes_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(0, true)", name)


reduce_log_sum_export_do_not_keepdims()
reduce_log_sum_export_keepdims()
reduce_log_sum_axis_0()

@staticmethod
def reduce_log_sum_fp16x16():
def reduce_log_sum_export_do_not_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = False
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=False))

x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))

name = "reduce_log_sum_fp16x16_export_do_not_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(2, false)", name)

def reduce_log_sum_export_keepdims():
shape = [3, 2, 2]
axes = np.array([2], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=True))


x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))

name = "reduce_log_sum_fp16x16_export_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(2, true)", name)

def reduce_log_sum_axis_0():
shape = [2, 2, 2]
axes = np.array([0], dtype=np.int64)
keepdims = True
x = np.reshape(np.arange(1, np.prod(shape) + 1, dtype=np.float32), shape).astype(np.int64)
y = np.log(np.sum(x, axis=tuple(axes), keepdims=True))

x = Tensor(Dtype.FP16x16, x.shape, to_fp(
x.flatten(), FixedImpl.FP16x16))
y = Tensor(Dtype.FP16x16, y.shape, to_fp(
y.flatten(), FixedImpl.FP16x16))

name = "reduce_log_sum_fp16x16_export_negative_axes_keepdims"
make_test(
[x], y, "input_0.reduce_log_sum(0, true)", name)


reduce_log_sum_export_do_not_keepdims()
reduce_log_sum_export_keepdims()
reduce_log_sum_axis_0()
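For intuition, a minimal NumPy-only sketch (plain NumPy, none of the repo helpers) of what the `reduce_log_sum(2, false)` case computes before fixed-point encoding:

```python
import numpy as np

# Same input as reduce_log_sum_export_do_not_keepdims above.
x = np.arange(1, 13, dtype=np.float64).reshape(3, 2, 2)

# ReduceLogSum: sum along the reduced axis, then take the natural log.
y = np.log(np.sum(x, axis=2, keepdims=False))
print(y)  # ≈ [[1.0986 1.9459], [2.3979 2.7081], [2.9444 3.1355]]
```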