
Tflite backend #93

Open: neil-tan wants to merge 125 commits into develop

Conversation

neil-tan (Member) commented Nov 29, 2019

Install

Using a virtual environment is a good idea.

git clone git@github.com:uTensor/utensor_cgen.git
cd utensor_cgen
git checkout tflite_backend
pip install -e .[dev]

Running Test

pytest -s tests/tflm

This is semi-working for:

input0 = [1,1,1,1].transpose # set at run time
input1 = [2,4,6,8].transpose
weight = [10,20,30,40]
bias = [7]

add_output = ADD(input0, input1)
out = FC(add_output, weight, bias)

out == 707
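As a sanity check on that expected value, the two-op graph (ADD followed by a fully-connected layer) can be reproduced in plain Python; this is a verification sketch only, not the actual uTensor test:

```python
# Inputs to the two-op test graph.
input0 = [1, 1, 1, 1]   # set at run time
input1 = [2, 4, 6, 8]
weight = [10, 20, 30, 40]
bias = 7

# ADD: elementwise sum -> [3, 5, 7, 9]
add_output = [a + b for a, b in zip(input0, input1)]

# FC: dot product with the weights plus bias
# 3*10 + 5*20 + 7*30 + 9*40 + 7 = 707
out = sum(a * w for a, w in zip(add_output, weight)) + bias

print(out)  # 707
```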

Notes

Unexplained behavior: when the first tensor in the subgraph, input0, is not set at run time, it is initialized to undefined values instead of the values stored in the FlatBuffer. See here.

@dboyliao I could use some input on the Python coding style/design patterns. Keep in mind, I barely got this to work. Perhaps use a registry/factory pattern to support additional ops? The imports are a mess.
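One possible shape for the registry/factory pattern suggested above: a decorator maps each TFLite op name to its exporter, and a single dispatch function does the lookup. All names here (register_op, export_op, the exporter functions) are hypothetical illustrations, not part of utensor_cgen:

```python
# Hypothetical op-exporter registry; names are illustrative only.
_OP_EXPORTERS = {}

def register_op(op_type):
    """Decorator that maps a TFLite op name to an exporter function."""
    def wrapper(fn):
        _OP_EXPORTERS[op_type] = fn
        return fn
    return wrapper

@register_op("ADD")
def export_add(op_info):
    return f"exported ADD: {op_info}"

@register_op("FULLY_CONNECTED")
def export_fc(op_info):
    return f"exported FULLY_CONNECTED: {op_info}"

def export_op(op_type, op_info):
    """Dispatch to the registered exporter; fail loudly on unknown ops."""
    try:
        return _OP_EXPORTERS[op_type](op_info)
    except KeyError:
        raise NotImplementedError(f"no exporter registered for {op_type}")
```

Adding support for a new op then only requires defining one decorated function, which would also let each exporter live in its own module and cut down on the import tangle.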

The files to look at are:

  • tests/tflm/tflite_export/conftest.py
  • tests/tflm/tflite_export/test_write.py
  • utensor_cgen/transformer/tflite_exporter.py

There are 3 types of quantization flows in TFLM:

  • None (inputs and outputs are floating point)
  • Hybrid (one input is floating point, the others are quantized)
  • Offline (everything is quantized, including activations; activation ranges are determined offline, see the API and the Colab)

Hybrid quantization seems broken at the moment, and uTensor-cli requires additional work to support offline quantization. So, no quantization is the way to go for now.
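For context on the offline flow: TFLite's affine quantization scheme relates a real value r to an integer q via r = (q - zero_point) * scale, with scale chosen offline from the observed activation range. A minimal sketch of that standard formula (not uTensor code):

```python
def quantize(x, scale, zero_point):
    """Affine quantization to int8 range, per r = (q - zero_point) * scale."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to int8

def dequantize(q, scale, zero_point):
    """Recover the (approximate) real value from the quantized integer."""
    return (q - zero_point) * scale

# Example: scale/zero_point would be determined offline from activation ranges.
scale, zero_point = 0.05, 0
q = quantize(3.0, scale, zero_point)        # 60
x = dequantize(q, scale, zero_point)        # approximately 3.0
print(q, x)
```

In the offline flow these (scale, zero_point) pairs are stored per tensor in the FlatBuffer, which is why the intermediate tensors also need quantization params.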

Useful Docs:

Recent commits (from the PR's commit list):

  • next: op code
  • quantization param for intermediate tensors?
  • layout the test mechanism
  • making third_party/tflite all importable
  • modified test graph (the input tensor not showing up in the print-graph)
  • included flatbuffers python in utensor_cgen/third_party/flatbuffers
  • added file_identifier: TFL3
  • changed schema version field to: 3
  • bugfix in conftest.py
  • test_write.py WIP
  • hybrid quantization failed
  • add op (the first tensor still fails)