This is a non-exhaustive summary of changes to the Edge TPU library, compiler, and runtime.
For pre-built downloads, see coral.ai/software.
- The Edge TPU runtime now supports the PCIe interface on 64-bit Windows 10 systems (x86-64)
- The Edge TPU compiler includes bug fixes and the version number now aligns with our Debian package versions
Source code update, Edge TPU Python library 2.14.0 and Edge TPU compiler 2.1.302470888 (March 2020 "eel")
- C++ support for model pipelining
- Updated the Edge TPU Python library to 2.14.0. New `size` parameter for `run_inference`
- Compiler update (new compiler options `num_segments` and `intermediate_tensors`)
- EfficientNet embedding extractor available for on-device learning with backprop
- Bug fixes
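Model pipelining splits a compiled model into segments that run on separate Edge TPUs, with intermediate tensors handed from one segment to the next. As a conceptual sketch only (pure Python, not the actual C++ pipelining API), each "segment" below is a stage in a thread pipeline connected by queues:

```python
import threading
import queue

def run_pipeline(segments, inputs):
    """Run each segment in its own thread, passing intermediate
    results downstream through queues (a conceptual stand-in for
    pipelined execution across multiple Edge TPUs)."""
    # One queue between each pair of adjacent segments.
    queues = [queue.Queue() for _ in range(len(segments) + 1)]

    def worker(segment, q_in, q_out):
        while True:
            tensor = q_in.get()
            if tensor is None:          # sentinel: shut down this stage
                q_out.put(None)
                break
            q_out.put(segment(tensor))

    threads = [
        threading.Thread(target=worker, args=(seg, queues[i], queues[i + 1]))
        for i, seg in enumerate(segments)
    ]
    for t in threads:
        t.start()
    for x in inputs:                    # feed the first segment
        queues[0].put(x)
    queues[0].put(None)
    results = []
    while True:                         # drain the last queue in order
        out = queues[-1].get()
        if out is None:
            break
        results.append(out)
    for t in threads:
        t.join()
    return results

# Example: a "model" split into two trivial segments.
segments = [lambda x: x * 2, lambda x: x + 1]
print(run_pipeline(segments, [1, 2, 3]))  # [3, 5, 7]
```

Because each stage runs concurrently, a new input can enter segment 1 while the previous input is still in segment 2, which is the throughput benefit pipelining aims for.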
- Added support for ops: ExpandDims, Pack, Quantize, Sum, and TransposeConv
- First runtime release made available for Mac and Windows (USB interface only) — the compiler is still Linux only
- Updated the build based on the latest TensorFlow source (no API changes)
- First release made available for Mac and Windows
- Performance optimizations
- Python API signature style changes.
- New `read_label_file()` utility function.
- Improved support for models built with full integer post-training quantization—especially those built with the Keras API.
- Added support for the DeepLab v3 semantic segmentation model.
- Still compatible with Edge TPU runtime v12.
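Full integer post-training quantization maps each float tensor to 8-bit integers via a scale and zero point. A minimal sketch of that affine mapping (the parameter values below are illustrative, not taken from any real model):

```python
def quantize(x, scale, zero_point):
    """Map a float to int8 using the affine scheme
    real_value = scale * (quantized_value - zero_point)."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))       # clamp to the int8 range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

# Illustrative parameters for a tensor spanning roughly [0, 6].
scale, zero_point = 6.0 / 255, -128
q = quantize(3.0, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

The round trip loses at most about half a quantization step, which is why models quantized this way stay close to their float accuracy.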
- New `SoftmaxRegression` API that allows you to perform transfer learning with on-device backpropagation.
- Re-implementation of the `ImprintingEngine` API so you can keep the pre-trained classes from the provided model, retrain existing classes without starting from scratch, and immediately perform inference without exporting to a new `.tflite` file. For details, see Retrain a classification model on-device with weight imprinting.
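Weight imprinting adds a new class by setting its classifier weights to the normalized average of that class's embeddings, so the model can score the new class immediately, without gradient training. A conceptual pure-Python sketch of the idea (not the `ImprintingEngine` implementation):

```python
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def imprint(embeddings):
    """Average the L2-normalized embeddings of a new class and
    re-normalize; the result becomes that class's weight vector."""
    normed = [l2_normalize(e) for e in embeddings]
    dim = len(normed[0])
    mean = [sum(e[i] for e in normed) / len(normed) for i in range(dim)]
    return l2_normalize(mean)

def classify(weights, embedding):
    """Score each class by cosine similarity (dot product of unit
    vectors) and return the best-matching class index."""
    e = l2_normalize(embedding)
    scores = [sum(w[i] * e[i] for i in range(len(e))) for w in weights]
    return scores.index(max(scores))

# Imprint two classes from a few 2-D example embeddings each.
w0 = imprint([[1.0, 0.1], [0.9, 0.0]])
w1 = imprint([[0.0, 1.0], [0.1, 0.9]])
print(classify([w0, w1], [0.95, 0.05]))  # 0
```

Because each class's weights are independent, adding or retraining one class never disturbs the others, which is why the re-implemented API can keep the pre-trained classes intact.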
- Added support in `libedgetpu.so` for the TensorFlow Lite delegate API, allowing you to perform inferences directly from the TensorFlow Lite Python API (instead of using the Edge TPU Python API). For instructions, see Run inference with TensorFlow Lite in Python.
- Added support for models built with full integer post-training quantization.
- Added support for new EfficientNet-EdgeTPU models.
- New offline Edge TPU Compiler.
- New `edgetpu.h` C++ API to perform inferencing on the Edge TPU, using the TensorFlow Lite C++ API.
- You can now run multiple models with multiple Edge TPUs.
- New `edgetpu.utils.image_processing` APIs to process images before running an inference.
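A common preprocessing step behind such APIs is resizing an image to the model's input size while preserving its aspect ratio. A sketch of just the size computation (the helper name and return shape are illustrative, not the library's exact API):

```python
def fit_size(image_size, target_size):
    """Compute the largest size that fits inside target_size while
    preserving the image's aspect ratio, plus the scaling ratio.
    The remaining area would then be padded to the full input size."""
    width, height = image_size
    target_w, target_h = target_size
    ratio = min(target_w / width, target_h / height)
    return (round(width * ratio), round(height * ratio)), ratio

# A 640x480 image scaled to fit a 224x224 model input.
print(fit_size((640, 480), (224, 224)))  # ((224, 168), 0.35)
```

The returned ratio matters at the other end of the pipeline too: detection boxes predicted on the resized image must be divided by it to map back to the original image.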