TFRT Frequently Asked Questions
How can I contribute to TFRT?
Because TFRT is still an early-stage project, we will not be accepting PRs initially. However, we still encourage community participation in the form of bug reports, feature requests, and design and requirements discussions. See the contact section for more details.
How will I be able to use TFRT?
TFRT is still an early-stage project and is not yet integrated with TensorFlow. Once it is, TFRT will be (mostly) invisible to the end user: the infrastructure should just work. Initially, you will be able to run training and inference workloads with TFRT via an opt-in flag. Once we've addressed known issues, stress-tested at scale, and fine-tuned performance, TFRT will be enabled by default.
What hardware will TFRT support?
TFRT is built to make it easy to plug in new devices of various types. Initially, it will support CPUs, GPUs, and TPUs of different flavors. Eventually it will support a variety of other devices that hardware developers want to integrate with TensorFlow. Note that the first release includes CPU support only (TPU support will remain internal to Google); a GPU build will be made available shortly.
How can I add support for new hardware?
TFRT is not quite ready to support the addition of new hardware. However, if you or your team are interested in adding new hardware support to TensorFlow, please reach out to us via the TFRT mailing list.
How can I add a new op?
We are still defining the "adding an op" story and will share more in time. Adding ops is one of the key use cases we are considering in TFRT's modularity and extensibility requirements. To stay informed of our progress, please subscribe to the TFRT mailing list.
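In the meantime, TFRT's tutorial-style test kernels hint at what a low-level kernel definition looks like today. Below is a minimal sketch in that style; the kernel name my.add.i32 and the registration function RegisterMyKernels are illustrative, and the exact interfaces may change as the "adding an op" story firms up.

```cpp
#include <cstdint>

#include "tfrt/host_context/kernel_registry.h"
#include "tfrt/host_context/kernel_utils.h"

namespace my_ops {

// A simple synchronous kernel: takes two i32 arguments and returns their sum.
static int32_t MyAddI32(int32_t a, int32_t b) { return a + b; }

// Registers the kernel under an illustrative name so that host programs can
// invoke it; TFRT_KERNEL adapts the plain C++ function to TFRT's kernel
// calling convention.
void RegisterMyKernels(tfrt::KernelRegistry* registry) {
  registry->AddKernel("my.add.i32", TFRT_KERNEL(MyAddI32));
}

}  // namespace my_ops
```

Under this scheme, a kernel registered this way can be invoked by name from a compiled host program. Treat this as an assumption-laden sketch rather than the final op-extension API.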
Will TFRT support mobile deployments?
TFRT does not currently support mobile deployments. We will share more once we have a more clearly defined mobile plan. Please subscribe to the TFRT mailing list to stay informed.
What platforms does TFRT support?
TFRT currently supports Linux/Ubuntu, but will eventually support all platforms that TensorFlow supports.
What happens to the current TensorFlow stack?
We will continue to prioritize fixing critical bugs, as well as making selective enhancements to the current stack to fulfill short-term user needs.
Where can I learn more about TFRT?
To learn more about TFRT's early progress and wins, check out our Dev Summit presentation, which includes a performance benchmark for small-batch GPU inference on ResNet-50, and our deep dive presentation, which gives a detailed overview of TFRT's core components, low-level abstractions, and general design principles. The TFRT announcement blog post also gives a nice introduction to the new runtime.
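One of the low-level abstractions covered in the deep dive is the AsyncValue, TFRT's core mechanism for representing asynchronously produced values. The snippet below is a rough sketch of how it is used, assuming the HostContext and AsyncValueRef interfaces from the initial open-source code; exact constructor and helper signatures may differ across versions.

```cpp
#include <cstdint>
#include <cstdio>

#include "tfrt/host_context/async_value_ref.h"
#include "tfrt/host_context/concurrent_work_queue.h"
#include "tfrt/host_context/diagnostic.h"
#include "tfrt/host_context/host_allocator.h"
#include "tfrt/host_context/host_context.h"

int main() {
  // Assumed constructor shape: a diagnostic handler, an allocator, and a
  // work queue, following the pattern used in TFRT's unit tests.
  tfrt::HostContext host([](const tfrt::DecodedDiagnostic&) {},
                         tfrt::CreateMallocAllocator(),
                         tfrt::CreateSingleThreadedWorkQueue());

  // An AsyncValue that is immediately available.
  tfrt::AsyncValueRef<int32_t> value =
      tfrt::MakeAvailableAsyncValueRef<int32_t>(&host, 42);

  // AndThen runs the callback once the underlying value becomes concrete;
  // here it fires right away because the value is already available.
  value.AndThen([copy = value.CopyRef()] { std::printf("%d\n", copy.get()); });
  return 0;
}
```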
How do I contact the TFRT team?
For general discussions about the runtime (e.g. design and requirements), please subscribe to the TFRT mailing list. The mailing list will also be used for all public announcements, so it's a good way to stay informed of TFRT's progress. For bug reports and feature requests, please use GitHub issues.