TFRT Frequently Asked Questions

When can I contribute to the TFRT codebase?

Because TFRT is still an early-stage project, we will not be accepting PRs initially. However, we still encourage community participation in the form of bug reports, feature requests, and design and requirements discussions. See the contact section below for more details.

I want to use TFRT in my training and inference workloads; how will that work?

TFRT is still an early-stage project and is not yet integrated with TensorFlow. Once it is, TFRT will be (mostly) invisible to the end user; the infrastructure should just work. Initially, you will be able to run training and inference workloads with TFRT via an opt-in flag. Once we've addressed known issues, stress-tested at scale, and fine-tuned performance, TFRT will be available by default.

What hardware devices will TFRT support?

TFRT is built to make it easy to plug in new devices of various types. Initially, it will support CPUs, GPUs, and TPUs of different flavors. Eventually it will support a variety of other devices that hardware developers want to integrate with TensorFlow. Note that the first release includes only CPU support (TPU support will remain Google-only); a GPU build will be made available shortly.

I’m starting a new hardware initiative; how can I integrate with TFRT-based TensorFlow?

TFRT is not quite ready to support the addition of new hardware. However, if you or your team have an interest in adding new hardware to TensorFlow, please reach out to us via the TFRT mailing list.

I want to add an op; how can I do this within TFRT-based TensorFlow?

We are still defining the “adding an op” story and will share more in time. Adding ops is one of the key use cases we are considering in TFRT’s modularity and extensibility requirements. To stay informed of our progress, please subscribe to the TFRT mailing list.

I use TFLite; will TFRT support mobile workloads?

TFRT does not currently support mobile deployments. We will share more once we have a more clearly defined mobile plan. Please subscribe to the TFRT mailing list to stay informed.

What OS platforms will TFRT support?

TFRT currently supports Linux (Ubuntu), but it will eventually support all platforms that TensorFlow supports.

What’s your plan for the current runtime?

We will continue to prioritize fixing critical bugs, as well as making selective enhancements to the current stack to fulfill short-term user needs.

Where can I learn more about TFRT?

To learn more about TFRT's early progress and wins, check out our Dev Summit presentation, which includes a performance benchmark for small-batch GPU inference on ResNet-50, and our deep dive presentation, which gives a detailed overview of TFRT's core components, low-level abstractions, and general design principles. The TFRT announcement blog post also provides a nice introduction to the new runtime.

How can I get in contact with the TFRT team?

For general discussions about the runtime (e.g. design and requirements), please subscribe to the TFRT mailing list. The mailing list will also be used for all public announcements, so it's a good way to stay informed of TFRT's progress. For bug reports and feature requests, please use GitHub issues.