Using TensorFlow Lite with Python is great for embedded devices based on Linux, such as Raspberry Pi{:.external} and Coral devices with Edge TPU{:.external}, among many others.
This page shows how you can start running TensorFlow Lite models with Python in just a few minutes. All you need is a TensorFlow model converted to TensorFlow Lite. (If you don't have a model converted yet, you can experiment using the model provided with the example linked below.)
To quickly start executing TensorFlow Lite models with Python, you can install
just the TensorFlow Lite interpreter, instead of all the TensorFlow packages. We
call this simplified Python package `tflite_runtime`.
The `tflite_runtime` package is a fraction of the size of the full `tensorflow`
package and includes the bare minimum code required to run inferences with
TensorFlow Lite, primarily the `Interpreter` Python class. This small package is
ideal when all you want to do is execute `.tflite` models and avoid wasting disk
space with the large TensorFlow library.
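Scripts that need to run in both environments often try the lightweight import first and fall back to the full package. A minimal sketch of that pattern (the helper name `load_interpreter_module` is illustrative, not part of either package's API):

```python
def load_interpreter_module():
    """Return a module that exposes the TFLite Interpreter class.

    Prefers the lightweight tflite_runtime package; falls back to the
    full tensorflow package if tflite_runtime is not installed.
    """
    try:
        import tflite_runtime.interpreter as tflite
    except ImportError:
        # Full TensorFlow installed instead of the slim runtime.
        import tensorflow.lite as tflite
    return tflite
```

Either way, the returned module provides the same `Interpreter` class, so the rest of your inference code is unchanged.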
Note: If you need access to other Python APIs, such as the
TensorFlow Lite Converter, you must install the
full TensorFlow package.
For example, the
[Select TF ops](https://www.tensorflow.org/lite/guide/ops_select) are not
included in the `tflite_runtime` package. If your models have any dependencies
on the Select TF ops, you need to use the full TensorFlow package instead.
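For reference, enabling Select TF ops happens at conversion time with the full `tensorflow` package. A rough sketch, assuming a SavedModel directory of your own (the path argument is a placeholder):

```python
def convert_with_select_ops(saved_model_dir):
    """Convert a SavedModel to TFLite, allowing Select TF ops.

    Requires the full tensorflow package; tflite_runtime has no converter.
    """
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Permit a subset of TensorFlow ops alongside the TFLite builtins.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    # Returns the .tflite model as a bytes flatbuffer.
    return converter.convert()
```

Note that a model converted this way still cannot run under `tflite_runtime` alone; inference also requires the full package (or a runtime built with the Flex delegate).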
You can install on Linux with pip:

<pre class="devsite-terminal devsite-click-to-copy">
python3 -m pip install tflite-runtime
</pre>

The `tflite-runtime` Python wheels are pre-built and provided for these
platforms:
If you want to run TensorFlow Lite models on other platforms, you should either use the full TensorFlow package, or build the tflite-runtime package from source.
If you're using TensorFlow with the Coral Edge TPU, you should instead follow the appropriate Coral setup documentation.
Note: We no longer update the Debian package `python3-tflite-runtime`. The
latest Debian package is for TF version 2.5, which you can install by following
these older instructions.
Note: We no longer release pre-built tflite-runtime wheels for Windows and
macOS. For these platforms, you should use the
full TensorFlow package, or
build the tflite-runtime package from source.
Instead of importing `Interpreter` from the `tensorflow` module, you now need to
import it from `tflite_runtime`.
For example, after you install the package above, copy and run the
`label_image.py` file. It will (probably) fail because you don't have the
`tensorflow` library installed. To fix it, edit this line of the file:

```python
import tensorflow as tf
```

So it instead reads:

```python
import tflite_runtime.interpreter as tflite
```

And then change this line:

```python
interpreter = tf.lite.Interpreter(model_path=args.model_file)
```

So it reads:

```python
interpreter = tflite.Interpreter(model_path=args.model_file)
```

Now run `label_image.py` again. That's it! You're now executing TensorFlow Lite
models.
For more details about the Interpreter API, read
Load and run a model in Python.
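Putting the pieces together, a minimal end-to-end inference with the `Interpreter` API looks roughly like this. The model path and input shape below are placeholders; substitute values that match your own `.tflite` model:

```python
import numpy as np


def run_inference(model_path, input_data):
    """Run one inference with tflite_runtime and return the raw output."""
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # input_data must match the model's expected shape and dtype.
    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])


if __name__ == "__main__":
    # Placeholder input for a hypothetical 224x224 RGB float model.
    dummy = np.zeros((1, 224, 224, 3), dtype=np.float32)
    result = run_inference("model.tflite", dummy)
    print(result.shape)
```

For quantized models, check `input_details[0]["dtype"]` and quantize your input accordingly before calling `set_tensor`.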
If you have a Raspberry Pi, check out a video series about how to run object detection on Raspberry Pi using TensorFlow Lite.
If you're using a Coral ML accelerator, check out the Coral examples on GitHub.
To convert other TensorFlow models to TensorFlow Lite, read about the TensorFlow Lite Converter.
If you want to build the `tflite_runtime` wheel yourself, read
Build TensorFlow Lite Python Wheel Package.