tensorflow/lite/g3doc/android/java.md
TensorFlow Lite in Google Play services can also be accessed using Java APIs, in addition to the Native API. In particular, TensorFlow Lite in Google Play services is available through the TensorFlow Lite Task API and the TensorFlow Lite Interpreter API. The Task Library provides optimized out-of-the-box model interfaces for common machine learning tasks using visual, audio, and text data. The TensorFlow Lite Interpreter API, provided by the TensorFlow runtime, offers a more general-purpose interface for building and running ML models.
The following sections provide instructions on how to use the Interpreter and Task Library APIs with TensorFlow Lite in Google Play services. While it is possible for an app to use both the Interpreter APIs and Task Library APIs, most apps should only use one set of APIs.
The TensorFlow Lite Task API wraps the Interpreter API and provides a high-level programming interface for common machine learning tasks that use visual, audio, and text data. You should use the Task API if your application requires one of the supported tasks.
Which dependency you need depends on your machine learning use case. The Task APIs contain the following libraries:

- `org.tensorflow:tensorflow-lite-task-vision-play-services`
- `org.tensorflow:tensorflow-lite-task-audio-play-services`
- `org.tensorflow:tensorflow-lite-task-text-play-services`

Add one of the dependencies to your app project code to access the Play services API for TensorFlow Lite. For example, use the following to implement a vision task:
```
dependencies {
...
    implementation 'org.tensorflow:tensorflow-lite-task-vision-play-services:0.4.2'
...
}
```
Caution: The Maven repository for version 0.4.2 of the TensorFlow Lite Tasks Audio library is incomplete. Use version 0.4.2.1 for this library instead: `org.tensorflow:tensorflow-lite-task-audio-play-services:0.4.2.1`.
Initialize the TensorFlow Lite component of the Google Play services API before using the TensorFlow Lite APIs. The following example initializes the vision library:
Kotlin:

```kotlin
init {
    TfLiteVision.initialize(context)
}
```

Important: Make sure the `TfLite.initialize` task completes before executing code that accesses TensorFlow Lite APIs.
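One way to do this is to attach success and failure listeners to the task returned by `TfLiteVision.initialize()`. The following is a minimal sketch of that pattern; the `TAG` constant and the work done in the success listener are placeholders for your own code:

```kotlin
TfLiteVision.initialize(context)
    .addOnSuccessListener {
        // Safe to create detectors and run inference from this point on.
    }
    .addOnFailureListener { e ->
        Log.e(TAG, "TfLiteVision failed to initialize", e)
    }
```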
Tip: The TensorFlow Lite modules are installed at the same time your application
is installed or updated from the Play Store. You can check the availability of
the modules by using ModuleInstallClient from the Google Play services APIs.
For more information on checking module availability, see
Ensuring API availability with ModuleInstallClient.
After initializing the TensorFlow Lite component, call the detect() method to
generate inferences. The exact code within the detect() method varies
depending on the library and use case. The following is for a simple object
detection use case with the TfLiteVision library:
```kotlin
fun detect(...) {
    if (objectDetector == null) {
        setupObjectDetector()
    }
    ...
}
```
Depending on the data format, you may also need to preprocess and convert your data within the `detect()` method before generating inferences. For example, image data for an object detector requires the following:
```kotlin
val imageProcessor = ImageProcessor.Builder().add(Rot90Op(-imageRotation / 90)).build()
val tensorImage = imageProcessor.process(TensorImage.fromBitmap(image))
val results = objectDetector?.detect(tensorImage)
```
The Interpreter APIs offer more control and flexibility than the Task Library APIs. You should use the Interpreter APIs if your machine learning task is not supported by the Task library, or if you require a more general-purpose interface for building and running ML models.
Add the following dependencies to your app project code to access the Play services API for TensorFlow Lite:
```
dependencies {
...
    // Tensorflow Lite dependencies for Google Play services
    implementation 'com.google.android.gms:play-services-tflite-java:16.4.0'
    // Optional: include Tensorflow Lite Support Library
    implementation 'com.google.android.gms:play-services-tflite-support:16.4.0'
...
}
```
Initialize the TensorFlow Lite component of the Google Play services API before using the TensorFlow Lite APIs:
Kotlin:

```kotlin
val initializeTask: Task<Void> by lazy { TfLite.initialize(this) }
```

Java:

```java
Task<Void> initializeTask = TfLite.initialize(context);
```

Note: Make sure the `TfLite.initialize` task completes before executing code that accesses TensorFlow Lite APIs. Use the `addOnSuccessListener()` method, as shown in the next section.
Create an interpreter using `InterpreterApi.create()` and configure it to use the Google Play services runtime by calling `InterpreterApi.Options.setRuntime()`, as shown in the following example code:
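The following is a minimal sketch of this pattern, continuing from the `initializeTask` created above; `modelBuffer` is a placeholder for your loaded model (for example, a `MappedByteBuffer` returned by `FileUtil.loadMappedFile()`):

```kotlin
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime

private lateinit var interpreter: InterpreterApi

initializeTask.addOnSuccessListener {
    // Only create the interpreter after initialization has succeeded.
    val interpreterOption =
        InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
    interpreter = InterpreterApi.create(modelBuffer, interpreterOption)
}.addOnFailureListener { e ->
    Log.e("Interpreter", "Cannot initialize interpreter", e)
}
```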
You should use the implementation above because it avoids blocking the Android
user interface thread. If you need to manage thread execution more closely, you
can add a Tasks.await() call to interpreter creation:
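A minimal sketch of that variant, assuming the code runs on a background thread and `modelBuffer` holds your loaded model as above:

```kotlin
import com.google.android.gms.tasks.Tasks

// Blocks the current thread until TfLite.initialize() completes,
// so this must not run on the main thread.
Tasks.await(initializeTask)
val interpreter = InterpreterApi.create(
    modelBuffer,
    InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
)
```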
Warning: Do not call .await() on the foreground user interface thread because
it interrupts display of user interface elements and creates a poor user
experience.
Using the interpreter object you created, call the run() method to generate
an inference.
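For example, assuming `inputBuffer` and `outputBuffer` are buffers you have allocated to match your model's input and output tensors (hypothetical names, not part of the API):

```kotlin
// Runs inference; the input and output objects must match the
// model's tensor shapes and data types.
interpreter.run(inputBuffer, outputBuffer)
```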
TensorFlow Lite allows you to accelerate the performance of your model using specialized hardware processors, such as graphics processing units (GPUs). You can take advantage of these specialized processors using hardware drivers called delegates. You can use the following hardware acceleration delegates with TensorFlow Lite in Google Play services:
- GPU delegate (recommended) - This delegate is provided through Google Play services and is dynamically loaded, just like the Play services versions of the Task API and Interpreter API.
- NNAPI delegate - This delegate is available as an included library dependency in your Android development project, and is bundled into your app.
For more information about hardware acceleration with TensorFlow Lite, see the TensorFlow Lite Delegates page.
Not all devices support GPU hardware acceleration with TFLite. In order to
mitigate errors and potential crashes, use the
TfLiteGpu.isGpuDelegateAvailable method to check whether a device is
compatible with the GPU delegate.
Use this method to confirm whether a device is compatible with GPU, and use CPU or the NNAPI delegate as a fallback for when GPU is not supported.
```kotlin
val useGpuTask = TfLiteGpu.isGpuDelegateAvailable(context)
```
Once you have a variable like useGpuTask, you can use it to determine whether
devices use the GPU delegate. The following examples show how this can be done
with both the Task Library and Interpreter APIs.
With the Task Library API:
Kotlin:

```kotlin
val optionsTask = useGpuTask.continueWith { task ->
    val baseOptionsBuilder = BaseOptions.builder()
    if (task.result) {
        baseOptionsBuilder.useGpu()
    }
    ObjectDetectorOptions.builder()
        .setBaseOptions(baseOptionsBuilder.build())
        .setMaxResults(1)
        .build()
}
```

Java:

```java
Task<ObjectDetectorOptions> optionsTask = useGpuTask.continueWith(task -> {
    BaseOptions.Builder baseOptionsBuilder = BaseOptions.builder();
    if (task.getResult()) {
        baseOptionsBuilder.useGpu();
    }
    return ObjectDetectorOptions.builder()
        .setBaseOptions(baseOptionsBuilder.build())
        .setMaxResults(1)
        .build();
});
```

With the Interpreter API:
Kotlin:

```kotlin
val interpreterTask = useGpuTask.continueWith { task ->
    val interpreterOptions = InterpreterApi.Options()
        .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
    if (task.result) {
        interpreterOptions.addDelegateFactory(GpuDelegateFactory())
    }
    InterpreterApi.create(FileUtil.loadMappedFile(context, MODEL_PATH), interpreterOptions)
}
```

Java:

```java
Task<InterpreterApi.Options> interpreterOptionsTask = useGpuTask.continueWith(task -> {
    InterpreterApi.Options options =
        new InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY);
    if (task.getResult()) {
        options.addDelegateFactory(new GpuDelegateFactory());
    }
    return options;
});
```

To use the GPU delegate with the Task APIs:
Update the project dependencies to use the GPU delegate from Play services:
```
implementation 'com.google.android.gms:play-services-tflite-gpu:16.4.0'
```
Initialize the GPU delegate with setEnableGpuDelegateSupport. For example,
you can initialize the GPU delegate for TfLiteVision with the following:
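A minimal sketch of that initialization:

```kotlin
TfLiteVision.initialize(
    context,
    TfLiteInitializationOptions.builder()
        .setEnableGpuDelegateSupport(true)
        .build()
)
```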
Enable the GPU delegate option with
BaseOptions:
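For example, a minimal sketch:

```kotlin
// Request the GPU delegate in the base options for the task.
val baseOptions = BaseOptions.builder().useGpu().build()
```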
Configure the options using .setBaseOptions. For example, you can set up
GPU in ObjectDetector with the following:
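Building on the `baseOptions` value from the previous step, the configuration might look like this:

```kotlin
val options = ObjectDetectorOptions.builder()
    .setBaseOptions(baseOptions)
    .setMaxResults(1)
    .build()
```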
To use the GPU delegate with the Interpreter APIs:
Update the project dependencies to use the GPU delegate from Play services:
```
implementation 'com.google.android.gms:play-services-tflite-gpu:16.4.0'
```
Enable the GPU delegate option in the TfLite initialization:
Kotlin:

```kotlin
TfLite.initialize(context,
    TfLiteInitializationOptions.builder()
        .setEnableGpuDelegateSupport(true)
        .build())
```

Java:

```java
TfLite.initialize(context,
    TfLiteInitializationOptions.builder()
        .setEnableGpuDelegateSupport(true)
        .build());
```

Enable the GPU delegate in the interpreter options by setting the delegate factory to `GpuDelegateFactory` with a call to `addDelegateFactory()` within `InterpreterApi.Options()`:
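A minimal sketch of that configuration, reusing the runtime setting from the earlier interpreter examples:

```kotlin
val interpreterOption = InterpreterApi.Options()
    .setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
    .addDelegateFactory(GpuDelegateFactory())
```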
If you are planning to migrate your app from stand-alone TensorFlow Lite to the Play services API, review the following additional guidance for updating your app project code:
1. Find all occurrences of `new Interpreter` object creation in your code, and modify each one so that it uses the `InterpreterApi.create()` call. The new `TfLite.initialize` is asynchronous, which means in most cases it's not a drop-in replacement: you must register a listener for when the call completes. Refer to the code snippet in Step 3.
2. Add `import org.tensorflow.lite.InterpreterApi;` and `import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime;` to any source files using the `org.tensorflow.lite.Interpreter` or `org.tensorflow.lite.InterpreterApi` classes.
3. If any of the resulting calls to `InterpreterApi.create()` have only a single argument, append `new InterpreterApi.Options()` to the argument list.
4. Append `.setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)` to the last argument of any calls to `InterpreterApi.create()`.
5. Replace all other occurrences of the `org.tensorflow.lite.Interpreter` class with `org.tensorflow.lite.InterpreterApi`.

If you want to use stand-alone TensorFlow Lite and the Play services API side-by-side, you must use TensorFlow Lite 2.9 (or later). TensorFlow Lite 2.8 and earlier versions are not compatible with the Play services API version.