Interpreter.Options

tensorflow/lite/g3doc/api_docs/java/org/tensorflow/lite/Interpreter.Options.html


public static class Interpreter.Options

An options class for controlling runtime interpreter behavior.
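As a minimal sketch of typical usage: the setters return the same `Options` instance, so configuration chains, and the result is passed to the `Interpreter` constructor. The model path here is hypothetical.

```java
import java.io.File;

import org.tensorflow.lite.Interpreter;

public class OptionsExample {
  public static void main(String[] args) {
    // Builder-style configuration: each setter returns the same
    // Interpreter.Options instance, so calls can be chained.
    Interpreter.Options options = new Interpreter.Options()
        .setNumThreads(4)      // use 4 threads for multi-threaded ops
        .setUseXNNPACK(true);  // XNNPACK CPU kernels (enabled by default)

    // "model.tflite" is a hypothetical path used for illustration.
    try (Interpreter interpreter = new Interpreter(new File("model.tflite"), options)) {
      float[][] input = new float[1][4];   // shapes depend on the model
      float[][] output = new float[1][2];
      interpreter.run(input, output);
    }
  }
}
```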

Public Constructors

| Constructor |
| --- |
| `Options()` |
| `Options(InterpreterApi.Options options)` |

Public Methods

| Return type | Method and description |
| --- | --- |
| `Interpreter.Options` | `addDelegate(Delegate delegate)`<br>Adds a `Delegate` to be applied during interpreter creation. |
| `Interpreter.Options` | `addDelegateFactory(DelegateFactory delegateFactory)`<br>Adds a `DelegateFactory` which will be invoked to apply its created `Delegate` during interpreter creation. |
| `Interpreter.Options` | `setAllowBufferHandleOutput(boolean allow)`<br>Advanced: Set if buffer handle output is allowed. |
| `Interpreter.Options` | `setAllowFp16PrecisionForFp32(boolean allow)`<br>This method is deprecated. Prefer using `NnApiDelegate.Options#setAllowFp16(boolean enable)`. |
| `Interpreter.Options` | `setCancellable(boolean allow)`<br>Advanced: Set if the interpreter is able to be cancelled. |
| `Interpreter.Options` | `setNumThreads(int numThreads)`<br>Sets the number of threads to be used for ops that support multi-threading. |
| `Interpreter.Options` | `setRuntime(InterpreterApi.Options.TfLiteRuntime runtime)`<br>Specify where to get the TF Lite runtime implementation from. |
| `Interpreter.Options` | `setUseNNAPI(boolean useNNAPI)`<br>Sets whether to use NN API (if available) for op execution. |
| `Interpreter.Options` | `setUseXNNPACK(boolean useXNNPACK)`<br>Enable or disable an optimized set of CPU kernels (provided by XNNPACK). |

Inherited Methods

From class org.tensorflow.lite.InterpreterApi.Options

| Return type | Method and description |
| --- | --- |
| `InterpreterApi.Options` | `addDelegate(Delegate delegate)`<br>Adds a `Delegate` to be applied during interpreter creation. |
| `InterpreterApi.Options` | `addDelegateFactory(DelegateFactory delegateFactory)`<br>Adds a `DelegateFactory` which will be invoked to apply its created `Delegate` during interpreter creation. |
| `ValidatedAccelerationConfig` | `getAccelerationConfig()`<br>Returns the acceleration configuration. |
| `List<DelegateFactory>` | `getDelegateFactories()`<br>Returns the list of delegate factories that have been registered via `addDelegateFactory(DelegateFactory)`. |
| `List<Delegate>` | `getDelegates()`<br>Returns the list of delegates intended to be applied during interpreter creation that have been registered via `addDelegate`. |
| `int` | `getNumThreads()`<br>Returns the number of threads to be used for ops that support multi-threading. |
| `InterpreterApi.Options.TfLiteRuntime` | `getRuntime()`<br>Returns where to get the TF Lite runtime implementation from. |
| `boolean` | `getUseNNAPI()`<br>Returns whether to use NN API (if available) for op execution. |
| `boolean` | `getUseXNNPACK()` |
| `boolean` | `isCancellable()`<br>Advanced: Returns whether the interpreter is able to be cancelled. |
| `InterpreterApi.Options` | `setAccelerationConfig(ValidatedAccelerationConfig config)`<br>Specify the acceleration configuration. |
| `InterpreterApi.Options` | `setCancellable(boolean allow)`<br>Advanced: Set if the interpreter is able to be cancelled. |
| `InterpreterApi.Options` | `setNumThreads(int numThreads)`<br>Sets the number of threads to be used for ops that support multi-threading. |
| `InterpreterApi.Options` | `setRuntime(InterpreterApi.Options.TfLiteRuntime runtime)`<br>Specify where to get the TF Lite runtime implementation from. |
| `InterpreterApi.Options` | `setUseNNAPI(boolean useNNAPI)`<br>Sets whether to use NN API (if available) for op execution. |
| `InterpreterApi.Options` | `setUseXNNPACK(boolean useXNNPACK)`<br>Enable or disable an optimized set of CPU kernels (provided by XNNPACK). |

From class java.lang.Object

| Return type | Method |
| --- | --- |
| `boolean` | `equals(Object arg0)` |
| `final Class<?>` | `getClass()` |
| `int` | `hashCode()` |
| `final void` | `notify()` |
| `final void` | `notifyAll()` |
| `String` | `toString()` |
| `final void` | `wait(long arg0, int arg1)` |
| `final void` | `wait(long arg0)` |
| `final void` | `wait()` |

Public Constructors

public Options ()

public Options (InterpreterApi.Options options)

Parameters

- `options`

Public Methods

public Interpreter.Options addDelegate (Delegate delegate)

Adds a Delegate to be applied during interpreter creation.

Delegates added here are applied before any delegates created from a DelegateFactory that was added with addDelegateFactory(DelegateFactory).

Note that TF Lite in Google Play Services (see setRuntime(InterpreterApi.Options.TfLiteRuntime)) does not support external (developer-provided) delegates, and adding a Delegate other than NnApiDelegate here is not allowed when using TF Lite in Google Play Services.

Parameters

- `delegate`
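A hedged sketch of adding a delegate, assuming the separate `tensorflow-lite-gpu` artifact is on the classpath (the model path is hypothetical):

```java
import java.io.File;

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;  // from the tensorflow-lite-gpu artifact

public class DelegateExample {
  public static void main(String[] args) {
    // The delegate must outlive any interpreter that uses it, and
    // should be closed after the interpreter is closed.
    try (GpuDelegate gpuDelegate = new GpuDelegate()) {
      Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
      // "model.tflite" is a hypothetical path used for illustration.
      try (Interpreter interpreter = new Interpreter(new File("model.tflite"), options)) {
        // interpreter.run(input, output);
      }
    }
  }
}
```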

public Interpreter.Options addDelegateFactory (DelegateFactory delegateFactory)

Adds a DelegateFactory which will be invoked to apply its created Delegate during interpreter creation.

Delegates from a delegate factory that was added here are applied after any delegates added with addDelegate(Delegate).

Parameters

- `delegateFactory`
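A sketch of a factory: the runtime passes a `RuntimeFlavor` to `create` at interpreter creation time, and the factory returns the delegate to apply. The NNAPI delegate used here is just an illustrative choice.

```java
import org.tensorflow.lite.Delegate;
import org.tensorflow.lite.DelegateFactory;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.RuntimeFlavor;
import org.tensorflow.lite.nnapi.NnApiDelegate;

public class FactoryExample {
  public static void main(String[] args) {
    Interpreter.Options options = new Interpreter.Options()
        .addDelegateFactory(new DelegateFactory() {
          @Override
          public Delegate create(RuntimeFlavor runtimeFlavor) {
            // Invoked during interpreter creation, after delegates
            // registered with addDelegate(Delegate) have been applied.
            return new NnApiDelegate();
          }
        });
    // ... pass options to the Interpreter constructor ...
  }
}
```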

public Interpreter.Options setAllowBufferHandleOutput (boolean allow)

Advanced: Set if buffer handle output is allowed.

When a Delegate supports hardware acceleration, the interpreter will make the data of output tensors available in the CPU-allocated tensor buffers by default. If the client can consume the buffer handle directly (e.g. reading output from OpenGL texture), it can set this flag to false, avoiding the copy of data to the CPU buffer. The delegate documentation should indicate whether this is supported and how it can be used.

WARNING: This is an experimental interface that is subject to change.

Parameters

- `allow`

public Interpreter.Options setAllowFp16PrecisionForFp32 (boolean allow)

This method is deprecated.
Prefer using NnApiDelegate.Options#setAllowFp16(boolean enable).

Sets whether to allow float16 precision for FP32 calculation when possible. Defaults to false (disallow).

Parameters

- `allow`

public Interpreter.Options setCancellable (boolean allow)

Advanced: Set if the interpreter is able to be cancelled.

Interpreters may have an experimental API, setCancelled(boolean). If this interpreter is cancellable and such a method is invoked, a cancellation flag will be set to true. The interpreter will check the flag between op invocations and, if it is true, stop execution. The interpreter will remain in a cancelled state until explicitly "uncancelled" by setCancelled(false).

Parameters

- `allow`
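A sketch of cooperative cancellation, assuming the experimental `Interpreter.setCancelled(boolean)` method described above; the model path and threading are illustrative only.

```java
import java.io.File;

import org.tensorflow.lite.Interpreter;

public class CancelExample {
  public static void main(String[] args) {
    Interpreter.Options options = new Interpreter.Options().setCancellable(true);
    // "model.tflite" is a hypothetical path used for illustration.
    try (Interpreter interpreter = new Interpreter(new File("model.tflite"), options)) {
      // From a watchdog thread: request cancellation. The interpreter
      // checks the flag between op invocations and stops execution.
      new Thread(() -> interpreter.setCancelled(true)).start();

      // interpreter.run(input, output);  // may stop early once cancelled

      // The interpreter stays cancelled until explicitly cleared:
      interpreter.setCancelled(false);
    }
  }
}
```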

public Interpreter.Options setNumThreads (int numThreads)

Sets the number of threads to be used for ops that support multi-threading.

numThreads should be >= -1. Setting numThreads to 0 disables multithreading, which is equivalent to setting numThreads to 1. If unspecified, or set to -1, the number of threads used will be implementation-defined and platform-dependent.

Parameters

- `numThreads`
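One common pattern (a sketch, not a recommendation from this page) sizes the thread count from the device's core count:

```java
import org.tensorflow.lite.Interpreter;

public class ThreadsExample {
  public static void main(String[] args) {
    int cores = Runtime.getRuntime().availableProcessors();
    Interpreter.Options options = new Interpreter.Options()
        // 0 or 1 disables multithreading; -1 (or leaving it unset)
        // lets the implementation choose a platform-dependent default.
        .setNumThreads(Math.max(1, cores / 2));
  }
}
```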

public Interpreter.Options setRuntime (InterpreterApi.Options.TfLiteRuntime runtime)

Specify where to get the TF Lite runtime implementation from.

Parameters

- `runtime`
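For example, assuming the TF Lite in Google Play Services client is available, an app might prefer the system-provided runtime over the one bundled with the application:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.InterpreterApi;

public class RuntimeExample {
  public static void main(String[] args) {
    Interpreter.Options options = new Interpreter.Options()
        // Other values: FROM_APPLICATION_ONLY (the default) and FROM_SYSTEM_ONLY.
        .setRuntime(InterpreterApi.Options.TfLiteRuntime.PREFER_SYSTEM_OVER_APPLICATION);
  }
}
```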

public Interpreter.Options setUseNNAPI (boolean useNNAPI)

Sets whether to use NN API (if available) for op execution. Defaults to false (disabled).

Parameters

- `useNNAPI`

public Interpreter.Options setUseXNNPACK (boolean useXNNPACK)

Enable or disable an optimized set of CPU kernels (provided by XNNPACK). Enabled by default.

Parameters

- `useXNNPACK`
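A sketch combining the two kernel/accelerator toggles; whether NNAPI actually accelerates anything depends on the device.

```java
import org.tensorflow.lite.Interpreter;

public class KernelsExample {
  public static void main(String[] args) {
    Interpreter.Options options = new Interpreter.Options()
        .setUseXNNPACK(false)  // opt out of the XNNPACK CPU kernels
        .setUseNNAPI(true);    // opt in to NNAPI where available
  }
}
```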