public static class GpuDelegateFactory.Options

Delegate options.

Known Direct Subclasses
| GpuDelegate.Options | This class is deprecated. Use GpuDelegateFactory.Options instead. |
Nested Classes
| enum | GpuDelegateFactory.Options.GpuBackend | Which GPU backend to select. |
Constants
| int | INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER | Delegate will be used only once, so bootstrap/init time should be taken into account. |
| int | INFERENCE_PREFERENCE_SUSTAINED_SPEED | Prefer maximizing the throughput. |
Public Constructors
| | Options() |
Public Methods
| boolean | areQuantizedModelsAllowed() |
| GpuDelegateFactory.Options.GpuBackend | getForceBackend() |
| int | getInferencePreference() |
| String | getModelToken() |
| String | getSerializationDir() |
| boolean | isPrecisionLossAllowed() |
| GpuDelegateFactory.Options | setForceBackend(GpuDelegateFactory.Options.GpuBackend forceBackend) Sets the GPU backend. |
| GpuDelegateFactory.Options | setInferencePreference(int preference) Sets the inference preference for precision/compilation/runtime tradeoffs. |
| GpuDelegateFactory.Options | setPrecisionLossAllowed(boolean precisionLossAllowed) Sets whether precision loss is allowed. |
| GpuDelegateFactory.Options | setQuantizedModelsAllowed(boolean quantizedModelsAllowed) Enables running quantized models with the delegate. |
| GpuDelegateFactory.Options | setSerializationParams(String serializationDir, String modelToken) Enables serialization on the delegate. |
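The setters above return the `Options` instance, so a typical configuration chains them and hands the result to a `GpuDelegateFactory`. A minimal sketch, assuming the `tensorflow-lite` and `tensorflow-lite-gpu` artifacts are on the classpath; the model path `model.tflite` is a placeholder:

```java
import java.io.File;
import org.tensorflow.lite.InterpreterApi;
import org.tensorflow.lite.gpu.GpuDelegateFactory;

public class GpuOptionsExample {
  public static void main(String[] args) {
    // Chain the option setters; each returns the Options instance.
    GpuDelegateFactory.Options options = new GpuDelegateFactory.Options()
        .setInferencePreference(
            GpuDelegateFactory.Options.INFERENCE_PREFERENCE_SUSTAINED_SPEED)
        .setPrecisionLossAllowed(true);

    // Register the factory so the runtime can create a GPU delegate
    // matching the selected TF Lite runtime.
    InterpreterApi.Options interpreterOptions = new InterpreterApi.Options()
        .addDelegateFactory(new GpuDelegateFactory(options));

    try (InterpreterApi interpreter =
        InterpreterApi.create(new File("model.tflite"), interpreterOptions)) {
      // interpreter.run(input, output);
    }
  }
}
```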
Inherited Methods
From class java.lang.Object
| boolean | equals(Object arg0) |
| final Class<?> | getClass() |
| int | hashCode() |
| final void | notify() |
| final void | notifyAll() |
| String | toString() |
| final void | wait(long arg0, int arg1) |
| final void | wait(long arg0) |
| final void | wait() |
public static final int INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER
Delegate will be used only once, so bootstrap/init time should be taken into account.
Constant Value: 0

public static final int INFERENCE_PREFERENCE_SUSTAINED_SPEED
Prefer maximizing the throughput. The same delegate will be used repeatedly on multiple inputs.
Constant Value: 1
public GpuDelegateFactory.Options setForceBackend (GpuDelegateFactory.Options.GpuBackend forceBackend)
Sets the GPU backend.
| forceBackend | |
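By default the delegate selects a backend itself; `setForceBackend` pins it to one. A sketch, assuming `GpuBackend.OPENCL` is among the enum's constants in your delegate build (check the `GpuBackend` enum for the values actually available):

```java
import org.tensorflow.lite.gpu.GpuDelegateFactory;

public class ForceBackendExample {
  public static void main(String[] args) {
    // Pin the delegate to OpenCL instead of letting it auto-select.
    // GpuBackend.OPENCL is an assumed constant; verify against the
    // GpuBackend enum shipped with your version.
    GpuDelegateFactory.Options options = new GpuDelegateFactory.Options()
        .setForceBackend(GpuDelegateFactory.Options.GpuBackend.OPENCL);
    System.out.println(options.getForceBackend());
  }
}
```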
public GpuDelegateFactory.Options setInferencePreference (int preference)
Sets the inference preference for precision/compilation/runtime tradeoffs.
| preference | One of INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER (default) or INFERENCE_PREFERENCE_SUSTAINED_SPEED. |
public GpuDelegateFactory.Options setPrecisionLossAllowed (boolean precisionLossAllowed)
Sets whether precision loss is allowed.
| precisionLossAllowed | When true (default), the GPU may quantize tensors, downcast values, and process in FP16. When false, computations are carried out in 32-bit floating point. |
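For accuracy-sensitive models, disabling precision loss keeps computation in FP32 at the cost of throughput on most GPUs. A minimal sketch:

```java
import org.tensorflow.lite.gpu.GpuDelegateFactory;

public class PrecisionExample {
  public static void main(String[] args) {
    // Keep full 32-bit float computation instead of the FP16 default;
    // expect lower throughput on most mobile GPUs.
    GpuDelegateFactory.Options options = new GpuDelegateFactory.Options()
        .setPrecisionLossAllowed(false);
    System.out.println(options.isPrecisionLossAllowed());
  }
}
```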
public GpuDelegateFactory.Options setQuantizedModelsAllowed (boolean quantizedModelsAllowed)
Enables running quantized models with the delegate.
WARNING: This is an experimental API and subject to change.
| quantizedModelsAllowed | When true (default), the GPU may run quantized models. |
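Since quantized execution is on by default, the flag is mainly useful for opting out, for example to keep a quantized graph on CPU kernels while debugging accuracy. A sketch:

```java
import org.tensorflow.lite.gpu.GpuDelegateFactory;

public class QuantizedExample {
  public static void main(String[] args) {
    // Opt out of GPU execution for quantized models; the runtime will
    // fall back to other kernels for such graphs.
    GpuDelegateFactory.Options options = new GpuDelegateFactory.Options()
        .setQuantizedModelsAllowed(false);
    System.out.println(options.areQuantizedModelsAllowed());
  }
}
```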
public GpuDelegateFactory.Options setSerializationParams (String serializationDir, String modelToken)
Enables serialization on the delegate. Note: non-null serializationDir and modelToken are required for serialization.
WARNING: This is an experimental API and subject to change.
| serializationDir | The directory to use for storing the data. The caller is responsible for ensuring the model is not stored in a public directory. On Android, it is recommended to use Context.getCodeCacheDir() to provide a location that is private to the application. |
| modelToken | The token to be used to identify the model. The caller is responsible for ensuring the token is unique to the model graph and data. |
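On Android, serialization lets the delegate cache compiled GPU programs between runs. A sketch, assuming an Android `Context` is available; `"my_model_v1"` is a hypothetical token that must change whenever the model graph or weights change:

```java
import android.content.Context;
import org.tensorflow.lite.gpu.GpuDelegateFactory;

public class SerializationExample {
  static GpuDelegateFactory.Options buildOptions(Context context) {
    // getCodeCacheDir() gives an app-private directory, as the docs
    // recommend; "my_model_v1" is a hypothetical app-defined token.
    return new GpuDelegateFactory.Options()
        .setSerializationParams(
            context.getCodeCacheDir().getAbsolutePath(), "my_model_v1");
  }
}
```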