tensorflow/lite/g3doc/api_docs/java/org/tensorflow/lite/gpu/GpuDelegate.Options.html
public static class GpuDelegate.Options
This class is deprecated.
Use GpuDelegateFactory.Options instead.
Inherits from GpuDelegateFactory.Options for compatibility with existing code.
Inherited Constants

From class org.tensorflow.lite.gpu.GpuDelegateFactory.Options

| Type | Constant | Description |
| --- | --- | --- |
| int | INFERENCE_PREFERENCE_FAST_SINGLE_ANSWER | Delegate will be used only once, therefore bootstrap/init time should be taken into account. |
| int | INFERENCE_PREFERENCE_SUSTAINED_SPEED | Prefer maximizing the throughput. |
Public Constructors

| Options() |
Inherited Methods

From class org.tensorflow.lite.gpu.GpuDelegateFactory.Options

| Return Type | Method | Description |
| --- | --- | --- |
| boolean | areQuantizedModelsAllowed() | |
| GpuDelegateFactory.Options.GpuBackend | getForceBackend() | |
| int | getInferencePreference() | |
| String | getModelToken() | |
| String | getSerializationDir() | |
| boolean | isPrecisionLossAllowed() | |
| GpuDelegateFactory.Options | setForceBackend(GpuDelegateFactory.Options.GpuBackend forceBackend) | Sets the GPU backend. |
| GpuDelegateFactory.Options | setInferencePreference(int preference) | Sets the inference preference for precision/compilation/runtime tradeoffs. |
| GpuDelegateFactory.Options | setPrecisionLossAllowed(boolean precisionLossAllowed) | Sets whether precision loss is allowed. |
| GpuDelegateFactory.Options | setQuantizedModelsAllowed(boolean quantizedModelsAllowed) | Enables running quantized models with the delegate. |
| GpuDelegateFactory.Options | setSerializationParams(String serializationDir, String modelToken) | Enables serialization on the delegate. |
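Because GpuDelegate.Options is deprecated, new code should build its configuration with GpuDelegateFactory.Options directly and pass it to the delegate. A minimal sketch of typical usage follows; the surrounding setup (a `modelBuffer` holding the loaded `.tflite` model, and the input/output objects) is assumed and not part of this page:

```java
import java.nio.MappedByteBuffer;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import org.tensorflow.lite.gpu.GpuDelegateFactory;

class GpuInferenceSketch {
  // modelBuffer is assumed to be a memory-mapped .tflite model.
  static void runWithGpu(MappedByteBuffer modelBuffer, Object input, Object output) {
    // Configure the delegate via the non-deprecated options class.
    GpuDelegateFactory.Options options =
        new GpuDelegateFactory.Options()
            // Favor throughput when the delegate serves many inference calls.
            .setInferencePreference(
                GpuDelegateFactory.Options.INFERENCE_PREFERENCE_SUSTAINED_SPEED)
            .setPrecisionLossAllowed(true)    // allow reduced precision for speed
            .setQuantizedModelsAllowed(true); // run quantized models on the GPU

    // Both GpuDelegate and Interpreter are Closeable; try-with-resources
    // releases the native GPU resources when inference is done.
    try (GpuDelegate delegate = new GpuDelegate(options);
         Interpreter interpreter =
             new Interpreter(modelBuffer, new Interpreter.Options().addDelegate(delegate))) {
      interpreter.run(input, output);
    }
  }
}
```

Note that the setters return the options object itself, so calls can be chained as shown; this is the same builder-style pattern the deprecated GpuDelegate.Options inherited.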
From class java.lang.Object

| Return Type | Method |
| --- | --- |
| boolean | equals(Object arg0) |
| final Class<?> | getClass() |
| int | hashCode() |
| final void | notify() |
| final void | notifyAll() |
| String | toString() |
| final void | wait(long arg0, int arg1) |
| final void | wait(long arg0) |
| final void | wait() |