org.tensorflow.lite.support.common.ops.DequantizeOp
public class DequantizeOp
Dequantizes a TensorBuffer with given zeroPoint and scale.
Note: The data type of the output tensor is always FLOAT32, except when the DequantizeOp is effectively an identity op (for example, zeroPoint is 0 and scale is 1); in that case, the output tensor is the same instance as the input.
If both zeroPoint and scale are 0, the DequantizeOp is bypassed, which is equivalent to setting zeroPoint to 0 and scale to 1. This is useful when the quantization parameters are extracted directly from the TFLite model flatbuffer: if the tensor is not quantized, both zeroPoint and scale are read as 0.
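The arithmetic this op performs is the standard TFLite dequantization formula, real = (quantized - zeroPoint) * scale. A minimal plain-Java sketch of that formula follows; DequantizeSketch and its dequantize method are illustrative names introduced here, not part of the Support library, whose actual entry point is DequantizeOp.apply on a TensorBuffer.

```java
public class DequantizeSketch {

    // Standard TFLite dequantization: real = (quantized - zeroPoint) * scale.
    // DequantizeOp applies the same transform to every element of a TensorBuffer.
    static float[] dequantize(int[] quantized, float zeroPoint, float scale) {
        float[] real = new float[quantized.length];
        for (int i = 0; i < quantized.length; i++) {
            real[i] = (quantized[i] - zeroPoint) * scale;
        }
        return real;
    }

    public static void main(String[] args) {
        // Typical uint8 quantization parameters: zeroPoint = 128, scale = 1/128.
        int[] q = {0, 128, 255};
        float[] r = dequantize(q, 128f, 1f / 128f);
        System.out.printf("%.4f %.4f %.4f%n", r[0], r[1], r[2]);
        // With zeroPoint = 0 and scale = 1, values pass through unchanged,
        // which is the identity case described in the note above.
    }
}
```

In the Support library itself, the op is typically chained into a processing pipeline, e.g. `new TensorProcessor.Builder().add(new DequantizeOp(zeroPoint, scale)).build()`, and then applied to a TensorBuffer via the processor's `process` method.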
Public Constructors
DequantizeOp(float zeroPoint, float scale)
Inherited Methods

From class org.tensorflow.lite.support.common.ops.NormalizeOp
TensorBuffer apply(TensorBuffer input) — Applies the defined normalization on the given tensor and returns the result.
From class java.lang.Object
boolean equals(Object arg0)
final Class<?> getClass()
int hashCode()
final void notify()
final void notifyAll()
String toString()
final void wait(long arg0, int arg1)
final void wait(long arg0)
final void wait()
From interface org.tensorflow.lite.support.common.TensorOperator
abstract TensorBuffer apply(TensorBuffer input)
From interface org.tensorflow.lite.support.common.Operator
abstract TensorBuffer apply(TensorBuffer x) — Applies an operation on a T object, returning a T object.
Parameters
zeroPoint — the zero point of the quantization parameters
scale — the scale of the quantization parameters