tensorflow/lite/g3doc/api_docs/c/group/common.html
This file defines common C types and APIs for implementing operations, delegates, and other constructs in TensorFlow Lite.
The actual operations and delegates can be defined using C++, but the interface between the interpreter and the operations is C.
Summary of abstractions:
- TF_LITE_ENSURE - self-sufficient error checking
- TfLiteStatus - status reporting
- TfLiteIntArray - stores tensor shapes (dims)
- TfLiteContext - allows an op to access the tensors
- TfLiteTensor - tensor (a multidimensional array)
- TfLiteNode - a single node or operation
- TfLiteRegistration - the implementation of a conceptual operation
- TfLiteDelegate - allows delegation of nodes to alternative backends

Some abstractions in this file are created and managed by Interpreter.
NOTE: The order of values in these structs is "semi-ABI stable". New values should be added only to the end of structs and never reordered.
|
|
| --- |
| Anonymous Enum 0 | enum |
| TfLiteAllocationStrategy{ kTfLiteAllocationStrategyMMap, kTfLiteAllocationStrategyArena, kTfLiteAllocationStrategyMalloc, kTfLiteAllocationStrategyNew} | enum
Memory allocation strategies.
|
| TfLiteAllocationType | enum
Memory allocation strategies.
|
| TfLiteCustomAllocationFlags{ kTfLiteCustomAllocationFlagsSkipAlignCheck = 1} | enum
The flags used in Interpreter::SetCustomAllocationForTensor.
|
| TfLiteDelegateFlags{ kTfLiteDelegateFlagsAllowDynamicTensors = 1, kTfLiteDelegateFlagsRequirePropagatedShapes = 2, kTfLiteDelegateFlagsPerOperatorProfiling = 4} | enum
The flags used in TfLiteDelegate.
|
| TfLiteDimensionType | enum
Storage format of each dimension in a sparse tensor.
|
| TfLiteExternalContextType{ kTfLiteGemmLowpContext = 1, kTfLiteEdgeTpuContext = 2, kTfLiteCpuBackendContext = 3, kTfLiteMaxExternalContexts = 4} | enum
The list of external context types known to TF Lite.
|
| TfLiteInPlaceOp{ kTfLiteInplaceOpNone = 0, kTfLiteInplaceOpDataUnmodified = 1, kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput = 2, kTfLiteInplaceOpInput0Shared = 4, kTfLiteInplaceOpInput1Shared = 8, kTfLiteInplaceOpInput2Shared = 16, kTfLiteInplaceOpMaxValue = UINT64_MAX} | enum
The valid values of the inplace_operator field in TfLiteRegistration.
|
| TfLiteQuantizationType{ kTfLiteNoQuantization = 0, kTfLiteAffineQuantization = 1} | enum
Supported quantization types.
|
| TfLiteRunStability{ kTfLiteRunStabilitySingleRun, kTfLiteRunStabilityAcrossRuns} | enum
Describes how stable a tensor attribute is with regard to interpreter runs.
|
| TfLiteRunStep | enum
Describes the steps of a TFLite operation life cycle.
|
|
|
| --- |
| TfLiteAffineQuantization | typedef
struct TfLiteAffineQuantization
Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).
|
| TfLiteAllocationStrategy | typedef
enum TfLiteAllocationStrategy
Memory allocation strategies.
|
| TfLiteAllocationType | typedef
enum TfLiteAllocationType
Memory allocation strategies.
|
| TfLiteBufferHandle | typedef
int
The delegates should use zero or positive integers to represent handles.
|
| TfLiteComplex128 | typedef
struct TfLiteComplex128
Double-precision complex data type compatible with the C99 definition.
|
| TfLiteComplex64 | typedef
struct TfLiteComplex64
Single-precision complex data type compatible with the C99 definition.
|
| TfLiteContext | typedef
struct TfLiteContext
TfLiteContext allows an op to access the tensors.
|
| TfLiteCustomAllocation | typedef
struct TfLiteCustomAllocation
Defines a custom memory allocation not owned by the runtime.
|
| TfLiteCustomAllocationFlags | typedef
enum TfLiteCustomAllocationFlags
The flags used in Interpreter::SetCustomAllocationForTensor.
|
| TfLiteDelegate | typedef
struct TfLiteDelegate
WARNING: This is an experimental interface that is subject to change.
|
| TfLiteDelegateFlags | typedef
enum TfLiteDelegateFlags
The flags used in TfLiteDelegate.
|
| TfLiteDelegateParams | typedef
struct TfLiteDelegateParams
WARNING: This is an experimental interface that is subject to change.
|
| TfLiteDimensionMetadata | typedef
struct TfLiteDimensionMetadata
Metadata to encode each dimension in a sparse tensor.
|
| TfLiteDimensionType | typedef
enum TfLiteDimensionType
Storage format of each dimension in a sparse tensor.
|
| TfLiteEvalTensor | typedef
struct TfLiteEvalTensor
Light-weight tensor struct for TF Micro runtime.
|
| TfLiteExternalContext | typedef
struct TfLiteExternalContext
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
|
| TfLiteExternalContextType | typedef
enum TfLiteExternalContextType
The list of external context types known to TF Lite.
|
| TfLiteFloat16 | typedef
struct TfLiteFloat16
Half precision data type compatible with the C99 definition.
|
| TfLiteFloatArray | typedef
struct TfLiteFloatArray
Fixed size list of floats. Used for per-channel quantization.
|
| TfLiteIntArray | typedef
struct TfLiteIntArray
Fixed size list of integers.
|
| TfLiteNode | typedef
struct TfLiteNode
A structure representing an instance of a node.
|
| TfLiteOpaqueDelegateBuilder | typedef
struct TfLiteOpaqueDelegateBuilder
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
|
| TfLiteOpaqueDelegateParams | typedef
struct TfLiteOpaqueDelegateParams
WARNING: This is an experimental interface that is subject to change.
|
| TfLitePtrUnion | typedef
union TfLitePtrUnion
A union of pointers that points to memory for a given tensor.
|
| TfLiteQuantization | typedef
struct TfLiteQuantization
Structure specifying the quantization used by the tensor, if any.
|
| TfLiteQuantizationType | typedef
enum TfLiteQuantizationType
Supported quantization types.
|
| TfLiteRegistration | typedef
struct TfLiteRegistration
TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).
|
| TfLiteRegistrationExternal | typedef
struct TfLiteRegistrationExternal
TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).
|
| TfLiteRegistration_V1 | typedef
struct TfLiteRegistration_V1
Old version of TfLiteRegistration to maintain binary backward compatibility.
|
| TfLiteRegistration_V2 | typedef
struct TfLiteRegistration_V2
Old version of TfLiteRegistration to maintain binary backward compatibility.
|
| TfLiteRegistration_V3 | typedef
struct TfLiteRegistration_V3
Old version of TfLiteRegistration to maintain binary backward compatibility.
|
| TfLiteRunStability | typedef
enum TfLiteRunStability
Describes how stable a tensor attribute is with regard to interpreter runs.
|
| TfLiteRunStep | typedef
enum TfLiteRunStep
Describes the steps of a TFLite operation life cycle.
|
| TfLiteSparsity | typedef
struct TfLiteSparsity
Parameters used to encode a sparse tensor.
|
| TfLiteTensor | typedef
struct TfLiteTensor
A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined). |
|
|
| --- |
| kTfLiteMaxSharableOpInputs = 3 |
const int
The number of shareable inputs supported. |
|
|
| --- |
| TfLiteDelegateCreate(void) |
TfLiteDelegate
Build a null delegate, with all the fields properly set to their default values.
|
| TfLiteFloatArrayCopy(const TfLiteFloatArray *src) |
TfLiteFloatArray *
Create a copy of an array passed as src.
|
| TfLiteFloatArrayCreate(int size) |
TfLiteFloatArray *
Create an array of a given size (uninitialized entries).
|
| TfLiteFloatArrayFree(TfLiteFloatArray *a) |
void
Free memory of array a.
|
| TfLiteFloatArrayGetSizeInBytes(int size) |
int
Given the size (number of elements) in a TfLiteFloatArray, calculate its size in bytes.
|
| TfLiteIntArrayCopy(const TfLiteIntArray *src) |
TfLiteIntArray *
Create a copy of an array passed as src.
|
| TfLiteIntArrayCreate(int size) |
TfLiteIntArray *
Create an array of a given size (uninitialized entries).
|
| TfLiteIntArrayEqual(const TfLiteIntArray *a, const TfLiteIntArray *b) |
int
Check if two intarrays are equal. Returns 1 if they are equal, 0 otherwise.
|
| TfLiteIntArrayEqualsArray(const TfLiteIntArray *a, int b_size, const int b_data[]) |
int
Check if an intarray equals an array. Returns 1 if equals, 0 otherwise.
|
| TfLiteIntArrayFree(TfLiteIntArray *a) |
void
Free memory of array a.
|
| TfLiteIntArrayGetSizeInBytes(int size) |
size_t
Given the size (number of elements) in a TfLiteIntArray, calculate its size in bytes.
|
| TfLiteOpaqueDelegateCreate(const TfLiteOpaqueDelegateBuilder *opaque_delegate_builder) |
TfLiteOpaqueDelegate *
Creates an opaque delegate and returns its address.
|
| TfLiteOpaqueDelegateDelete(TfLiteOpaqueDelegate *delegate) |
void
Deletes the provided opaque delegate.
|
| TfLiteOpaqueDelegateGetData(const TfLiteOpaqueDelegate *delegate) |
void *
Returns a pointer to the data associated with the provided opaque delegate.
|
| TfLiteQuantizationFree(TfLiteQuantization *quantization) |
void
Free quantization data.
|
| TfLiteSparsityFree(TfLiteSparsity *sparsity) |
void
Free sparsity parameters.
|
| TfLiteTensorCopy(const TfLiteTensor *src, TfLiteTensor *dst) |
TfLiteStatus
Copies the contents of src into dst.
|
| TfLiteTensorDataFree(TfLiteTensor *t) |
void
Free data memory of tensor t.
|
| TfLiteTensorFree(TfLiteTensor *t) |
void
Free memory of tensor t.
|
| TfLiteTensorGetAllocationStrategy(const TfLiteTensor *t) |
TfLiteAllocationStrategy
Returns the allocation strategy used for a tensor's data.
|
| TfLiteTensorGetBufferAddressStability(const TfLiteTensor *t) |
TfLiteRunStability
Returns how stable a tensor data buffer address is across runs.
|
| TfLiteTensorGetDataKnownStep(const TfLiteTensor *t) |
TfLiteRunStep
Returns the operation step when the data of a tensor is populated.
|
| TfLiteTensorGetDataStability(const TfLiteTensor *t) |
TfLiteRunStability
Returns how stable the data values of a tensor are across runs.
|
| TfLiteTensorGetShapeKnownStep(const TfLiteTensor *t) |
TfLiteRunStep
Returns the operation step when the shape of a tensor is computed.
|
| TfLiteTensorRealloc(size_t num_bytes, TfLiteTensor *tensor) |
TfLiteStatus
Change the size of the memory block owned by tensor to num_bytes.
|
| TfLiteTensorReset(TfLiteType type, const char *name, TfLiteIntArray *dims, TfLiteQuantizationParams quantization, char *buffer, size_t size, TfLiteAllocationType allocation_type, const void *allocation, bool is_variable, TfLiteTensor *tensor) |
void
Set all of a tensor's fields (and free any previously allocated data).
|
| TfLiteTensorResizeMaybeCopy(size_t num_bytes, TfLiteTensor *tensor, bool preserve_data) |
TfLiteStatus
Change the size of the memory block owned by tensor to num_bytes.
|
| TfLiteTypeGetName(TfLiteType type) |
const char *
Return the name of a given type, for error reporting purposes. |
|
| | --- | | TfLiteAffineQuantization |
Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).
| | TfLiteComplex128 |
Double-precision complex data type compatible with the C99 definition.
| | TfLiteComplex64 |
Single-precision complex data type compatible with the C99 definition.
| | TfLiteContext |
TfLiteContext allows an op to access the tensors.
| | TfLiteCustomAllocation |
Defines a custom memory allocation not owned by the runtime.
| | TfLiteDelegate |
WARNING: This is an experimental interface that is subject to change.
| | TfLiteDelegateParams |
WARNING: This is an experimental interface that is subject to change.
| | TfLiteDimensionMetadata |
Metadata to encode each dimension in a sparse tensor.
| | TfLiteEvalTensor |
Light-weight tensor struct for TF Micro runtime.
| | TfLiteExternalContext |
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
| | TfLiteFloat16 |
Half precision data type compatible with the C99 definition.
| | TfLiteFloatArray |
Fixed size list of floats. Used for per-channel quantization.
| | TfLiteIntArray |
Fixed size list of integers.
| | TfLiteNode |
A structure representing an instance of a node.
| | TfLiteOpaqueDelegateBuilder |
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
| | TfLiteOpaqueDelegateParams |
WARNING: This is an experimental interface that is subject to change.
| | TfLiteQuantization |
Structure specifying the quantization used by the tensor, if any.
| | TfLiteRegistration |
TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).
| | TfLiteSparsity |
Parameters used to encode a sparse tensor.
| | TfLiteTensor |
A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined).
|
|
| | --- | | TfLitePtrUnion |
A union of pointers that points to memory for a given tensor.
|
Anonymous Enum 0
TfLiteAllocationStrategy
Memory allocation strategies.
TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.
| Properties |
|---|
kTfLiteAllocationStrategyArena |
Handled by the arena.
|
| kTfLiteAllocationStrategyMMap |
Data is mmaped.
|
| kTfLiteAllocationStrategyMalloc |
Uses malloc/free.
|
| kTfLiteAllocationStrategyNew |
Uses new[]/delete[].
|
TfLiteAllocationType
Memory allocation strategies.
- kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
- kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
- kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
- kTfLiteDynamic: Allocated during eval, or for string tensors.
- kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
- kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
- kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.

TfLiteCustomAllocationFlags
The flags used in Interpreter::SetCustomAllocationForTensor.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
| Properties |
|---|
kTfLiteCustomAllocationFlagsSkipAlignCheck |
Skips checking whether allocation.data points to an aligned buffer as expected by the TFLite runtime.
NOTE: Setting this flag can cause crashes when calling Invoke(). Use with caution.
|
TfLiteDelegateFlags
The flags used in TfLiteDelegate.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
| Properties |
|---|
kTfLiteDelegateFlagsAllowDynamicTensors |
The flag is set if the delegate can handle dynamic sized tensors.
For example, the output shape of a Resize op with non-constant shape can only be inferred when the op is invoked. In this case, the Delegate is responsible for calling SetTensorToDynamic to mark the tensor as a dynamic tensor, and calling ResizeTensor when invoking the op.
If the delegate isn't capable of handling dynamic tensors, this flag must not be set.
|
| kTfLiteDelegateFlagsPerOperatorProfiling |
This flag can be used by delegates to request per-operator profiling.
If a node is a delegate node, this flag is checked before profiling. If set, the delegate node itself will not be profiled; the delegate instead adds per-operator information using Profiler::EventType::OPERATOR_INVOKE_EVENT, so the results appear in the operator-wise profiling section rather than in the delegate-internal section.
|
| kTfLiteDelegateFlagsRequirePropagatedShapes |
This flag can be used by delegates (that allow dynamic tensors) to ensure applicable tensor shapes are automatically propagated in the case of tensor resizing.
This means that non-dynamic (allocation_type != kTfLiteDynamic) I/O tensors of a delegate kernel will have correct shapes before its Prepare() method is called. The runtime leverages TFLite builtin ops in the original execution plan to propagate shapes.
WARNING: This feature is experimental and subject to change.
|
TfLiteDimensionType
Storage format of each dimension in a sparse tensor.
TfLiteExternalContextType
The list of external context types known to TF Lite.
This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.
| Properties |
|---|
kTfLiteCpuBackendContext |
include cpu_backend_context.h to use.
|
| kTfLiteEdgeTpuContext |
Placeholder for Edge TPU support.
|
| kTfLiteGemmLowpContext |
include gemm_support.h to use.
|
| kTfLiteMaxExternalContexts |
Sentinel marking the number of external context types.
|
TfLiteInPlaceOp
The valid values of the inplace_operator field in TfLiteRegistration.
This allows an op to signal to the runtime that the same data pointer may be passed as an input and output without impacting the result. This does not mean that the memory can safely be reused; it is up to the runtime to determine this, e.g. whether another op consumes the same input, or whether an input tensor has sufficient memory allocated to store the output data.
Setting these flags authorizes the runtime to set the data pointers of an input and output tensor to the same value. In such cases, the memory required by the output must be less than or equal to that required by the shared input, never greater. If kTfLiteInplaceOpDataUnmodified is set, then the runtime can share the same input tensor with multiple operators' outputs, provided that kTfLiteInplaceOpDataUnmodified is set for all of them. Otherwise, if an input tensor is consumed by multiple operators, it may only be shared with the operator which is the last to consume it.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
| Properties |
|---|
kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput |
Setting kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput means that InputN may be shared with OutputN instead of with the first output.
This flag requires one or more of kTfLiteInplaceOpInputNShared to be set.
|
| kTfLiteInplaceOpDataUnmodified |
This indicates that an op's first output's data is identical to its first input's data, for example Reshape.
|
| kTfLiteInplaceOpInput0Shared |
kTfLiteInplaceOpInputNShared indicates that it is safe for an op to share InputN's data pointer with an output tensor.
If kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set, then kTfLiteInplaceOpInputNShared indicates that InputN may be shared with OutputN; otherwise it indicates that InputN may be shared with the first output.
For kTfLiteInplaceOpInput0Shared, an op's first input may be shared with the first output tensor; kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput has no impact on the behavior allowed by this flag.
|
| kTfLiteInplaceOpInput1Shared |
Indicates that an op's second input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set, or with the second output if it is set.
|
| kTfLiteInplaceOpInput2Shared |
Indicates that an op's third input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set, or with the third output if it is set.
|
| kTfLiteInplaceOpMaxValue |
Placeholder to ensure that enum can hold 64 bit values to accommodate future fields.
|
| kTfLiteInplaceOpNone |
The default value.
This indicates that the same data pointer cannot safely be passed as an op's input and output.
|
TfLiteQuantizationType
Supported quantization types.
| Properties |
|---|
kTfLiteAffineQuantization |
Affine quantization (with support for per-channel quantization).
Corresponds to TfLiteAffineQuantization.
|
| kTfLiteNoQuantization |
No quantization.
|
TfLiteRunStability
Describes how stable a tensor attribute is with regard to interpreter runs.
| Properties |
|---|
kTfLiteRunStabilityAcrossRuns |
Will stay the same across all runs.
|
| kTfLiteRunStabilitySingleRun |
Will stay the same for one run.
|
TfLiteRunStep
Describes the steps of a TFLite operation life cycle.
struct [TfLiteAffineQuantization](/lite/api_docs/c/struct/tf-lite-affine-quantization.html#struct_tf_lite_affine_quantization)
Parameters for asymmetric quantization across a dimension (i.e., per-output-channel quantization).
quantized_dimension specifies which dimension the scales and zero_points correspond to. For a particular value in quantized_dimension, quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)
enum [TfLiteAllocationStrategy](/lite/api_docs/c/group/common.html#group__common_1gae00888e38fbedcb5130cf359c574580a)
Memory allocation strategies.
TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.
enum [TfLiteAllocationType](/lite/api_docs/c/group/common.html#group__common_1gae48332e93fec6c3fe7c4ee4897770d4b)
Memory allocation strategies.
- kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
- kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
- kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
- kTfLiteDynamic: Allocated during eval, or for string tensors.
- kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
- kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
- kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.

int TfLiteBufferHandle
The delegates should use zero or positive integers to represent handles.
-1 is reserved for unallocated status.
struct [TfLiteComplex128](/lite/api_docs/c/struct/tf-lite-complex128.html#struct_tf_lite_complex128)
Double-precision complex data type compatible with the C99 definition.
struct [TfLiteComplex64](/lite/api_docs/c/struct/tf-lite-complex64.html#struct_tf_lite_complex64)
Single-precision complex data type compatible with the C99 definition.
struct [TfLiteContext](/lite/api_docs/c/struct/tf-lite-context.html#struct_tf_lite_context)
TfLiteContext allows an op to access the tensors.
TfLiteContext is a struct that is created by the TF Lite runtime and passed to the "methods" (C function pointers) in the TfLiteRegistration struct that are used to define custom ops and custom delegate kernels. It contains information and methods (C function pointers) that can be called by the code implementing a custom op or a custom delegate kernel. These methods provide access to the context in which that custom op or custom delegate kernel occurs, such as access to the input and output tensors for that op, as well as methods for allocating memory buffers and intermediate tensors, etc.
See also TfLiteOpaqueContext, which is a more ABI-stable equivalent.
struct [TfLiteCustomAllocation](/lite/api_docs/c/struct/tf-lite-custom-allocation.html#struct_tf_lite_custom_allocation)
Defines a custom memory allocation not owned by the runtime.
data should be aligned to kDefaultTensorAlignment defined in lite/util.h (currently 64 bytes). NOTE: See Interpreter::SetCustomAllocationForTensor for details on usage.
enum [TfLiteCustomAllocationFlags](/lite/api_docs/c/group/common.html#group__common_1gab9ba81155474515a58a7281794950836)
The flags used in Interpreter::SetCustomAllocationForTensor.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
struct [TfLiteDelegate](/lite/api_docs/c/struct/tf-lite-delegate.html#struct_tf_lite_delegate)
WARNING: This is an experimental interface that is subject to change.
enum [TfLiteDelegateFlags](/lite/api_docs/c/group/common.html#group__common_1ga69fe39f64b6ac9983afdda112b0256d4)
The flags used in TfLiteDelegate.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
struct [TfLiteDelegateParams](/lite/api_docs/c/struct/tf-lite-delegate-params.html#struct_tf_lite_delegate_params)
WARNING: This is an experimental interface that is subject to change.
Currently, TfLiteDelegateParams has to be allocated in a way that it is trivially destructible. It will be stored in the builtin_data field of the delegate node's TfLiteNode.
See also the CreateDelegateParams function in interpreter.cc for details.
struct [TfLiteDimensionMetadata](/lite/api_docs/c/struct/tf-lite-dimension-metadata.html#struct_tf_lite_dimension_metadata)
Metadata to encode each dimension in a sparse tensor.
enum [TfLiteDimensionType](/lite/api_docs/c/group/common.html#group__common_1ga71e0001719140df24b5ae660cd0c322e)
Storage format of each dimension in a sparse tensor.
struct [TfLiteEvalTensor](/lite/api_docs/c/struct/tf-lite-eval-tensor.html#struct_tf_lite_eval_tensor)
Light-weight tensor struct for TF Micro runtime.
Provides the minimal amount of information required for a kernel to run during TfLiteRegistration::Eval.
struct [TfLiteExternalContext](/lite/api_docs/c/struct/tf-lite-external-context.html#struct_tf_lite_external_context)
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
TF Lite knows very little about the actual contexts, but it keeps a list of them, and is able to refresh them if configurations like the number of recommended threads change.
enum [TfLiteExternalContextType](/lite/api_docs/c/group/common.html#group__common_1ga6d1d3582ab46e837f9f108839232dc03)
The list of external context types known to TF Lite.
This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.
struct [TfLiteFloat16](/lite/api_docs/c/struct/tf-lite-float16.html#struct_tf_lite_float16)
Half precision data type compatible with the C99 definition.
struct [TfLiteFloatArray](/lite/api_docs/c/struct/tf-lite-float-array.html#struct_tf_lite_float_array)
Fixed size list of floats. Used for per-channel quantization.
struct [TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)
Fixed size list of integers.
Used for dimensions and input/output tensor indices.
struct [TfLiteNode](/lite/api_docs/c/struct/tf-lite-node.html#struct_tf_lite_node)
A structure representing an instance of a node.
This structure only exhibits the inputs, outputs, user defined data and some node properties (like statefulness), not other features like the type.
struct [TfLiteOpaqueDelegateBuilder](/lite/api_docs/c/struct/tf-lite-opaque-delegate-builder.html#struct_tf_lite_opaque_delegate_builder)
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
Note: This struct is not ABI stable.
For forward source compatibility TfLiteOpaqueDelegateBuilder objects should be brace-initialized, so that all fields (including any that might be added in the future) get zero-initialized. The purpose of each field is exactly the same as with TfLiteDelegate.
WARNING: This is an experimental interface that is subject to change.
struct [TfLiteOpaqueDelegateParams](/lite/api_docs/c/struct/tf-lite-opaque-delegate-params.html#struct_tf_lite_opaque_delegate_params)
WARNING: This is an experimental interface that is subject to change.
Currently, TfLiteOpaqueDelegateParams has to be allocated in a way that it is trivially destructible. It will be stored in the builtin_data field of the delegate node's TfLiteNode.
See also the CreateOpaqueDelegateParams function in subgraph.cc for details.
union [TfLitePtrUnion](/lite/api_docs/c/union/tf-lite-ptr-union.html#union_tf_lite_ptr_union)
A union of pointers that points to memory for a given tensor.
Do not access these members directly; if possible, use GetTensorData(tensor) instead, otherwise only access .data, as other members are deprecated.
struct [TfLiteQuantization](/lite/api_docs/c/struct/tf-lite-quantization.html#struct_tf_lite_quantization)
Structure specifying the quantization used by the tensor, if-any.
enum [TfLiteQuantizationType](/lite/api_docs/c/group/common.html#group__common_1ga9a7dad0b2e1bc9afe44b055915cbddb8)
Supported quantization types.
struct [TfLiteRegistration](/lite/api_docs/c/struct/tf-lite-registration.html#struct_tf_lite_registration)
TfLiteRegistration defines the implementation of an operation (a built-in op, custom op, or custom delegate kernel).
It is a struct containing "methods" (C function pointers) that will be invoked by the TF Lite runtime to evaluate instances of the operation.
See also TfLiteRegistrationExternal which is a more ABI-stable equivalent.
struct [TfLiteRegistrationExternal](/lite/api_docs/c/group/common.html#group__common_1gac0d70c820dfb7187a23e38e14deec7eb)
TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).
The purpose of each field is exactly the same as with TfLiteRegistration.
struct TfLiteRegistration_V1
Old version of TfLiteRegistration to maintain binary backward compatibility.
The legacy registration type must be a POD struct whose field types are a prefix of the field types in TfLiteRegistration, and the offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.
WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.
struct TfLiteRegistration_V2
Old version of TfLiteRegistration to maintain binary backward compatibility.
The legacy registration type must be a POD struct whose field types are a prefix of the field types in TfLiteRegistration, and the offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.
WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.
struct TfLiteRegistration_V3
Old version of TfLiteRegistration to maintain binary backward compatibility.
The legacy registration type must be a POD struct whose field types are a prefix of the field types in TfLiteRegistration, and the offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.
WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.
enum [TfLiteRunStability](/lite/api_docs/c/group/common.html#group__common_1ga8e48d3a995a7dea060434068920c5b23)
Describes how stable a tensor attribute is with regard to interpreter runs.
enum [TfLiteRunStep](/lite/api_docs/c/group/common.html#group__common_1gaa07ed5a55fa2bff442239a17b6c371d9)
Describes the steps of a TFLite operation life cycle.
struct [TfLiteSparsity](/lite/api_docs/c/struct/tf-lite-sparsity.html#struct_tf_lite_sparsity) TfLiteSparsity
Parameters used to encode a sparse tensor.
For detailed explanation of each field please refer to lite/schema/schema.fbs.
struct [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor) TfLiteTensor
A tensor in the interpreter system which is a wrapper around a buffer of data including a dimensionality (or NULL if not currently defined).
const int kTfLiteMaxSharableOpInputs = 3
The number of shareable inputs supported.
[TfLiteDelegate](/lite/api_docs/c/struct/tf-lite-delegate.html#struct_tf_lite_delegate) TfLiteDelegateCreate(void)
Build a null delegate, with all the fields properly set to their default values.
[TfLiteFloatArray](/lite/api_docs/c/struct/tf-lite-float-array.html#struct_tf_lite_float_array)* TfLiteFloatArrayCopy(
const[TfLiteFloatArray](/lite/api_docs/c/struct/tf-lite-float-array.html#struct_tf_lite_float_array)*src
)
Create a copy of an array passed as src.
You are expected to free memory with TfLiteFloatArrayFree.
[TfLiteFloatArray](/lite/api_docs/c/struct/tf-lite-float-array.html#struct_tf_lite_float_array)* TfLiteFloatArrayCreate(
int size
)
Create an array of a given size (uninitialized entries).
This returns a pointer that you must free using TfLiteFloatArrayFree().
void TfLiteFloatArrayFree([TfLiteFloatArray](/lite/api_docs/c/struct/tf-lite-float-array.html#struct_tf_lite_float_array)*a
)
Free memory of array a.
int TfLiteFloatArrayGetSizeInBytes(
int size
)
Given the size (number of elements) in a TfLiteFloatArray, calculate its size in bytes.
[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)* TfLiteIntArrayCopy(
const[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)*src
)
Create a copy of an array passed as src.
You are expected to free memory with TfLiteIntArrayFree
[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)* TfLiteIntArrayCreate(
int size
)
Create an array of a given size (uninitialized entries).
This returns a pointer that you must free using TfLiteIntArrayFree().
int TfLiteIntArrayEqual(
const[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)*a,
const[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)*b
)
Check if two intarrays are equal. Returns 1 if they are equal, 0 otherwise.
int TfLiteIntArrayEqualsArray(
const[TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)*a,
int b_size,
const int b_data[]
)
Check if an intarray equals an array. Returns 1 if equals, 0 otherwise.
void TfLiteIntArrayFree([TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)*a
)
Free memory of array a.
size_t TfLiteIntArrayGetSizeInBytes(
int size
)
Given the size (number of elements) in a TfLiteIntArray, calculate its size in bytes.
[TfLiteOpaqueDelegate](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gac2bc3e65b2b4dfe997134c006faa442f)* TfLiteOpaqueDelegateCreate(
const [TfLiteOpaqueDelegateBuilder](/lite/api_docs/c/struct/tf-lite-opaque-delegate-builder.html#struct_tf_lite_opaque_delegate_builder)* opaque_delegate_builder
)
Creates an opaque delegate and returns its address.
The opaque delegate will behave according to the provided opaque_delegate_builder. The lifetime of the objects pointed to by any of the fields within the opaque_delegate_builder must outlive the returned TfLiteOpaqueDelegate and any TfLiteInterpreter, TfLiteInterpreterOptions, tflite::Interpreter, or tflite::InterpreterBuilder that the delegate is added to. The returned address should be passed to TfLiteOpaqueDelegateDelete for deletion. If opaque_delegate_builder is a null pointer, then a null pointer will be returned.
void TfLiteOpaqueDelegateDelete([TfLiteOpaqueDelegate](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gac2bc3e65b2b4dfe997134c006faa442f)* delegate
)
Deletes the provided opaque delegate.
This function has no effect if the delegate is a null pointer.
void * TfLiteOpaqueDelegateGetData(
const [TfLiteOpaqueDelegate](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gac2bc3e65b2b4dfe997134c006faa442f)* delegate
)
Returns a pointer to the data associated with the provided opaque delegate.
A null pointer will be returned when:
- delegate is null.
- the data field of the TfLiteOpaqueDelegateBuilder used to construct the delegate was null.
The data_ field of delegate will be returned if the opaque_delegate_builder field is null.
void TfLiteQuantizationFree([TfLiteQuantization](/lite/api_docs/c/struct/tf-lite-quantization.html#struct_tf_lite_quantization)* quantization
)
Free quantization data.
void TfLiteSparsityFree([TfLiteSparsity](/lite/api_docs/c/struct/tf-lite-sparsity.html#struct_tf_lite_sparsity)*sparsity
)
Free sparsity parameters.
[TfLiteStatus](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gacf79d2fb5fa520303014d1303f1f6361) TfLiteTensorCopy(
const [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)* src, [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)* dst
)
Copies the contents of src into dst.
The function does nothing and returns kTfLiteOk if either src or dst is a nullptr, and returns kTfLiteError if src and dst do not have matching data sizes. Note that the function copies contents, so it won't create a new data pointer or change the allocation type. All tensor-related properties (quantization, sparsity, ...) will be copied from src to dst.
void TfLiteTensorDataFree([TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Free data memory of tensor t.
void TfLiteTensorFree([TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Free memory of tensor t.
[TfLiteAllocationStrategy](/lite/api_docs/c/group/common.html#group__common_1gae00888e38fbedcb5130cf359c574580a)TfLiteTensorGetAllocationStrategy(
const[TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Returns a tensor data allocation strategy.
[TfLiteRunStability](/lite/api_docs/c/group/common.html#group__common_1ga8e48d3a995a7dea060434068920c5b23)TfLiteTensorGetBufferAddressStability(
const[TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Returns how stable a tensor data buffer address is across runs.
[TfLiteRunStep](/lite/api_docs/c/group/common.html#group__common_1gaa07ed5a55fa2bff442239a17b6c371d9)TfLiteTensorGetDataKnownStep(
const[TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Returns the operation step when the data of a tensor is populated.
Some operations can precompute their results before the evaluation step. This makes the data available earlier for subsequent operations.
[TfLiteRunStability](/lite/api_docs/c/group/common.html#group__common_1ga8e48d3a995a7dea060434068920c5b23)TfLiteTensorGetDataStability(
const[TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Returns how stable a tensor's data values are across runs.
[TfLiteRunStep](/lite/api_docs/c/group/common.html#group__common_1gaa07ed5a55fa2bff442239a17b6c371d9)TfLiteTensorGetShapeKnownStep(
const[TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)*t
)
Returns the operation step when the shape of a tensor is computed.
Some operations can precompute the shape of their results before the evaluation step. This makes the shape available earlier for subsequent operations.
[TfLiteStatus](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gacf79d2fb5fa520303014d1303f1f6361) TfLiteTensorRealloc(
size_t num_bytes, [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)* tensor
)
Change the size of the memory block owned by tensor to num_bytes.
Tensors with allocation types other than kTfLiteDynamic are ignored and kTfLiteOk is returned. If num_bytes is zero, the tensor's internal data buffer will be assigned a pointer that can safely be passed to free or realloc. Tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. In the case of a NULL tensor, or an error allocating new memory, kTfLiteError is returned.
void TfLiteTensorReset([TfLiteType](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1ga8a47ba81bdef28b5c479ee7928a7d123) type,
const char *name, [TfLiteIntArray](/lite/api_docs/c/struct/tf-lite-int-array.html#struct_tf_lite_int_array)* dims, [TfLiteQuantizationParams](/lite/api_docs/c/struct/tf-lite-quantization-params.html#struct_tf_lite_quantization_params) quantization,
char *buffer,
size_t size, [TfLiteAllocationType](/lite/api_docs/c/group/common.html#group__common_1gae48332e93fec6c3fe7c4ee4897770d4b) allocation_type,
const void *allocation,
bool is_variable, [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)* tensor
)
Set all of a tensor's fields (and free any previously allocated data).
[TfLiteStatus](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1gacf79d2fb5fa520303014d1303f1f6361) TfLiteTensorResizeMaybeCopy(
size_t num_bytes, [TfLiteTensor](/lite/api_docs/c/struct/tf-lite-tensor.html#struct_tf_lite_tensor)* tensor,
bool preserve_data
)
Change the size of the memory block owned by tensor to num_bytes.
Tensors with allocation types other than kTfLiteDynamic are ignored and kTfLiteOk is returned. If num_bytes is zero, the tensor's internal data buffer will be assigned a pointer that can safely be passed to free or realloc. If preserve_data is true, tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. In the case of a NULL tensor, or an error allocating new memory, kTfLiteError is returned.
const char * TfLiteTypeGetName([TfLiteType](/lite/api_docs/c/group/c-api-types.html#group__c__api__types_1ga8a47ba81bdef28b5c479ee7928a7d123) type
)
Return the name of a given type, for error reporting purposes.