Loads into the provided '*inputs' pointer the starting address of an array of indices representing the tensors that are inputs to the subgraph that is associated with the provided 'opaque_context'.
Loads into the provided '*outputs' pointer the starting address of an array of indices representing the tensors that are outputs to the subgraph that is associated with the provided 'opaque_context'.
Loads into the provided '*variables' pointer the starting address of an array of indices representing the tensors that are variables to the subgraph that is associated with the provided 'opaque_context'.
Reports an error message formed by using the provided 'format' string in combination with the data provided via the unnamed arguments following the 'format' parameter ('...').
Resizes the provided 'tensor' that is associated with the provided 'context' so that the 'tensor's shape matches the dimensionality specified via the provided 'new_size' array.
Given an 'index_of_input', which must be in the range of [0, N), where N is the number of input tensors of the provided 'opaque_node', returns the (global) index of the tensor that holds the input.
Given an 'index_of_output', which must be in the range of [0, N), where N is the number of output tensors of the provided 'opaque_node', returns the (global) index of the tensor that holds the output.
Loads into the provided '*inputs' pointer the starting address of an array of indices representing the tensors that are inputs of the provided 'opaque_node'.
Loads into the provided '*outputs' pointer the starting address of an array of indices representing the tensors that are outputs of the provided 'opaque_node'.
Loads into the provided '*temporaries' pointer the starting address of an array of indices representing the temporary tensors associated with the provided 'opaque_node'.
Retrieves the corresponding TfLiteOpaqueContext of a subgraph given a subgraph index and switches to the delegate context for this subgraph. If an invalid subgraph index is given, then returns kTfLiteError.
-
NOTE: This function is expected to be paired with TfLiteOpaqueContextReleaseSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
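-
For illustration, a minimal sketch of the acquire/release pairing inside a delegate's preparation step; 'subgraph_index' is assumed to come from the delegate's own bookkeeping:
-
TfLiteOpaqueContext* subgraph_context = NULL;
if (TfLiteOpaqueContextAcquireSubgraphContext(
        opaque_context, subgraph_index, &subgraph_context) != kTfLiteOk) {
  return kTfLiteError;
}
// ... inspect or prepare the subgraph through 'subgraph_context' ...
TfLiteOpaqueContextReleaseSubgraphContext(opaque_context, subgraph_index);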
Adds an additional tensor and configures its properties based on the provided 'builder', preserving pre-existing Tensor entries.
-
If non-null, the value pointed to by 'new_tensor_index' will be set to the index of the new tensor. Returns 'kTfLiteOk' when the tensor has been added successfully. Returns 'kTfLiteError' in case of failure.
Loads the provided execution_plan associated with the provided opaque_context.
-
Returns kTfLiteOk if the execution_plan was successfully loaded. A return value different from kTfLiteOk indicates a failure and the execution_plan will be left in an unspecified state.
-
TfLiteOpaqueContextGetInputs
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueContextGetInputs(
    const struct TfLiteOpaqueContext *opaque_context,
    const int **inputs,
    int *num_inputs
)
-
Loads into the provided '*inputs' pointer the starting address of an array of indices representing the tensors that are inputs to the subgraph that is associated with the provided 'opaque_context'.
-
The length of the array is loaded into the provided 'num_inputs' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'inputs' and 'num_inputs' in an unspecified state. Calls to 'SetInputs' on the associated subgraph invalidate the loaded pointers.
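-
For illustration, a sketch of reading the subgraph's input indices (error handling condensed); 'TfLiteOpaqueContextGetOutputs' and 'TfLiteOpaqueContextGetVariables' below follow the same pattern:
-
const int* inputs = NULL;
int num_inputs = 0;
if (TfLiteOpaqueContextGetInputs(opaque_context, &inputs, &num_inputs) == kTfLiteOk) {
  for (int i = 0; i < num_inputs; ++i) {
    // inputs[i] is the global index of the i-th subgraph input tensor.
  }
}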
Given the specified 'opaque_context' and 'node_index', loads the caller's opaque '*node' and '*registration_external' pointers.
-
Return 'kTfLiteOk' if both the '*node' as well as the '*registration_external' have been loaded correctly. Any other return code indicates a failure and both '*node' as well as '*registration_external' will be in an unspecified state.
-
A caller can obtain a node's index by calling 'TfLiteOpaqueContextGetExecutionPlan', which provides an array of node indices, sorted in execution order. A node index might also come from the data structures passed to the delegate kernel's callback parameters, like the delegate parameters data structure passed to the 'init' callback that contains an array of node indices that are meant to be handled by the delegate kernel.
-
This function is expected to be called from within a delegate callback, like 'Prepare', or a delegate kernel callback (i.e., a callback registered with a 'TfLiteRegistrationExternal' object).
-
The loaded '*node' and '*registration_external' pointers will generally remain valid for the lifetime of the associated 'opaque_context', but can be invalidated through API calls where delegates get un-applied, like API calls that modify the model graph via a delegate, or if input tensors get re-sized.
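-
As a sketch, a delegate 'Prepare' callback might walk the execution plan as described above; the accessor is assumed to be 'TfLiteOpaqueContextGetNodeAndRegistration', and status checks are abbreviated:
-
TfLiteIntArray* execution_plan = NULL;
if (TfLiteOpaqueContextGetExecutionPlan(opaque_context, &execution_plan) != kTfLiteOk) {
  return kTfLiteError;
}
for (int i = 0; i < execution_plan->size; ++i) {
  TfLiteOpaqueNode* node = NULL;
  TfLiteRegistrationExternal* registration_external = NULL;
  TfLiteOpaqueContextGetNodeAndRegistration(
      opaque_context, execution_plan->data[i], &node, &registration_external);
  // Decide here whether the delegate can handle this node.
}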
Loads metadata of a TF Lite node's custom initialization data.
-
Specifically:
-
Loads into the supplied 'fd' the file descriptor of the file that stores the 'node's custom initialization data. This output parameter will be loaded if the TF Lite runtime has access to the file descriptor, though this is not always the case, e.g. if a client provides a tflite::Model directly to the TF Lite runtime. If 'fd' can be loaded then 'kTfLiteOk' will be returned, otherwise 'kTfLiteError' is returned.
-
Loads into the supplied 'custom_initial_data_offset_in_file' pointer the offset of the 'node's custom init data in the file associated with 'fd'. This output parameter will be set to -1 if the 'node' does not have custom init data set.
-
Loads into the supplied 'custom_initial_data_size' the size of the custom initialization data. This output parameter will be set to -1 if the 'node' does not have custom init data set.
-
-
Returns 'kTfLiteOk' when 'fd' has been loaded successfully and 'kTfLiteError' otherwise. Note that this means that 'kTfLiteOk' can be returned, even if the 'node' does not have custom init data set.
Returns modifiable access to the opaque tensor that corresponds to the specified index and is associated with the provided opaque_context.
-
This requires the index to be between 0 and N - 1, where N is the number of tensors in the model.
-
Typically the tensors associated with the context would be set during the initialization of the interpreter that the context belongs to, through a mechanism like the InterpreterBuilder, and remain unchanged throughout the lifetime of the interpreter. However, there are some circumstances in which the pointer may not remain valid throughout the lifetime of the interpreter, because calls to AddTensors on the interpreter invalidate the returned pointer.
-
The ownership of the tensor remains with the TFLite runtime, meaning the caller should not deallocate the pointer.
-
TfLiteOpaqueContextGetOutputs
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueContextGetOutputs(
    const struct TfLiteOpaqueContext *opaque_context,
    const int **outputs,
    int *num_outputs
)
-
Loads into the provided '*outputs' pointer the starting address of an array of indices representing the tensors that are outputs to the subgraph that is associated with the provided 'opaque_context'.
-
The length of the array is loaded into the provided 'num_outputs' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'outputs' and 'num_outputs' in an unspecified state. Calls to 'SetOutputs' on the associated subgraph invalidate the loaded pointers.
Populates 'bytes' with the size in bytes of the provided 'type'.
-
Returns 'kTfLiteOk' for valid types, and 'kTfLiteError' otherwise.
-
TfLiteOpaqueContextGetVariables
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueContextGetVariables(
    const struct TfLiteOpaqueContext *opaque_context,
    const int **variables,
    int *num_variables
)
-
Loads into the provided '*variables' pointer the starting address of an array of indices representing the tensors that are variables to the subgraph that is associated with the provided 'opaque_context'.
-
The length of the array is loaded into the provided 'num_variables' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'variables' and 'num_variables' in an unspecified state. Calls to 'SetVariables' on the associated subgraph invalidate the loaded pointers.
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueContextMarkSubgraphAsDelegationSkippable(
    TfLiteOpaqueContext *opaque_context,
    int subgraph_index
)
-
Entry point for C API MarkSubgraphAsDelegationSkippable.
-
Marks the subgraph with the given index as "delegation-skippable". Returns kTfLiteOk if the given subgraph index is valid and is successfully marked as delegation-skippable, and an error status if the subgraph index is invalid. If a subgraph is delegation-skippable, then the subgraph will be handled by a specific TfLiteOpaqueDelegate that is already supposed to be aware of this condition, and therefore, TfLiteInterpreter can skip invoking ModifyGraphWithDelegate on this subgraph.
-
NOTE: This function is expected to be called only when the subgraph that subgraph_index is pointing to should be skipped by interpreter::ModifyGraphWithDelegate (e.g. the subgraph is part of the list of callee subgraphs of the same control flow node, and all of those callees are supported by the same delegate at once).
-
For example, this function can be used when the delegate is handling control flow ops such as while ops. For instance, a while op has a condition subgraph indexed at i and a body subgraph indexed at j. The op can be delegated when the following conditions hold:
-
- The delegate supports the while op.
-
- Both condition subgraph i and body subgraph j can be fully delegated to the delegate.
-
Then if the delegate decides to support the while node along with both body and condition subgraphs, it should mark subgraphs i and j skippable so that those two subgraphs won't be delegated to another delegate.
-
WARNING: It is the delegate's responsibility to define when to skip Subgraph::ModifyGraphWithDelegate, to check for any edge cases (i.e. multiple references to the subgraph that subgraph_index is pointing to), and to mark a subgraph as skippable by using this function.
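-
A hedged sketch of the while-op case above, assuming the delegate has already read 'cond_subgraph_index' and 'body_subgraph_index' from the node's builtin data:
-
if (TfLiteOpaqueContextMarkSubgraphAsDelegationSkippable(
        opaque_context, cond_subgraph_index) != kTfLiteOk ||
    TfLiteOpaqueContextMarkSubgraphAsDelegationSkippable(
        opaque_context, body_subgraph_index) != kTfLiteOk) {
  return kTfLiteError;
}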
Releases the corresponding TfLiteOpaqueContext by switching back to the TFLite kernel context for this specified subgraph.
-
NOTE: This function is expected to be used after TfLiteOpaqueContextAcquireSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
Entry point for C API ReplaceNodeSubsetsWithDelegateKernels.
-
Replaces the specified nodes_to_replace that are associated with the provided opaque_context with delegate kernels. The provided registration_external represents the delegate kernel and will be used for each node subset that will be delegated to the provided opaque_delegate.
-
The TF Lite runtime will take ownership of the registration_external and will delete it when the associated opaque_context gets destroyed.
-
The ownership of the nodes_to_replace and the opaque_delegate remains with the caller.
Reports an error message formed by using the provided 'format' string in combination with the data provided via the unnamed arguments following the 'format' parameter ('...').
-
The intended usage and behavior is the same as with 'printf' with regards to how the data and the formatting string interact. E.g. 'TfLiteOpaqueContextReportError(opaque_context, "a=%d b=%d", a, b);'
-
The provided 'opaque_context' will be used for reporting the resulting error message.
-
Note that TF Lite clients can use macros like 'TF_LITE_OPAQUE_ENSURE' to check for certain conditions to be true, and print an error message if the condition does not hold. Direct usage of this function from application code should therefore be rare.
Same as TfLiteOpaqueContextReportError, but with the variable arguments passed via a va_list instead of directly.
-
Callers that receive an ellipsis and want to forward it to the opaque context error reporting API can add the ellipsis content to a va_list and then call TfLiteOpaqueContextReportErrorVa. E.g.:
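-
(A sketch; 'MyErrorReporter' is a hypothetical caller-defined wrapper.)
-
#include <stdarg.h>
-
void MyErrorReporter(TfLiteOpaqueContext* context, const char* format, ...) {
  va_list args;
  va_start(args, format);
  TfLiteOpaqueContextReportErrorVa(context, format, args);
  va_end(args);
}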
Resizes the provided 'tensor' that is associated with the provided 'context' so that the 'tensor's shape matches the dimensionality specified via the provided 'new_size' array.
-
Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave the 'tensor' in an unspecified state. The TF Lite runtime takes ownership of the 'new_size' array, even in case of failure.
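-
A sketch of a typical call, assuming this entry's function is 'TfLiteOpaqueContextResizeTensor' and using TfLiteIntArrayCreate to build the new shape (the dimensions are illustrative):
-
TfLiteIntArray* new_size = TfLiteIntArrayCreate(2);
new_size->data[0] = 1;
new_size->data[1] = 4;
// The runtime takes ownership of 'new_size', even if the resize fails.
TfLiteStatus status = TfLiteOpaqueContextResizeTensor(context, tensor, new_size);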
Returns the builtin data associated with the provided 'opaque_node'.
-
The builtin init data associated with a node would typically be set during the creation of the associated interpreter, through a mechanism like the interpreter builder that loads a TFLite model and initialises the interpreter's nodes accordingly. Under these conditions the returned address remains valid throughout the lifetime of the 'opaque_node'.
Loads into the provided '*init_data' pointer the address of the custom init data associated with the provided 'opaque_node'.
-
The length of data is loaded into the provided 'size' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'init_data' and 'size' in an unspecified state.
-
The custom init data associated with a node would typically be set during the creation of the associated interpreter, through a mechanism like the interpreter builder that loads a TFLite model and initialises the interpreter's nodes accordingly. Under these conditions the returned address remains valid throughout the lifetime of the 'opaque_node'.
TFL_CAPI_EXPORT int TfLiteOpaqueNodeGetInputTensorIndex(
    const TfLiteOpaqueNode *opaque_node,
    int index_of_input
)
-
Given an 'index_of_input', which must be in the range of [0, N), where N is the number of input tensors of the provided 'opaque_node', returns the (global) index of the tensor that holds the input.
-
Returns -1 if 'index_of_input' is not within the [0, N) range.
TFL_CAPI_EXPORT int TfLiteOpaqueNodeGetOutputTensorIndex(
    const TfLiteOpaqueNode *opaque_node,
    int index_of_output
)
-
Given an 'index_of_output', which must be in the range of [0, N), where N is the number of output tensors of the provided 'opaque_node', returns the (global) index of the tensor that holds the output.
-
Returns -1 if 'index_of_output' is not within the [0, N) range.
Returns opaque data provided by the node implementer.
-
The value returned from this function is the value that was returned from the init callback that was passed to TfLiteRegistrationExternalSetInit.
-
TfLiteOpaqueNodeInputs
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueNodeInputs(
    const TfLiteOpaqueNode *opaque_node,
    const int **inputs,
    int *num_inputs
)
-
Loads into the provided '*inputs' pointer the starting address of an array of indices representing the tensors that are inputs of the provided 'opaque_node'.
-
The length of the array is loaded into the provided 'num_inputs' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'inputs' and 'num_inputs' in an unspecified state.
-
The input tensors associated with a node would typically be set during the creation of the associated interpreter, through a mechanism like the interpreter builder that loads a TFLite model and initialises the interpreter's nodes accordingly. Under these conditions the loaded address remains valid throughout the lifetime of the 'opaque_node'.
-
TfLiteOpaqueNodeNumberOfInputs
-
TFL_CAPI_EXPORT int TfLiteOpaqueNodeNumberOfInputs(
    const TfLiteOpaqueNode *opaque_node
)
-
Gets the number of input tensors of the provided 'opaque_node'.
-
TfLiteOpaqueNodeNumberOfOutputs
-
TFL_CAPI_EXPORT int TfLiteOpaqueNodeNumberOfOutputs(
    const TfLiteOpaqueNode *opaque_node
)
-
Gets the number of output tensors of the provided 'opaque_node'.
-
TfLiteOpaqueNodeOutputs
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueNodeOutputs(
    const TfLiteOpaqueNode *opaque_node,
    const int **outputs,
    int *num_outputs
)
-
Loads into the provided '*outputs' pointer the starting address of an array of indices representing the tensors that are outputs of the provided 'opaque_node'.
-
The length of the array is loaded into the provided 'num_outputs' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'outputs' and 'num_outputs' in an unspecified state.
-
The output tensors associated with a node would typically be set during the creation of the associated interpreter, through a mechanism like the interpreter builder that loads a TFLite model and initialises the interpreter's nodes accordingly. Under these conditions the loaded address remains valid throughout the lifetime of the 'opaque_node'.
-
TfLiteOpaqueNodeTemporaries
-
TFL_CAPI_EXPORT TfLiteStatus TfLiteOpaqueNodeTemporaries(
    const TfLiteOpaqueNode *opaque_node,
    const int **temporaries,
    int *num_temporaries
)
-
Loads into the provided '*temporaries' pointer the starting address of an array of indices representing the temporary tensors associated with the provided 'opaque_node'.
-
The length of the array is loaded into the provided 'num_temporaries' pointer. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure and will leave 'temporaries' and 'num_temporaries' in an unspecified state.
-
The temporary tensors associated with a node would typically be set during the creation of the associated interpreter, through a mechanism like the interpreter builder that loads a TFLite model and initialises the interpreter's nodes accordingly. Under these conditions the loaded address remains valid throughout the lifetime of the 'opaque_node'.
Sets the allocation type of the provided 'builder' to the provided 'allocation_type'.
-
The 'allocation_type' must be one of the following: 'kTfLiteDynamic', 'kTfLiteArenaRw' or 'kTfLiteArenaRwPersistent'. If the provided 'allocation_type' is not one of those values then 'TfLiteOpaqueContextAddTensor' will return an error. Returns the address of the provided 'builder', so that builder calls can be chained together.
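-
A hedged sketch of adding a dynamically allocated float tensor via this builder API; the builder lifecycle functions (Create/SetType/Delete) shown here are assumed from the opaque tensor builder interface:
-
TfLiteOpaqueTensorBuilder* builder = TfLiteOpaqueTensorBuilderCreate();
TfLiteOpaqueTensorBuilderSetType(builder, kTfLiteFloat32);
TfLiteOpaqueTensorBuilderSetAllocationType(builder, kTfLiteDynamic);
int new_tensor_index = -1;
TfLiteStatus status =
    TfLiteOpaqueContextAddTensor(context, builder, &new_tensor_index);
TfLiteOpaqueTensorBuilderDelete(builder);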
Loads into the provided 'num_dims' the number of dimensions that the tensor's signature has.
-
Returns 'kTfLiteOk' if 'num_dims' was successfully loaded. Any other return code indicates an error and 'num_dims' won't be loaded.
-
A tensor's dimension signature encodes shapes with unknown dimensions with -1. E.g. for a tensor with three dimensions, whose first dimension has an unknown size, and the second and third dimension have a size of 2, the dimension signature is [-1,2,2], and 'TfLiteOpaqueTensorGetNumDimsSignature' loads 3 into 'num_dims'. If the tensor does not have its dimension signature field set then 'num_dims' is set to -1.
Returns the operation step when the shape of a tensor is computed.
-
TfLiteOpaqueTensorGetString
-
TfLiteStatus TfLiteOpaqueTensorGetString(
    const TfLiteOpaqueTensor *tensor,
    int index,
    const char **str,
    int *len
)
-
Stores the address of the n-th (denoted by the provided 'index') string contained in the provided 'tensor' in the provided '*str' pointer.
-
Stores the length of the string in the provided '*len' argument.
-
Returns 'kTfLiteOk' if '*str' and '*len' have been set successfully. Any other return value indicates a failure, which leaves '*str' and '*len' in an unspecified state.
-
The range of valid indices is defined by the half open interval [0, N), where N == TfLiteOpaqueTensorGetStringCount(tensor).
-
Note that 'str' is not guaranteed to be null-terminated. Also note that this function will not create a copy of the underlying string data. The data is owned by the 'tensor'.
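-
A sketch that iterates all strings in a string tensor using this function together with TfLiteOpaqueTensorGetStringCount (documented below); printing assumes <stdio.h>:
-
int count = TfLiteOpaqueTensorGetStringCount(tensor);
for (int i = 0; i < count; ++i) {
  const char* str = NULL;
  int len = 0;
  if (TfLiteOpaqueTensorGetString(tensor, i, &str, &len) == kTfLiteOk) {
    // 'str' points at 'len' bytes owned by the tensor; it is not
    // necessarily null-terminated.
    printf("%.*s\n", len, str);
  }
}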
-
TfLiteOpaqueTensorGetStringCount
-
int TfLiteOpaqueTensorGetStringCount(
    const TfLiteOpaqueTensor *tensor
)
-
Returns the number of strings stored in the provided 'tensor'.
-
Returns -1 in case of failure.
-
TfLiteOpaqueTensorIsVariable
-
TFL_CAPI_EXPORT int TfLiteOpaqueTensorIsVariable(
    const TfLiteOpaqueTensor *opaque_tensor
)
-
Returns 'non-zero' if the provided 'opaque_tensor' is a variable, and returns zero otherwise.
Writes the string pointed to by the provided 'str' pointer of length 'len' into the provided 'tensor'.
-
The string provided via 'str' is copied into the 'tensor'. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure.
-
Note that calling 'TfLiteOpaqueTensorWriteString' deallocates any previously stored data in the 'tensor'. E.g. suppose 't' denotes a 'TfLiteOpaqueTensor*', then calling 'TfLiteOpaqueTensorWriteString(t, "AB", 2)' followed by a call to 'TfLiteOpaqueTensorWriteString(t, "CD", 2)' will lead to 't' containing 'CD', not 'ABCD'.
-
'TfLiteOpaqueTensorWriteString' is a convenience function for the use case of writing a single string to a tensor and its effects are identical to calling 'TfLiteOpaqueTensorWriteStrings' with an array of a single string.
-
TfLiteOpaqueTensorWriteStrings
-
TfLiteStatus TfLiteOpaqueTensorWriteStrings(
    TfLiteOpaqueTensor *tensor,
    const char *const *str_array,
    int str_array_len,
    const int *str_n_len
)
-
Writes the array of strings specified by 'str_array' into the specified 'tensor'.
-
The strings provided via the 'str_array' are being copied into the 'tensor'. Returns 'kTfLiteOk' in case of success. Any other return value indicates a failure.
-
The provided 'str_array_len' must denote the length of 'str_array' and 'str_n_len[i]' must denote the length of the i-th string.
-
The provided strings don't need to be null terminated and may contain embedded null characters. The amount of bytes copied into the 'tensor' is entirely determined by 'str_n_len[i]' and it is the caller's responsibility to set this value correctly to avoid undefined behavior.
-
Also note that calling 'TfLiteOpaqueTensorWriteStrings' deallocates any previously stored data in the 'tensor'.
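-
For example (a sketch; the strings and lengths are illustrative):
-
const char* strs[] = {"AB", "CDE"};
const int lens[] = {2, 3};
// Copies both strings into 'tensor', replacing any previous contents.
TfLiteStatus status = TfLiteOpaqueTensorWriteStrings(tensor, strs, 2, lens);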
This file declares types used by the pure C inference API defined in c_api.h, some of which are also used in the C++ and C kernel and interpreter APIs.
Note that new error status values may be added in future in order to indicate more fine-grained internal states, therefore, applications should not rely on status values being members of the enum.
-
Properties
-
- kTfLiteApplicationError: Generally referring to an error in applying a delegate due to incompatibility between runtime and delegate, e.g., this error is returned when trying to apply a TF Lite delegate onto a model graph that's already immutable.
-
- kTfLiteCancelled: Generally referring to invocation cancelled by the user. See interpreter::Cancel.
-
- kTfLiteDelegateDataNotFound: Generally referring to serialized delegate data not being found. See tflite::delegates::Serialization.
-
- kTfLiteDelegateDataReadError: Generally referring to data-reading issues in delegate serialization. See tflite::delegates::Serialization.
-
- kTfLiteDelegateDataWriteError: Generally referring to data-writing issues in delegate serialization. See tflite::delegates::Serialization.
-
- kTfLiteDelegateError: Generally referring to an error from a TfLiteDelegate itself.
-
- kTfLiteError: Generally referring to an error in the runtime (i.e. interpreter).
-
- kTfLiteOk: Success.
-
- kTfLiteUnresolvedOps: Generally referring to issues when the TF Lite model has ops that cannot be resolved at runtime. This could happen when the specific op is not registered or built with the TF Lite framework.
TfLiteOpaqueDelegateStruct: unconditionally opaque version of TfLiteDelegate; allows delegation of nodes to alternative backends.
-
This is an abstract type that is intended to have the same role as TfLiteDelegate, but without exposing the implementation details of how delegates are implemented.
-
WARNING: This is an experimental type and subject to change.
Will be deprecated in favor of TfLiteAffineQuantization. If per-layer quantization is specified this field will still be populated in addition to TfLiteAffineQuantization. Parameters for asymmetric quantization. Quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)
Note that new error status values may be added in future in order to indicate more fine-grained internal states, therefore, applications should not rely on status values being members of the enum.
The API leans towards simplicity and uniformity instead of convenience, as most usage will be by language-specific wrappers. It provides largely the same set of functionality as that of the C++ TensorFlow Lite Interpreter API, but is useful for shared libraries where having a stable ABI boundary is important.
-
Conventions:
-
We use the prefix TfLite for everything in the API.
-
size_t is used to represent byte sizes of objects that are materialized in the address space of the calling process.
-
int is used as an index into arrays.
-
-
Usage:
-
// Create the model and interpreter options.
TfLiteModel* model = TfLiteModelCreateFromFile("/path/to/model.tflite");
TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
TfLiteInterpreterOptionsSetNumThreads(options, 2);
-
// Create the interpreter.
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
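-
// The middle steps were elided in the original snippet; the following is an
// illustrative sketch (buffer sizes depend on the model).
// Allocate tensors and populate the input tensor data.
TfLiteInterpreterAllocateTensors(interpreter);
TfLiteTensor* input_tensor = TfLiteInterpreterGetInputTensor(interpreter, 0);
float input[4] = {1.f, 2.f, 3.f, 4.f};
TfLiteTensorCopyFromBuffer(input_tensor, input, sizeof(input));
-
// Run inference.
TfLiteInterpreterInvoke(interpreter);
-
// Read the output tensor data.
const TfLiteTensor* output_tensor =
    TfLiteInterpreterGetOutputTensor(interpreter, 0);
float output[4];
TfLiteTensorCopyToBuffer(output_tensor, output, sizeof(output));
-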
// Dispose of the model and interpreter objects.
TfLiteInterpreterDelete(interpreter);
TfLiteInterpreterOptionsDelete(options);
TfLiteModelDelete(model);
-
Returns a pointer to a statically allocated string that is the version number of the TF Lite Extension APIs supported by the (potentially dynamically loaded) TF Lite Runtime library. The TF Lite "Extension APIs" are the APIs for extending TF Lite with custom ops and delegates. More specifically, this version number covers the (non-experimental) functionality documented in the following header files:
-
- lite/c/c_api_opaque.h
- lite/c/common.h
- lite/c/builtin_op_data.h
- lite/builtin_ops.h
-
This version number uses semantic versioning, and the return value should be in semver 2 format http://semver.org, starting with MAJOR.MINOR.PATCH, e.g. "2.14.0" or "2.15.0-rc2".
Returns a new interpreter using the provided model and options, or null on failure.
-
model must be a valid model instance. The caller retains ownership of the object, and may destroy it (via TfLiteModelDelete) immediately after creating the interpreter. However, if the TfLiteModel was allocated with TfLiteModelCreate, then the model_data buffer that was passed to TfLiteModelCreate must outlive the lifetime of the TfLiteInterpreter object that this function returns, and must not be modified during that time; and if the TfLiteModel was allocated with TfLiteModelCreateFromFile, then the contents of the model file must not be modified during the lifetime of the TfLiteInterpreter object that this function returns.
-
optional_options may be null. The caller retains ownership of the object, and can safely destroy it (via TfLiteInterpreterOptionsDelete) immediately after creating the interpreter.
(i) (recommended) using the Interpreter to initialize SignatureRunner(s) and then only using SignatureRunner APIs.
-
(ii) only using Interpreter APIs.
-
NOTE:
-
Only use one of the above options to run inference, i.e. avoid mixing both SignatureRunner APIs and Interpreter APIs to run inference as they share the same underlying data (e.g. updating an input tensor “A” retrieved using the Interpreter APIs will update the state of the input tensor “B” retrieved using SignatureRunner APIs, if they point to the same underlying tensor in the model; as it is not possible for a user to debug this by analyzing the code, it can lead to undesirable behavior).
-
The TfLiteSignatureRunner type is conditionally thread-safe, provided that no two threads attempt to simultaneously access two TfLiteSignatureRunner instances that point to the same underlying signature, or access a TfLiteSignatureRunner and its underlying TfLiteInterpreter, unless all such simultaneous accesses are reads (rather than writes).
-
The lifetime of a TfLiteSignatureRunner object ends when TfLiteSignatureRunnerDelete() is called on it (or when the lifetime of the underlying TfLiteInterpreter ends but you should call TfLiteSignatureRunnerDelete() before that happens in order to avoid resource leaks).
-
You can only apply delegates to the interpreter (via TfLiteInterpreterOptions) and not to a signature.
-
Returns the number of signatures defined in the model.
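-
A hedged sketch of option (i) above; the signature key ("serving_default") and the tensor names ("x", "y") are illustrative and depend entirely on the model:
-
TfLiteSignatureRunner* runner =
    TfLiteInterpreterGetSignatureRunner(interpreter, "serving_default");
TfLiteSignatureRunnerAllocateTensors(runner);
TfLiteTensor* input = TfLiteSignatureRunnerGetInputTensor(runner, "x");
// ... populate 'input' via TfLiteTensorCopyFromBuffer ...
TfLiteSignatureRunnerInvoke(runner);
const TfLiteTensor* output = TfLiteSignatureRunnerGetOutputTensor(runner, "y");
TfLiteSignatureRunnerDelete(runner);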
Returns modifiable access to the tensor that corresponds to the specified index and is associated with the provided interpreter.
-
This requires the index to be between 0 and N - 1, where N is the number of tensors in the model.
-
Typically the tensors associated with the interpreter would be set during the interpreter initialization, through a mechanism like the InterpreterBuilder, and remain unchanged throughout the lifetime of the interpreter. However, there are some circumstances in which the pointer may not remain valid throughout the lifetime of the interpreter, because calls to AddTensors on the interpreter invalidate the returned pointer.
-
Note the difference between this function and TfLiteInterpreterGetInputTensor (or TfLiteInterpreterGetOutputTensor for that matter): TfLiteInterpreterGetTensor takes an index into the array of all tensors associated with the interpreter's model, whereas TfLiteInterpreterGetInputTensor takes an index into the array of input tensors.
-
The ownership of the tensor remains with the TFLite runtime, meaning the caller should not deallocate the pointer.
-
TfLiteInterpreterInputTensorIndices
-
TFL_CAPI_EXPORT const int * TfLiteInterpreterInputTensorIndices(
    const TfLiteInterpreter *interpreter
)
-
Returns a pointer to an array of input tensor indices.
-
The length of the array can be obtained via a call to TfLiteInterpreterGetInputTensorCount.
-
Typically the input tensors associated with an interpreter would be set during the initialization of the interpreter, through a mechanism like the InterpreterBuilder, and remain unchanged throughout the lifetime of the interpreter. However, there are some circumstances in which the pointer may not remain valid throughout the lifetime of the interpreter, because calls to SetInputs on the interpreter invalidate the returned pointer.
-
The ownership of the array remains with the TFLite runtime.
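-
A sketch combining this function with the all-tensors accessor documented above (assumed to be TfLiteInterpreterGetTensor) to visit every input tensor:
-
const int* input_indices = TfLiteInterpreterInputTensorIndices(interpreter);
int input_count = TfLiteInterpreterGetInputTensorCount(interpreter);
for (int i = 0; i < input_count; ++i) {
  TfLiteTensor* t = TfLiteInterpreterGetTensor(interpreter, input_indices[i]);
  // 't' is owned by the runtime; do not free it.
}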
Before calling this function, the caller should first invoke TfLiteInterpreterAllocateTensors() and should also set the values for the input tensors. After successfully calling this function, the values for the output tensors will be set.
-
If the (experimental!) delegate fallback option was enabled in the interpreter options, then the interpreter will automatically fall back to not using any delegates if execution with delegates fails. For details, see TfLiteInterpreterOptionsSetEnableDelegateFallback in c_api_experimental.h.
-
Returns one of the following status codes:
-
kTfLiteOk: Success. Output is valid.
-
kTfLiteDelegateError: Execution with delegates failed, due to a problem with the delegate(s). If fallback was not enabled, output is invalid. If fallback was enabled, this return value indicates that fallback succeeded, the output is valid, and all delegates previously applied to the interpreter have been undone.
-
kTfLiteApplicationError: Same as for kTfLiteDelegateError, except that the problem was not with the delegate itself, but rather was due to an incompatibility between the delegate(s) and the interpreter or model.
-
kTfLiteError: Unexpected/runtime failure. Output is invalid.
Adds a delegate to be applied during TfLiteInterpreter creation.
-
If delegate application fails, interpreter creation will also fail with an associated error logged.
-
If you are NOT using "TensorFlow Lite in Play Services", and NOT building with TFLITE_WITH_STABLE_ABI or TFLITE_USE_OPAQUE_DELEGATE macros enabled, it is possible to pass a TfLiteDelegate* rather than a TfLiteOpaqueDelegate* to this function, since in those cases, TfLiteOpaqueDelegate is just a typedef alias for TfLiteDelegate. This is for compatibility with existing source code and existing delegates. For new delegates, it is recommended to use TfLiteOpaqueDelegate rather than TfLiteDelegate. (See TfLiteOpaqueDelegate in tensorflow/lite/core/c/c_api_types.h.)
Adds an op registration to be applied during TfLiteInterpreter creation.
-
The TfLiteRegistrationExternal object is needed to implement a custom op for the TFLite Interpreter via the C API. Calling this function ensures that any TfLiteInterpreter created with the specified options can execute models that use the custom operator specified in registration. Please refer to https://www.tensorflow.org/lite/guide/ops_custom for custom op support. This is an experimental API and subject to change.
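-
A hedged sketch of registering a custom op; 'MyCustomOpPrepare' and 'MyCustomOpInvoke' are hypothetical callbacks supplied by the implementer, and the adding function is assumed to be TfLiteInterpreterOptionsAddRegistrationExternal:
-
TfLiteRegistrationExternal* reg = TfLiteRegistrationExternalCreate(
    kTfLiteBuiltinCustom, "MyCustomOp", /*version=*/1);
TfLiteRegistrationExternalSetPrepare(reg, MyCustomOpPrepare);
TfLiteRegistrationExternalSetInvoke(reg, MyCustomOpInvoke);
TfLiteInterpreterOptionsAddRegistrationExternal(options, reg);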
Sets the number of CPU threads to use for the interpreter.
-
TfLiteInterpreterOutputTensorIndices
-
TFL_CAPI_EXPORT const int * TfLiteInterpreterOutputTensorIndices(
    const TfLiteInterpreter *interpreter
)
-
Returns a pointer to an array of output tensor indices.
-
The length of the array can be obtained via a call to TfLiteInterpreterGetOutputTensorCount.
-
Typically the output tensors associated with an interpreter would be set during the initialization of the interpreter, through a mechanism like the InterpreterBuilder, and remain unchanged throughout the lifetime of the interpreter. However, there are some circumstances in which the pointer may not remain valid throughout the lifetime of the interpreter, because calls to SetOutputs on the interpreter invalidate the returned pointer.
-
The ownership of the array remains with the TFLite runtime.
TFL_CAPI_EXPORT int TfLiteSchemaVersion(
    void
)
-
The supported TensorFlow Lite model file Schema version.
-
Returns the (major) version number of the Schema used for model files that is supported by the (potentially dynamically loaded) TensorFlow Lite Runtime.
-
Model files using schema versions different to this may not be supported by the current version of the TF Lite Runtime.
Before calling this function, the caller should first invoke TfLiteSignatureRunnerAllocateTensors() and should also set the values for the input tensors. After successfully calling this function, the values for the output tensors will be set.
Resizes the input tensor identified as input_name to be the dimensions specified by input_dims and input_dims_size.
-
Only unknown dimensions can be resized with this function. Unknown dimensions are indicated as -1 in the dims_signature attribute of a TfLiteTensor.
-
Returns status of failure or success. Note that this doesn't actually resize any existing buffers. A call to TfLiteSignatureRunnerAllocateTensors() is required to change the tensor input buffer.
Returns the parameters for asymmetric quantization.
-
The quantization parameters are only valid when the tensor type is kTfLiteUInt8 and the scale != 0. Quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point);
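-
As a worked sketch, dequantizing one element with the accessor assumed to be TfLiteTensorQuantizationParams ('quantized_data' is illustrative):
-
TfLiteQuantizationParams params = TfLiteTensorQuantizationParams(tensor);
uint8_t q = quantized_data[0];  // one quantized value from the tensor
float real_value = params.scale * ((int)q - params.zero_point);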
Returns a pointer to a statically allocated string that is the version number of the (potentially dynamically loaded) TF Lite Runtime library. TensorFlow Lite uses semantic versioning, and the return value should be in semver 2 format http://semver.org, starting with MAJOR.MINOR.PATCH, e.g. "2.12.0" or "2.13.0-rc2".
TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).
A union of pointers that points to memory for a given tensor.
-
Enumerations
-
Anonymous Enum 0
-
Anonymous Enum 0
-
TfLiteAllocationStrategy
-
TfLiteAllocationStrategy
-
-
Memory allocation strategies.
-
TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.
-
Properties
-
- kTfLiteAllocationStrategyArena: Handled by the arena.
-
- kTfLiteAllocationStrategyMMap: Data is mmaped.
-
- kTfLiteAllocationStrategyMalloc: Uses malloc/free.
-
- kTfLiteAllocationStrategyNew: Uses new[]/delete[].
-
- kTfLiteAllocationStrategyNone: No data is allocated.
-
TfLiteAllocationType
-
TfLiteAllocationType
-
-
Memory allocation strategies.
-
kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
-
kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
-
kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
-
kTfLiteDynamic: Allocated during eval, or for string tensors.
-
kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
-
kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
-
kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.
-
TfLiteCustomAllocationFlags
-
TfLiteCustomAllocationFlags
-
-
The flags used in Interpreter::SetCustomAllocationForTensor.
-
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
-
Properties
-
- kTfLiteCustomAllocationFlagsSkipAlignCheck: Skips checking whether allocation.data points to an aligned buffer as expected by the TFLite runtime. NOTE: Setting this flag can cause crashes when calling Invoke(). Use with caution.
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
-
Properties
-
- kTfLiteDelegateFlagsAllowDynamicTensors
-
The flag is set if the delegate can handle dynamic sized tensors.
-
For example, the output shape of a Resize op with non-constant shape can only be inferred when the op is invoked. In this case, the Delegate is responsible for calling SetTensorToDynamic to mark the tensor as a dynamic tensor, and calling ResizeTensor when invoking the op.
-
If the delegate isn't capable of handling dynamic tensors, this flag needs to be set to false.
-
- kTfLiteDelegateFlagsPerOperatorProfiling
-
-
-
This flag can be used by delegates to request per-operator profiling.
-
If a node is a delegate node, this flag will be checked before profiling. If set, then the node will not be profiled. The delegate will then add per operator information using Profiler::EventType::OPERATOR_INVOKE_EVENT and the results will appear in the operator-wise Profiling section and not in the Delegate internal section.
-
- kTfLiteDelegateFlagsRequirePropagatedShapes
-
-
-
This flag can be used by delegates (that allow dynamic tensors) to ensure applicable tensor shapes are automatically propagated in the case of tensor resizing.
-
This means that non-dynamic (allocation_type != kTfLiteDynamic) I/O tensors of a delegate kernel will have correct shapes before its Prepare() method is called. The runtime leverages TFLite builtin ops in the original execution plan to propagate shapes.
-
A few points to note:
-
This requires kTfLiteDelegateFlagsAllowDynamicTensors. If that flag is false, this one is redundant since the delegate kernels are re-initialized every time tensors are resized.
-
Enabling this flag adds some overhead to AllocateTensors(), since extra work is required to prepare the original execution plan.
-
This flag requires that the original execution plan only have ops with valid registrations (and not 'dummy' custom ops like with Flex).
-
-
WARNING: This feature is experimental and subject to change.
-
TfLiteDimensionType
-
TfLiteDimensionType
-
-
Storage format of each dimension in a sparse tensor.
-
TfLiteExternalContextType
-
TfLiteExternalContextType
-
-
The list of external context types known to TF Lite.
-
This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.
This allows an op to signal to the runtime that the same data pointer may be passed as an input and output without impacting the result. This does not mean that the memory can safely be reused; it is up to the runtime to determine this, e.g. whether another op consumes the same input, or whether an input tensor has sufficient memory allocated to store the output data.
-
Setting these flags authorizes the runtime to set the data pointers of an input and output tensor to the same value. In such cases, the memory required by the output must be less than or equal to that required by the shared input, never greater. If kTfLiteInplaceOpDataUnmodified is set, then the runtime can share the same input tensor with multiple operator's outputs, provided that kTfLiteInplaceOpDataUnmodified is set for all of them. Otherwise, if an input tensor is consumed by multiple operators, it may only be shared with the operator which is the last to consume it.
-
Note that this is a bitmask, so the values should be 1, 2, 4, 8, ...etc.
Properties
-
- kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput: Setting this flag means that InputN may be shared with OutputN instead of with the first output. This flag requires one or more of kTfLiteInplaceOpInputNShared to be set.
-
- kTfLiteInplaceOpDataUnmodified: Indicates that an op's first output's data is identical to its first input's data, for example Reshape.
-
- kTfLiteInplaceOpInput0Shared: kTfLiteInplaceOpInputNShared indicates that it is safe for an op to share InputN's data pointer with an output tensor. If kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set, then kTfLiteInplaceOpInputNShared indicates that InputN may be shared with OutputN; otherwise it indicates that InputN may be shared with the first output. This flag indicates that an op's first input may be shared with the first output tensor. kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput has no impact on the behavior allowed by this flag.
-
- kTfLiteInplaceOpInput1Shared: Indicates that an op's second input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set, or with the second output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set.
-
- kTfLiteInplaceOpInput2Shared: Indicates that an op's third input may be shared with the first output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is not set, or with the third output if kTfLiteInplaceInputCanBeSharedWithCorrespondingOutput is set.
-
- kTfLiteInplaceOpMaxValue: Placeholder to ensure that the enum can hold 64-bit values to accommodate future fields.
-
- kTfLiteInplaceOpNone: The default value; indicates that the same data pointer cannot safely be passed as an op's input and output.
-
TfLiteQuantizationType
-
TfLiteQuantizationType
-
-
SupportedQuantizationTypes.
-
Properties
-
- kTfLiteAffineQuantization: Affine quantization (with support for per-channel quantization).
Parameters for asymmetric quantization across a dimension (i.e per output channel quantization).
-
quantized_dimension specifies which dimension the scales and zero_points correspond to. For a particular value in quantized_dimension, quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)
TfLiteAllocationType values have been overloaded to mean more than their original intent. This enum should only be used to document the allocation strategy used by a tensor for its data.
kTfLiteMmapRo: Read-only memory-mapped data, or data externally allocated.
-
kTfLiteArenaRw: Arena allocated with no guarantees about persistence, and available during eval.
-
kTfLiteArenaRwPersistent: Arena allocated but persistent across eval, and only available during eval.
-
kTfLiteDynamic: Allocated during eval, or for string tensors.
-
kTfLitePersistentRo: Allocated and populated during prepare. This is useful for tensors that can be computed during prepare and treated as constant inputs for downstream ops (also in prepare).
-
kTfLiteCustom: Custom memory allocation provided by the user. See TfLiteCustomAllocation below.
-
kTfLiteVariantObject: Allocation is an arbitrary type-erased C++ object. Allocation and deallocation are done through new and delete.
-
TfLiteBufferHandle
-
int TfLiteBufferHandle
-
-
The delegates should use zero or positive integers to represent handles.
TfLiteContext is a struct that is created by the TF Lite runtime and passed to the "methods" (C function pointers) in the TfLiteRegistration struct that are used to define custom ops and custom delegate kernels. It contains information and methods (C function pointers) that can be called by the code implementing a custom op or a custom delegate kernel. These methods provide access to the context in which that custom op or custom delegate kernel occurs, such as access to the input and output tensors for that op, as well as methods for allocating memory buffers and intermediate tensors, etc.
-
See also TfLiteOpaqueContext, which is a more ABI-stable equivalent.
Defines a custom memory allocation not owned by the runtime.
-
data should be aligned to kDefaultTensorAlignment defined in lite/util.h. (Currently 64 bytes) NOTE: See Interpreter::SetCustomAllocationForTensor for details on usage.
WARNING: This is an experimental interface that is subject to change.
-
Currently, TfLiteDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.
-
See also the CreateDelegateParams function in interpreter.cc for details.
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
-
TF Lite knows very little about the actual contexts, but it keeps a list of them, and is able to refresh them if configurations like the number of recommended threads change.
The list of external context types known to TF Lite.
-
This list exists solely to avoid conflicts and to ensure ops can share the external contexts they need. Access to the external contexts is controlled by one of the corresponding support files.
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
-
Note: This struct is not ABI stable.
-
For forward source compatibility TfLiteOpaqueDelegateBuilder objects should be brace-initialized, so that all fields (including any that might be added in the future) get zero-initialized. The purpose of each field is exactly the same as with TfLiteDelegate.
-
WARNING: This is an experimental interface that is subject to change.
WARNING: This is an experimental interface that is subject to change.
-
Currently, TfLiteOpaqueDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.
-
See also the CreateOpaqueDelegateParams function in subgraph.cc for details.
TfLiteRegistrationExternal is an external version of TfLiteRegistration for C API which doesn't use internal types (such as TfLiteContext) but only uses stable API types (such as TfLiteOpaqueContext).
-
The purpose of each field is exactly the same as with TfLiteRegistration.
Old version of TfLiteRegistration to maintain binary backward compatibility.
-
The legacy registration type must be a POD struct type whose field types must be a prefix of the field types in TfLiteRegistration, and offset of the first field in TfLiteRegistration that is not present in the legacy registration type must be greater than or equal to the size of the legacy registration type.
-
WARNING: This structure is deprecated / not an official part of the API. It should be only used for binary backward compatibility.
Creates an opaque delegate and returns its address.
-
The opaque delegate will behave according to the provided opaque_delegate_builder. The lifetime of the objects pointed to by any of the fields within the opaque_delegate_builder must outlive the returned TfLiteOpaqueDelegate and any TfLiteInterpreter, TfLiteInterpreterOptions, tflite::Interpreter, or tflite::InterpreterBuilder that the delegate is added to. The returned address should be passed to TfLiteOpaqueDelegateDelete for deletion. If opaque_delegate_builder is a null pointer, then a null pointer will be returned.
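-
A hedged sketch; 'MyDelegatePrepare' and 'my_delegate_state' are hypothetical, and only a subset of the builder fields is shown:
-
TfLiteOpaqueDelegateBuilder builder = {0};  // brace-init zeroes future fields
builder.data = &my_delegate_state;
builder.Prepare = MyDelegatePrepare;
TfLiteOpaqueDelegate* delegate = TfLiteOpaqueDelegateCreate(&builder);
// ... attach to interpreter options, run inference, then:
TfLiteOpaqueDelegateDelete(delegate);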
A null pointer will be returned if the delegate has been constructed via a TfLiteOpaqueDelegateBuilder but the data field of the TfLiteOpaqueDelegateBuilder is null. The data_ field of the delegate will be returned if the opaque_delegate_builder field is null.
This function does nothing if either src or dst is passed as nullptr, and returns kTfLiteOk. Returns kTfLiteError if src and dst don't have matching data sizes. Note that this function copies contents, so it won't create a new data pointer or change the allocation type. All tensor-related properties, like quantization and sparsity, will be copied from src to dst.
Returns the operation step when the shape of a tensor is computed.
-
Some operations can precompute the shape of their results before the evaluation step. This makes the shape available earlier for subsequent operations.
Change the size of the memory block owned by tensor to num_bytes.
-
Tensors with allocation types other than kTfLiteDynamic will be ignored and a kTfLiteOk will be returned. tensor's internal data buffer will be assigned a pointer which can safely be passed to free or realloc if num_bytes is zero. Tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. In the case of NULL tensor, or an error allocating new memory, returns kTfLiteError.
Change the size of the memory block owned by tensor to num_bytes.
-
Tensors with allocation types other than kTfLiteDynamic will be ignored and a kTfLiteOk will be returned. tensor's internal data buffer will be assigned a pointer which can safely be passed to free or realloc if num_bytes is zero. If preserve_data is true, tensor data will be unchanged in the range from the start of the region up to the minimum of the old and new sizes. In the case of NULL tensor, or an error allocating new memory, returns kTfLiteError.
-
TfLiteTypeGetName
-
const char * TfLiteTypeGetName(
    TfLiteType type
)
-
Return the name of a given type, for error reporting purposes.
Type of delegate creation function used to allocate and construct a delegate.
-
The tflite_settings parameter passed to the delegate creation function should be a pointer to a FlatBuffer table object of type tflite::TFLiteSettings. We use const void * here rather than const tflite::TFLiteSettings* since this is a C API so we don't want to directly reference C++ types such as tflite::TFLiteSettings. But note that this address should point to the 'parsed' FlatBuffer object, not the raw byte buffer. (Note that 'parsing' FlatBuffers is very cheap, it's just an offset load.)
-
If you are using the FlatBuffers C API, then you can alternatively pass in a value of type tflite_TFLiteSettings_table_t, which is a typedef for const struct tflite_TFLiteSettings_table* that is the corresponding type for the 'parsed' FlatBuffer object in the FlatBuffers C API.
-
Ownership of the tflite_settings flatbuffer remains with the caller. The caller of a delegate creation function may end the lifetime of the tflite_settings FlatBuffer immediately after the call to the function. So the delegate creation function should ensure that any settings that the delegate may need to reference later, after the delegate has been constructed, are copied from the FlatBuffer into storage owned by the delegate.
This header file is for the delegate plugin for GPU.
-
Summary
-
For the C++ delegate plugin interface, the GPU delegate plugin is added to the DelegatePluginRegistry by the side effect of a constructor for a static object, so there's no public API needed for this plugin, other than the API of tflite::delegates::DelegatePluginRegistry, which is declared in delegate_registry.h.
-
But to provide a C API to access the GPU delegate plugin, we do expose some functions, which are declared below.
the com.google.android.gms.tflite.java.TfLiteNative.initialize(Context context) or com.google.android.gms.tflite.java.TfLiteNative.initialize(Context context, TfLiteInitializationOptions options) methods defined in the Java API.
Checks whether the TFLite API has been initialized, throwing a Java exception otherwise.
-
Details
-
Parameters
-
- env: The JNIEnv for the current thread (which has to be attached to the JVM).
-
Returns: Whether or not the TFLite API has been initialized. If this method returns false, no other JNI method should be called until the pending exception has been handled (typically by returning to Java).
-
GmsTfLiteErrorCodeVersionTooNew
-
bool GmsTfLiteErrorCodeVersionTooNew(
    int error_code
)
-
Returns true if the error code indicates the TFLite ABI version is too new.
-
In this case, the client should be updated to a newer version.
-
To avoid this error, make sure that your app is built against the latest version of the TFLite in Google Play Services client library code.
-
If TFLite is important for the functionality of the app, then we recommend that the calling code notify the user in this case. Suggested actions for the user could include:
-
GmsTfLiteErrorCodeVersionTooOld
-
bool GmsTfLiteErrorCodeVersionTooOld(
    int error_code
)
-
Returns true if the error code indicates that the TFLite ABI version is too old.
-
In this case, the TFLite in Google Play Services module should be updated to a newer version.
-
If TFLite is important for the functionality of the app, then we recommend that the calling code notify the user in this case. Suggested actions for the user could include:
-
Make sure your device is connected to the internet, and
-
GmsTfLiteInitialize
-
int GmsTfLiteInitialize(
    JNIEnv *env,
    jobject handle
)
-
Initialize TFLite with a handle acquired from Google Play Services API.
-
This method (along with GmsTfLiteInitializeOrThrow()) can be called multiple times with the same handle; attempting to initialize with a different handle (without a call to GmsTfLiteShutdown() in between) will fail.
-
Details
-
Parameters
-
- env: The JNIEnv for the current thread (which has to be attached to the JVM).
-
- handle: An InternalNativeInitializationHandle object acquired through the Google Play Services API.
-
Returns: 0 on success, or a non-zero error code on failure. The error codes are implementation-specific, but error conditions that clients may need to deal with can be tested using the GmsTfLiteErrorCodeVersionTooOld() and GmsTfLiteErrorCodeVersionTooNew() functions. Clients may also wish to log the specific error code for ease of debugging.
Initialize TFLite with a handle acquired from Google Play Services API, throwing a Java exception on failure.
-
This method (along with GmsTfLiteInitialize()) can be called multiple times with the same handle; attempting to initialize with a different handle (without a call to GmsTfLiteShutdown() in between) will fail.
-
Details
-
Parameters
-
- env: The JNIEnv for the current thread (which has to be attached to the JVM).
-
- handle: An InternalNativeInitializationHandle object acquired through the Google Play Services API.
-
Returns: Whether or not initialization was successful. If this method returns false, no other JNI method should be called until the pending exception has been handled (typically by returning to Java).
-
-
-
-
-
-
-
-
GmsTfLiteShutdown
-
void GmsTfLiteShutdown(
- void
-)
-
-
Resets the TFLite API.
-
After this method is called, the TFLite API will be unusable until a subsequent call to GmsTfLiteInitialize() or GmsTfLiteInitializeOrThrow(). This can be used to switch to a different version of the TFLite library.
This header file is for the delegate plugin for XNNPACK.
-
Summary
-
For the C++ delegate plugin interface, the XNNPACK delegate plugin is added to the DelegatePluginRegistry by the side effect of a constructor for a static object, so there's no public API needed for this plugin, other than the API of tflite::delegates::DelegatePluginRegistry, which is declared in delegate_registry.h.
-
But to provide a C API to access the XNNPACK delegate plugin, we do expose some functions, which are declared below.
This file declares types used by the pure C inference API defined in c_api.h, some of which are also used in the C++ and C kernel and interpreter APIs.
Parameters for asymmetric quantization across a dimension (i.e. per-output-channel quantization).
-
Summary
-
quantized_dimension specifies which dimension the scales and zero_points correspond to. For a particular value in quantized_dimension, quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)
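-
As a worked illustration of that formula, here is a minimal sketch of per-channel dequantization; it assumes, purely for illustration, that the quantized dimension is the innermost one, so the channel index cycles fastest:
-
#include <cstdint>
#include <vector>

// real_value = scale * (quantized_value - zero_point), applied per channel.
std::vector<float> DequantizePerChannel(const std::vector<int8_t>& quantized,
                                        const std::vector<float>& scales,
                                        const std::vector<int32_t>& zero_points,
                                        int num_channels) {
  std::vector<float> real(quantized.size());
  for (size_t i = 0; i < quantized.size(); ++i) {
    const int c = static_cast<int>(i % num_channels);  // channel index
    real[i] = scales[c] * (quantized[i] - zero_points[c]);
  }
  return real;
}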
TfLiteContext is a struct that is created by the TF Lite runtime and passed to the "methods" (C function pointers) in the TfLiteRegistration struct that are used to define custom ops and custom delegate kernels. It contains information and methods (C function pointers) that can be called by the code implementing a custom op or a custom delegate kernel. These methods provide access to the context in which that custom op or custom delegate kernel occurs, such as access to the input and output tensors for that op, as well as methods for allocating memory buffers and intermediate tensors, etc.
-
See also TfLiteOpaqueContext, which is a more ABI-stable equivalent.
-
-
-
-
Public attributes
-
-
-
-
- TfLiteStatus(* TfLiteContext::AcquireSubgraphContext)(struct TfLiteContext *context, int subgraph_index, struct TfLiteContext **acquired_context)
-
Retrieves the corresponding TfLiteContext of a subgraph that the given subgraph_index points to and switches to the delegate context for that subgraph.
-
-
-
-
- TfLiteStatus(* TfLiteContext::AddTensors)(struct TfLiteContext *, int tensors_to_add, int *first_new_tensor_index)
-
TfLiteStatus(* TfLiteContext::AcquireSubgraphContext)(struct TfLiteContext *context, int subgraph_index, struct TfLiteContext **acquired_context)
-
-
Retrieves the corresponding TfLiteContext of a subgraph that the given subgraph_index points to and switches to the delegate context for that subgraph.
-
If an invalid subgraph index is given, returns kTfLiteError.
-
NOTE: This function is expected to be paired with ReleaseSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
-
WARNING: This is an experimental interface that is subject to change.
-
-
-
-
AddTensors
-
TfLiteStatus(* TfLiteContext::AddTensors)(struct TfLiteContext *, int tensors_to_add, int *first_new_tensor_index)
Add 'tensors_to_add' tensors, preserving pre-existing Tensor entries. If non-null, the value pointed to by 'first_new_tensor_index' will be set to the index of the first new tensor.
-
GetExecutionPlan
-
TfLiteStatus(* TfLiteContext::GetExecutionPlan)(struct TfLiteContext *context, TfLiteIntArray **execution_plan)
-
The execution plan contains a list of the node indices in execution order.
-
execution_plan->size is the current number of nodes, and execution_plan->data[0] is the first node that needs to be run. TfLiteDelegates can traverse the current execution plan by iterating through each member of this array and using GetNodeAndRegistration() to access details about a node.
Note: the memory pointed to by '*execution_plan' is OWNED by the TfLite runtime. Future calls to GetExecutionPlan invalidate earlier outputs. The following code snippet shows the issue with such an invocation pattern. After calling CheckNode, subsequent access to plan_1st is undefined.
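-
The referenced snippet appears to have been dropped in extraction; the sketch below reconstructs the problematic pattern, with CheckNode as the hypothetical helper named above (it internally calls GetExecutionPlan, invalidating the caller's pointer):
-
#include "tensorflow/lite/c/common.h"

TfLiteStatus CheckNode(TfLiteContext* context, const TfLiteNode* node) {
  TfLiteIntArray* plan_2nd;
  // This call invalidates any pointer loaded by an earlier GetExecutionPlan.
  TF_LITE_ENSURE_STATUS(context->GetExecutionPlan(context, &plan_2nd));
  return kTfLiteOk;
}

TfLiteStatus TraversePlan(TfLiteContext* context) {
  TfLiteIntArray* plan_1st;
  TF_LITE_ENSURE_STATUS(context->GetExecutionPlan(context, &plan_1st));
  for (int i = 0; i < plan_1st->size; ++i) {
    TfLiteNode* node;
    TfLiteRegistration* reg;
    TF_LITE_ENSURE_STATUS(context->GetNodeAndRegistration(
        context, plan_1st->data[i], &node, &reg));
    TF_LITE_ENSURE_STATUS(CheckNode(context, node));
    // BUG (the issue described above): CheckNode called GetExecutionPlan,
    // so any further access to plan_1st, including the loop condition above,
    // is undefined.
  }
  return kTfLiteOk;
}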
Retrieves named metadata buffer from the TFLite model.
-
Returns kTfLiteOk if metadata is successfully obtained from the flatbuffer Model: that is, there exists a metadata entry with given name string. (see TFLite's schema.fbs). The corresponding buffer information is populated in ptr & bytes. The data from ptr is valid for the lifetime of the Interpreter.
-
WARNING: This is an experimental interface that is subject to change.
NOTE: The context owns the memory referenced by partition_params_array. It will be cleared with another call to PreviewDelegatePartitioning, or after TfLiteDelegateParams::Prepare returns.
-
WARNING: This is an experimental interface that is subject to change.
-
-
-
-
ReleaseSubgraphContext
-
TfLiteStatus(* TfLiteContext::ReleaseSubgraphContext)(struct TfLiteContext *context, int subgraph_index)
-
-
Releases the subgraph context by switching back to the TFLite kernel context for the subgraph that the given subgraph_index points to.
-
NOTE: This function is expected to be used after AcquireSubgraphContext() once the delegate preparation is done and/or the delegate context functions are no longer needed.
-
WARNING: This is an experimental interface that is subject to change.
Request that an error be reported with format string msg.
-
-
-
-
RequestScratchBufferInArena
-
TfLiteStatus(* TfLiteContext::RequestScratchBufferInArena)(struct TfLiteContext *ctx, size_t bytes, int *buffer_idx)
-
-
Request a scratch buffer in the arena through static memory planning.
-
This method is only available in the Prepare stage; the buffer is allocated by the interpreter between the Prepare and Eval stages. In the Eval stage, the GetScratchBuffer API can be used to fetch the address.
-
WARNING: This is an experimental interface that is subject to change.
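-
A sketch of the Prepare/Eval pairing just described; OpData and kScratchBytes are hypothetical op-specific state:
-
#include <cstddef>
#include "tensorflow/lite/c/common.h"

struct OpData {
  int scratch_index = -1;  // Persisted between Prepare and Eval.
};
constexpr size_t kScratchBytes = 4096;

TfLiteStatus MyOpPrepare(TfLiteContext* context, TfLiteNode* node) {
  auto* op_data = reinterpret_cast<OpData*>(node->user_data);
  // Only legal in the Prepare stage: ask the planner for arena scratch space.
  return context->RequestScratchBufferInArena(context, kScratchBytes,
                                              &op_data->scratch_index);
}

TfLiteStatus MyOpEval(TfLiteContext* context, TfLiteNode* node) {
  auto* op_data = reinterpret_cast<OpData*>(node->user_data);
  // Only legal in the Eval stage: fetch the address of the planned buffer.
  void* scratch = context->GetScratchBuffer(context, op_data->scratch_index);
  if (scratch == nullptr) return kTfLiteError;
  // ... use 'scratch' as temporary working memory ...
  return kTfLiteOk;
}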
Updates dimensions on the tensor. NOTE: ResizeTensor takes ownership of newSize.
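-
A minimal sketch of that ownership rule: allocate a fresh TfLiteIntArray and hand it over without freeing it (the shape values are hypothetical):
-
#include "tensorflow/lite/c/common.h"

TfLiteStatus ResizeTo3x2(TfLiteContext* context, TfLiteTensor* tensor) {
  TfLiteIntArray* new_size = TfLiteIntArrayCreate(2);  // rank-2 shape
  new_size->data[0] = 3;
  new_size->data[1] = 2;
  // The runtime takes ownership of 'new_size'; do not TfLiteIntArrayFree it.
  return context->ResizeTensor(context, tensor, new_size);
}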
-
-
-
-
ResizeTensorExplicit
-
TfLiteStatus(* TfLiteContext::ResizeTensorExplicit)(struct TfLiteContext *ctx, TfLiteTensor *tensor, int dims, const int *shape)
-
-
Resize the memory pointer of the tensor.
-
This method behaves the same as ResizeTensor, except that it makes a copy of the shape array internally, so the shape array can be deallocated right afterwards.
-
WARNING: This is an experimental interface that is subject to change.
Defines a custom memory allocation not owned by the runtime.
-
Summary
-
data should be aligned to kDefaultTensorAlignment defined in lite/util.h. (Currently 64 bytes) NOTE: See Interpreter::SetCustomAllocationForTensor for details on usage.
WARNING: This is an experimental interface that is subject to change.
-
Summary
-
Currently, TfLiteDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.
-
See also the CreateDelegateParams function in interpreter.cc for details.
Copy the data from delegate buffer handle into raw memory of the given tensor.
-
Note that the delegate is allowed to allocate the raw bytes as long as it follows the rules for kTfLiteDynamic tensors, in which case this cannot be null.
Note: This only frees the handle, but this doesn't release the underlying resource (e.g. textures). The resources are either owned by application layer or the delegate. This can be null if the delegate doesn't use its own buffer.
This prepare is called, giving the delegate a view of the current graph through TfLiteContext*. It typically will look at the nodes and call ReplaceNodeSubsetsWithDelegateKernels() to ask the TensorFlow Lite runtime to create macro-nodes to represent delegated subgraphs of the original graph.
-
-
-
-
data_
-
void * TfLiteDelegate::data_
-
-
Data that delegate needs to identify itself.
-
This data is owned by the delegate. The delegate is owned in the user code, so the delegate is responsible for deallocating this when it is destroyed.
-
-
-
-
flags
-
int64_t TfLiteDelegate::flags
-
-
Bitmask flags. See the comments in TfLiteDelegateFlags.
The opaque delegate builder associated with this object.
-
If set then the TF Lite runtime will give precedence to this field. E.g. instead of invoking Prepare via the function pointer inside the TfLiteDelegate object, the runtime will first check if the corresponding function pointer inside opaque_delegate_builder is set and if so invoke that.
-
If this field is non-null, then the Prepare field (of the TfLiteDelegate) should be null.
An external context is a collection of information unrelated to the TF Lite framework, but useful to a subset of the ops.
-
Summary
-
TF Lite knows very little about the actual contexts, but it keeps a list of them, and is able to refresh them if configurations like the number of recommended threads change.
TfLiteOpaqueDelegateBuilder is used for constructing TfLiteOpaqueDelegate, see TfLiteOpaqueDelegateCreate below.
-
Summary
-
Note: This struct is not ABI stable.
-
For forward source compatibility TfLiteOpaqueDelegateBuilder objects should be brace-initialized, so that all fields (including any that might be added in the future) get zero-initialized. The purpose of each field is exactly the same as with TfLiteDelegate.
-
WARNING: This is an experimental interface that is subject to change.
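-
A minimal sketch of the recommended brace-initialization, assuming a trivial Prepare callback and static user data:
-
#include "tensorflow/lite/c/common.h"

static TfLiteStatus MyDelegatePrepare(TfLiteOpaqueContext* context,
                                      TfLiteOpaqueDelegate* delegate,
                                      void* data) {
  // A real delegate would inspect the graph and claim nodes here.
  return kTfLiteOk;
}

static int my_delegate_state = 0;  // Hypothetical user data.

TfLiteOpaqueDelegate* MakeMyDelegate() {
  // Brace-initialization zero-fills every field, including any fields added
  // in future versions of the struct.
  TfLiteOpaqueDelegateBuilder builder{};
  builder.data = &my_delegate_state;
  builder.Prepare = MyDelegatePrepare;
  // Release later with TfLiteOpaqueDelegateDelete().
  return TfLiteOpaqueDelegateCreate(&builder);
}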
Copies the data from delegate buffer handle into raw memory of the given tensor.
-
Note that the delegate is allowed to allocate the raw bytes as long as it follows the rules for kTfLiteDynamic tensors, in which case this cannot be null.
Note: This only frees the handle, but this doesn't release the underlying resource (e.g. textures). The resources are either owned by application layer or the delegate. This can be null if the delegate doesn't use its own buffer.
This prepare is called, giving the delegate a view of the current graph through TfLiteContext*. It typically will look at the nodes and call ReplaceNodeSubsetsWithDelegateKernels() to ask the TensorFlow Lite runtime to create macro-nodes to represent delegated subgraphs of the original graph.
-
-
-
-
data
-
void * TfLiteOpaqueDelegateBuilder::data
-
-
Data that delegate needs to identify itself.
-
This data is owned by the delegate. The delegate is owned in the user code, so the delegate is responsible for deallocating this when it is destroyed.
-
-
-
-
flags
-
int64_t TfLiteOpaqueDelegateBuilder::flags
-
-
Bitmask flags. See the comments in TfLiteDelegateFlags.
WARNING: This is an experimental interface that is subject to change.
-
Summary
-
Currently, TfLiteOpaqueDelegateParams has to be allocated in a way that it's trivially destructible. It will be stored as the builtin_data field in the TfLiteNode of the delegate node.
-
See also the CreateOpaqueDelegateParams function in subgraph.cc for details.
Will be deprecated in favor of TfLiteAffineQuantization. If per-layer quantization is specified this field will still be populated in addition to TfLiteAffineQuantization. Parameters for asymmetric quantization. Quantized values can be converted back to float using: real_value = scale * (quantized_value - zero_point)
If the async_kernel field is nullptr, it means the operation described by this TfLiteRegistration object does not support asynchronous execution. Otherwise, the function that the field points to should only be called for delegate kernel nodes, i.e. node should be a delegate kernel node created by applying a delegate. If the function returns nullptr, that means that the underlying delegate does not support asynchronous execution for this node.
-
-
-
-
builtin_code
-
int32_t TfLiteRegistration::builtin_code
-
-
Builtin codes.
-
If this kernel refers to a builtin this is the code of the builtin. This is so we can do marshaling to other frameworks like NN API.
-
Note: It is the responsibility of the registration binder to set this properly.
-
-
-
-
custom_name
-
const char * TfLiteRegistration::custom_name
-
-
Custom op name.
-
If the op is a builtin, this will be null.
-
Note: It is the responsibility of the registration binder to set this properly.
-
WARNING: This is an experimental interface that is subject to change.
profiling_string is called during summarization of profiling information in order to group executions together.
-
Providing a value here will cause a given op to appear multiple times in the profiling report. This is particularly useful for custom ops that can perform significantly different calculations depending on their user-data.
Since we can't use internal types (such as TfLiteContext) in the C API while maintaining ABI stability, C API users provide a TfLiteRegistrationExternal to implement custom ops. We keep it inside TfLiteRegistration and use it to route callbacks properly.
-
-
-
-
version
-
int TfLiteRegistration::version
-
-
The version of the op.
-
Note: It is the responsibility of the registration binder to set this properly.
An integer buffer handle that can be handled by delegate.
-
The value is valid only when delegate is not null.
-
WARNING: This is an experimental interface that is subject to change.
-
-
-
-
bytes
-
size_t TfLiteTensor::bytes
-
-
The number of bytes required to store the data of this Tensor.
-
I.e. (bytes of each element) * dims[0] * ... * dims[n-1]. For example, if type is kTfLiteFloat32 and dims = {3, 2} then bytes = sizeof(float) * 3 * 2 = 4 * 3 * 2 = 24.
The appropriate type should be used for a typed tensor based on type.
-
-
-
-
data_is_stale
-
bool TfLiteTensor::data_is_stale
-
-
If the delegate uses its own buffer (e.g. GPU memory), the delegate is responsible for setting data_is_stale to true. delegate->CopyFromBufferHandle can be called to copy the data from the delegate buffer.
-
WARNING: This is an experimental interface that is subject to change.
Encodes shapes with unknown dimensions with -1. This field is only populated when unknown dimensions exist in a read-write tensor (i.e. an input or output tensor). (e.g. dims contains [1, 1, 1, 3] and dims_signature contains [1, -1, -1, 3]). If no unknown dimensions exist then dims_signature is either null, or set to an empty array. Note that this field only exists when TF_LITE_STATIC_MEMORY is not defined.
A functor that reports errors to the supporting system.
Summary
Invoked similarly to printf.
Usage: ErrorReporter foo; foo.Report("test %d", 5); or va_list args; foo.Report("test %d", args); // where args is a va_list
Subclass ErrorReporter to provide another reporting destination. For example, if you have a GUI program, you might redirect to a buffer that drives a GUI error log box.
The additional void* parameter is unused. This method is for compatibility with macros that takes TfLiteContext, like TF_LITE_ENSURE and related macros.
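-
A minimal sketch of such a subclass, collecting reports into a string the way a GUI error log box might; the include path shown is the usual one but is an assumption here:
-
#include <cstdarg>
#include <cstdio>
#include <string>
#include "tensorflow/lite/core/api/error_reporter.h"

class BufferErrorReporter : public tflite::ErrorReporter {
 public:
  int Report(const char* format, va_list args) override {
    char buf[1024];
    int written = vsnprintf(buf, sizeof(buf), format, args);
    log_.append(buf);
    log_.push_back('\n');
    return written;
  }
  const std::string& log() const { return log_; }

 private:
  std::string log_;  // Would back a GUI error log box in a real program.
};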
Builds a model directly from a flatbuffer pointer. Caller retains ownership of the buffer and should keep it alive until the returned object is destroyed.
Ownership of the allocation is passed to the model, but the caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure (e.g., the allocation is invalid).
Caller retains ownership of the buffer and should keep it alive until the returned object is destroyed. Caller also retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure. NOTE: this does NOT validate the buffer, so it should NOT be called on invalid/untrusted input. Use VerifyAndBuildFromBuffer in that case.
Caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure.
Builds a model directly from a flatbuffer pointer. Caller retains ownership of the buffer and should keep it alive until the returned object is destroyed.
-
Caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure.
Verifies whether the content of the allocation is legit, then builds a model based on the provided allocation.
-
The extra_verifier argument is an additional optional verifier for the buffer. By default, we always check with tflite::VerifyModelBuffer. If extra_verifier is supplied, the buffer is checked against the extra_verifier after the check against tflite::VerifyModelBuffer. Ownership of the allocation is passed to the model, but the caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure.
Verifies whether the content of the buffer is legit, then builds a model based on the pre-loaded flatbuffer.
-
The extra_verifier argument is an additional optional verifier for the buffer. By default, we always check with tflite::VerifyModelBuffer. If extra_verifier is supplied, the buffer is checked against the extra_verifier after the check against tflite::VerifyModelBuffer. The caller retains ownership of the buffer and should keep it alive until the returned object is destroyed. Caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure.
Verifies whether the content of the file is legit, then builds a model based on the file.
-
The extra_verifier argument is an additional optional verifier for the file contents. By default, we always check with tflite::VerifyModelBuffer. If extra_verifier is supplied, the file contents are also checked against the extra_verifier after the check against tflite::VerifyModelBuffer. Caller retains ownership of error_reporter and must ensure its lifetime is longer than the FlatBufferModel instance. Returns a nullptr in case of failure.
-
-
-
Public functions
-
-
CheckModelIdentifier
-
bool CheckModelIdentifier() const
-
-
Returns true if the model identifier is correct (otherwise false and reports an error).
Any delegates added with AddDelegate will be applied to the Interpreter generated by operator(), in the order that they were added.
-
(The delegate parameter passed to AddDelegate should be non-null, otherwise an error will be reported, and the call to AddDelegate will have no other effect.) The lifetime of the delegate must be at least as long as the lifetime of any Interpreter generated by this InterpreterBuilder.
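-
A sketch of the ordering guarantee; both delegates are assumed to be created and owned elsewhere, and must outlive any interpreter produced here:
-
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/interpreter_builder.h"
#include "tensorflow/lite/model_builder.h"

std::unique_ptr<tflite::Interpreter> BuildWithDelegates(
    const tflite::FlatBufferModel& model, const tflite::OpResolver& resolver,
    TfLiteDelegate* first, TfLiteDelegate* second) {
  tflite::InterpreterBuilder builder(model, resolver);
  builder.AddDelegate(first);   // Applied first by operator().
  builder.AddDelegate(second);  // Applied second.
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (builder(&interpreter) != kTfLiteOk) return nullptr;
  return interpreter;
}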
Builds an interpreter given only the raw flatbuffer Model object (instead of a FlatBufferModel).
-
Mostly used for testing. If error_reporter is null, then DefaultErrorReporter() is used. The options object is copied during construction, so the caller can release it afterwards.
The capacity headroom of tensors_ vector before calling ops' prepare and invoke function.
-
In these functions, it's guaranteed that allocating up to kTensorsCapacityHeadroom more tensors won't invalidate pointers to existing tensors.
-
-
-
-
kTensorsReservedCapacity
-
constexpr int kTensorsReservedCapacity = 128
-
-
-
Friend classes
-
-
tflite::impl::InterpreterBuilder
-
friend class tflite::impl::InterpreterBuilder
-
-
-
Public functions
-
-
AddProfiler
-
void AddProfiler(
- Profiler *profiler
-)
-
-
WARNING: This is an experimental API and subject to change.
-
Adds the profiler to tracing execution. The caller retains ownership of the profiler and must ensure its validity. A nullptr profiler will be ignored.
WARNING: This is an experimental API and subject to change.
-
Adds the profiler to tracing execution. Transfers ownership of the profiler to the interpreter. A nullptr profiler will be ignored.
-
-
-
-
AllocateTensors
-
TfLiteStatus AllocateTensors()
-
-
Update allocations for all tensors.
-
This will redim dependent tensors using the input tensor dimensionality as given. This is relatively expensive. This must be called after the interpreter has been created and before running inference (and accessing tensor buffers), and must be called again if (and only if) an input tensor is resized. Returns status of success or failure. Will fail if any of the ops in the model (other than those which were rewritten by delegates, if any) are not supported by the Interpreter's OpResolver.
WARNING: This is an experimental API and subject to change.
-
Applies InterpreterOptions, which tune the behavior of the interpreter.
-
-
-
-
Cancel
-
TfLiteStatus Cancel()
-
-
WARNING: This is an experimental API and subject to change.
-
Attempts to cancel any in-flight invocation. This will not affect Invoke() calls that happen after the cancellation. Non-blocking. Thread-safe. Returns kTfLiteError if cancellation is not enabled; otherwise returns kTfLiteOk.
-
-
-
-
EnsureTensorDataIsReadable
-
TfLiteStatus EnsureTensorDataIsReadable(
- int tensor_index
-)
-
-
WARNING: This is an experimental API and subject to change.
-
Ensures the data in tensor.data is readable. If a delegate is used, this may require copying the data from the delegate buffer to raw memory.
-
-
-
-
GetAllowFp16PrecisionForFp32
-
bool GetAllowFp16PrecisionForFp32() const
-
-
WARNING: Experimental interface, subject to change.
WARNING: Experimental interface, subject to change.
-
Returns a pointer to the AsyncSignatureRunner instance to run the part of the graph identified by a SignatureDef. nullptr is returned if the given signature key is not valid. The async delegate should be applied before calling this function.
WARNING: Experimental interface, subject to change.
-
Returns a pointer to the SignatureRunner instance to run the part of the graph identified by a SignatureDef. nullptr is returned if the given signature key is not valid. If you need to specify delegates, you have to do that before calling this function; this function will additionally apply default delegates, so applying delegates after that might lead to undesirable behaviors. Note that the pointed-to instance has the same lifetime as the Interpreter object, and the SignatureRunner class is not thread-safe.
-
-
-
-
GetSubgraphIndexFromSignature
-
int GetSubgraphIndexFromSignature(
- const char *signature_key
-) const
-
-
WARNING: Experimental interface, subject to change.
-
Returns the subgraph index that corresponds to a SignatureDef, defined by 'signature_key'. If an invalid name is passed, -1 will be returned.
Invoke the interpreter (run the whole graph in dependency order).
-
NOTE: It is possible that the interpreter is not in a ready state to evaluate (i.e. if ResizeTensor() has been performed without a subsequent AllocateTensors()). Returns status of success or failure.
Allow a delegate to look at the graph and modify the graph to handle parts of the graph themselves.
-
After this is called, the graph may contain new nodes that replace one or more nodes. 'delegate' must outlive the interpreter. Returns one of the following status codes:
-
kTfLiteOk: Success.
-
kTfLiteDelegateError: Delegation failed due to an error in the delegate, or the delegate parameter was null. The Interpreter has been restored to its pre-delegation state. NOTE: This undoes all delegates previously applied to the Interpreter.
-
kTfLiteApplicationError: Delegation failed to be applied due to incompatibility with the TfLite runtime, e.g., the model graph is already immutable when applying the delegate. However, the interpreter could still be invoked.
-
kTfLiteUnresolvedOps: Delegation failed because the model has an operator that cannot be resolved. This can happen when the op is not registered or built with the TF Lite framework.
-
kTfLiteError: Unexpected/runtime failure.
-
WARNING: This is an experimental API and subject to change.
TfLiteDelegate is a C structure, so it has no virtual destructor. The default deleter of the unique_ptr does not know how to delete C++ objects deriving from TfLiteDelegate.
Retrieve an operator's description of its work, for profiling purposes.
-
-
-
-
ReleaseNonPersistentMemory
-
TfLiteStatus ReleaseNonPersistentMemory()
-
-
WARNING: Experimental interface, subject to change.
-
This releases memory held by non-persistent tensors. It does NOT re-perform memory planning. AllocateTensors needs to be called before the next invocation.
-
-
-
-
ResetVariableTensors
-
TfLiteStatus ResetVariableTensors()
-
-
WARNING: This is an experimental API and subject to change.
-
Resets all variable tensors to the default value. If a variable tensor doesn't have a buffer, reset it to zero. TODO(b/115961645): Implement - If a variable tensor has a buffer, reset it to the value of the buffer.
-
-
-
-
ResizeInputTensor
-
TfLiteStatus ResizeInputTensor(
- int tensor_index,
- const std::vector< int > & dims
-)
-
-
Change the dimensionality of a given tensor.
-
Note, this is only acceptable for tensor indices that are inputs or variables. Returns status of failure or success. Note that this doesn't actually resize any existing buffers. A call to AllocateTensors() is required to change the tensor input buffer.
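-
A sketch of the resize-then-allocate sequence; the shape {1, 224, 224, 3} is hypothetical:
-
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/interpreter.h"

TfLiteStatus ResizeFirstInput(tflite::Interpreter* interpreter) {
  const int input_index = interpreter->inputs()[0];
  TF_LITE_ENSURE_STATUS(
      interpreter->ResizeInputTensor(input_index, {1, 224, 224, 3}));
  // The input buffer is only (re)allocated here, not by ResizeInputTensor.
  return interpreter->AllocateTensors();
}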
-
-
-
-
ResizeInputTensorStrict
-
TfLiteStatus ResizeInputTensorStrict(
- int tensor_index,
- const std::vector< int > & dims
-)
-
-
Change the dimensionality of a given tensor.
-
This is only acceptable for tensor indices that are inputs or variables. Only unknown dimensions can be resized with this function. Unknown dimensions are indicated as -1 in the dims_signature attribute of a TfLiteTensor. Returns status of failure or success. Note that this doesn't actually resize any existing buffers. A call to AllocateTensors() is required to change the tensor input buffer.
WARNING: This is an experimental API and subject to change.
-
Sets whether buffer handle output is allowed.
-
When using hardware delegation, Interpreter will make the data of output tensors available in tensor->data by default. If the application can consume the buffer handle directly (e.g. reading output from OpenGL texture), it can set this flag to false, so Interpreter won't copy the data from buffer handle to CPU memory.
Allow float16 precision for FP32 calculation when possible.
-
Default: not allow.
-
WARNING: This API is deprecated: prefer controlling this via delegate options, e.g. 'tflite::StatefulNnApiDelegate::Options::allow_fp16' or TfLiteGpuDelegateOptionsV2::is_precision_loss_allowed. This method will be removed in a future release.
WARNING: This is an experimental API and subject to change.
-
Sets the delegate buffer handle to a tensor. It can be called in the following cases:
-
Set the buffer handle to a tensor that's not being written by a delegate. For example, feeding an OpenGL texture as the input of the inference graph.
-
Set the buffer handle to a tensor that uses the same delegate. For example, set an OpenGL texture as the output of inference, while the node which produces output is an OpenGL delegate node.
WARNING: This is an experimental API and subject to change.
-
Sets the cancellation function pointer in order to cancel a request in the middle of a call to Invoke(). The interpreter queries this function during inference, between op invocations; when it returns true, the interpreter will abort execution and return kTfLiteError. The data parameter contains any data used by the cancellation function, and if non-null, remains owned by the caller.
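-
A sketch wiring an atomic flag into this callback; all names are hypothetical:
-
#include <atomic>
#include "tensorflow/lite/interpreter.h"

static std::atomic<bool> g_cancelled{false};

static bool CheckCancelled(void* data) {
  return static_cast<std::atomic<bool>*>(data)->load();
}

void EnableCancellation(tflite::Interpreter* interpreter) {
  // Polled between op invocations; 'g_cancelled' stays owned by the caller.
  interpreter->SetCancellationFunction(&g_cancelled, CheckCancelled);
}
// From another thread: g_cancelled = true;  // An in-flight Invoke() then
// aborts and returns kTfLiteError.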
Assigns (or reassigns) a custom memory allocation for the given tensor.
-
flags is a bitmask, see TfLiteCustomAllocationFlags. The runtime does NOT take ownership of the underlying memory.
-
NOTE: User needs to call AllocateTensors() after this. Invalid/insufficient buffers will cause an error during AllocateTensors or Invoke (in case of dynamic shapes in the graph).
-
Parameters should satisfy the following conditions:
-
tensor->allocation_type == kTfLiteArenaRw or kTfLiteArenaRwPersistent In general, this is true for I/O tensors & variable tensors.
-
allocation->data has the appropriate permissions for runtime access (Read-only for inputs, Read-Write for others), and outlives Interpreter.
-
allocation->bytes >= tensor->bytes. This condition is checked again if any tensors are resized.
-
allocation->data should be aligned to kDefaultTensorAlignment defined in lite/util.h. (Currently 64 bytes.) This check is skipped if kTfLiteCustomAllocationFlagsSkipAlignCheck is set through flags.
-
WARNING: This is an experimental API and subject to change.
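-
A sketch satisfying the conditions above, assuming a POSIX platform for posix_memalign; the helper name is hypothetical:
-
#include <cstdlib>
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/interpreter.h"

TfLiteStatus UseCustomInputAllocation(tflite::Interpreter* interpreter) {
  const int tensor_index = interpreter->inputs()[0];
  const TfLiteTensor* t = interpreter->tensor(tensor_index);
  void* data = nullptr;
  // kDefaultTensorAlignment is currently 64 bytes (lite/util.h).
  if (posix_memalign(&data, /*alignment=*/64, t->bytes) != 0) {
    return kTfLiteError;
  }
  // The runtime does NOT take ownership: 'data' must outlive its use by the
  // interpreter and be freed by the caller.
  TfLiteCustomAllocation allocation{data, t->bytes};
  TF_LITE_ENSURE_STATUS(
      interpreter->SetCustomAllocationForTensor(tensor_index, allocation));
  return interpreter->AllocateTensors();  // Required after (re)assignment.
}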
Set the number of threads available to the interpreter.
-
NOTE: num_threads should be >= -1. Setting num_threads to 0 has the effect of disabling multithreading, which is equivalent to setting num_threads to 1. If set to the value -1, the number of threads used will be implementation-defined and platform-dependent.
-
Since the TfLite interpreter may internally apply a TfLite delegate by default (e.g. XNNPACK), the number of threads that are available to the default delegate should be set via the InterpreterBuilder APIs, as follows:
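-
The snippet referenced here appears to have been lost in extraction; the sketch below reconstructs the pattern (the thread count is hypothetical):
-
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/interpreter_builder.h"
#include "tensorflow/lite/model_builder.h"

std::unique_ptr<tflite::Interpreter> BuildWithThreads(
    const tflite::FlatBufferModel& model, const tflite::OpResolver& resolver) {
  tflite::InterpreterBuilder builder(model, resolver);
  // Set threads on the builder so default delegates (e.g. XNNPACK) see it too.
  builder.SetNumThreads(4);
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (builder(&interpreter) != kTfLiteOk) return nullptr;
  return interpreter;
}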
WARNING: This API is deprecated: prefer using InterpreterBuilder::SetNumThreads, as documented above.
-
-
-
-
SetProfiler
-
void SetProfiler(
- Profiler *profiler
-)
-
-
WARNING: This is an experimental API and subject to change.
-
Sets the profiler to tracing execution. The caller retains ownership of the profiler and must ensure its validity. Previously registered profilers will be unregistered. If profiler is nullptr, all previously installed profilers will be removed.
WARNING: This is an experimental API and subject to change.
-
Same as SetProfiler except this interpreter takes ownership of the provided profiler. Previously registered profilers will be unregistered. If profiler is nullptr, all previously installed profilers will be removed.
WARNING: Experimental interface, subject to change.
-
Returns the mapping of inputs to tensor index in the signature specified through 'signature_key'. If an invalid name is passed, an empty list will be returned.
WARNING: Experimental interface, subject to change.
-
Returns the mapping of outputs to tensor index in the signature specified through 'signature_key'. If an invalid name is passed, an empty list will be returned.
-
-
-
-
tensor
-
TfLiteTensor * tensor(
- int tensor_index
-)
-
-
Get a mutable tensor data structure.
-
-
-
-
tensor
-
const TfLiteTensor * tensor(
- int tensor_index
-) const
-
-
Get an immutable tensor data structure.
-
-
-
-
tensors_size
-
size_t tensors_size() const
-
-
Return the number of tensors in the model.
-
-
-
-
typed_input_tensor
-
T * typed_input_tensor(
- int index
-)
-
-
Return a mutable pointer into the data of a given input tensor.
-
The given index must be between 0 and inputs().size().
-
-
-
-
typed_input_tensor
-
const T * typed_input_tensor(
- int index
-) const
-
-
Return an immutable pointer into the data of a given input tensor.
-
The given index must be between 0 and inputs().size().
-
-
-
-
typed_output_tensor
-
T * typed_output_tensor(
- int index
-)
-
-
Return a mutable pointer into the data of a given output tensor.
-
The given index must be between 0 and outputs().size().
-
-
-
-
typed_output_tensor
-
const T * typed_output_tensor(
- int index
-) const
-
-
Return an immutable pointer into the data of a given output tensor.
-
The given index must be between 0 and outputs().size().
-
-
-
-
typed_tensor
-
T * typed_tensor(
- int tensor_index
-)
-
-
Perform a checked cast to the appropriate tensor type (mutable pointer version).
-
-
-
-
typed_tensor
-
const T * typed_tensor(
- int tensor_index
-) const
-
-
Perform a checked cast to the appropriate tensor type (immutable pointer version).
Registers all operator versions supported by another MutableOpResolver.
-
Replaces any previous registrations for the same operator versions, except that registrations made with AddBuiltin or AddCustom always take precedence over registrations made with ChainOpResolver.
-
-
-
AddBuiltin
-
void AddBuiltin(
- tflite::BuiltinOperator op,
- const TfLiteRegistration *registration,
- int version
-)
-
-
Registers the specified version of the specified builtin operator op.
-
Replaces any previous registration for the same operator version.
-
-
-
AddBuiltin
-
void AddBuiltin(
- tflite::BuiltinOperator op,
- const TfLiteRegistration *registration,
- int min_version,
- int max_version
-)
-
-
Registers the specified version range (versions min_version to max_version, inclusive) of the specified builtin operator op.
-
Replaces any previous registration for the same operator version.
-
-
-
AddCustom
-
void AddCustom(
- const char *name,
- const TfLiteRegistration *registration,
- int version
-)
-
-
Registers the specified version of the specified custom operator name.
-
Replaces any previous registration for the same operator version.
-
-
-
AddCustom
-
void AddCustom(
- const char *name,
- const TfLiteRegistration *registration,
- int min_version,
- int max_version
-)
-
-
Registers the specified version range (versions min_version to max_version, inclusive) of the specified custom operator name.
-
Replaces any previous registration for the same operator version.
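-
A sketch of these registration calls; the custom op's TfLiteRegistration is assumed to be defined elsewhere and passed in:
-
#include "tensorflow/lite/kernels/builtin_op_kernels.h"
#include "tensorflow/lite/mutable_op_resolver.h"

void RegisterOps(tflite::MutableOpResolver& resolver,
                 const TfLiteRegistration* my_custom_op) {
  // Versions 1 through 3 (inclusive) of the builtin CONV_2D kernel.
  resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                      tflite::ops::builtin::Register_CONV_2D(),
                      /*min_version=*/1, /*max_version=*/3);
  // Version 1 of a custom op, looked up by its name.
  resolver.AddCustom("MyCustomOp", my_custom_op, /*version=*/1);
}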
-
-
-
FindOp
-
virtual const TfLiteRegistration * FindOp(
- tflite::BuiltinOperator op,
- int version
-) const override
-
-
Finds the op registration for a builtin operator by enum code.
-
-
-
FindOp
-
virtual const TfLiteRegistration * FindOp(
- const char *op,
- int version
-) const override
-
-
Finds the op registration of a custom operator by op name.
-
-
-
GetDelegateCreators
-
virtual OpResolver::TfLiteDelegateCreators GetDelegateCreators() const final
-
-
-
GetOpaqueDelegateCreators
-
virtual OpResolver::TfLiteOpaqueDelegateCreators GetOpaqueDelegateCreators() const final
Registers all operator versions supported by another OpResolver, except any already registered in this MutableOpResolver.
-
other must point to an OpResolver whose lifetime is at least as long as the lifetime of the MutableOpResolver pointed to by this. The OpResolver pointed to by other should not be modified during the lifetime of this MutableOpResolver.
This provides a few C++ helpers that are useful for manipulating C structures in C++.
-
Main abstraction controlling the tflite interpreter. Do NOT include this file directly; instead, include third_party/tensorflow/lite/interpreter.h. See third_party/tensorflow/lite/c/common.h for the API for defining operations (TfLiteRegistration).
-
Provides functionality to construct an interpreter for a model.
-
WARNING: Users of TensorFlow Lite should not include this file directly, but should instead include "third_party/tensorflow/lite/interpreter_builder.h". Only the TensorFlow Lite implementation itself should include this file directly.
-
Deserialization infrastructure for tflite. Provides functionality to go from a serialized tflite model in flatbuffer format to an in-memory representation of the model.
-
WARNING: Users of TensorFlow Lite should not include this file directly, but should instead include "third_party/tensorflow/lite/model_builder.h". Only the TensorFlow Lite implementation itself should include this file directly.
An interpreter for a graph of nodes that input and output from tensors.
-
Each node of the graph processes a set of input tensors and produces a set of output Tensors. All inputs/output tensors are referenced by index.
-
Usage:
-
-
// Create model from file. Note that the model instance must outlive the
// interpreter instance.
auto model = tflite::FlatBufferModel::BuildFromFile(...);
if (model == nullptr) {
  // Return error.
}
// Create an Interpreter with an InterpreterBuilder.
std::unique_ptr<tflite::Interpreter> interpreter;
tflite::ops::builtin::BuiltinOpResolver resolver;
if (InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
  // Return failure.
}
if (interpreter->AllocateTensors() != kTfLiteOk) {
  // Return failure.
}

auto input = interpreter->typed_tensor<float>(0);
for (int i = 0; i < input_size; i++) {
  input[i] = ...;
}
interpreter->Invoke();
-
Note: For nearly all practical use cases, one should not directly construct an Interpreter object, but rather use the InterpreterBuilder.
-
\warning This class is not thread-safe. The client is responsible for ensuring serialized interaction to avoid data races and undefined behavior.
-
-
-
-
InterpreterBuilder
-
impl::InterpreterBuilder InterpreterBuilder
-
-
Build an interpreter capable of interpreting model.
-
-
-
model: A model whose lifetime must be at least as long as any interpreter(s) created by the builder. In principle multiple interpreters can be made from a single model.
-
op_resolver: An instance that implements the OpResolver interface, which maps custom op names and builtin op codes to op registrations. The lifetime of the provided op_resolver object must be at least as long as the InterpreterBuilder; unlike model and error_reporter, the op_resolver does not need to exist for the duration of any created Interpreter objects.
-
error_reporter: a functor that is called to report errors, handling printf-style var-arg semantics. The lifetime of the error_reporter object must be greater than or equal to the Interpreter created by operator().
-
options_experimental: Options that can change behavior of interpreter. WARNING: this parameter is an experimental API and is subject to change.
-
-
-
Returns a kTfLiteOk when successful and sets interpreter to a valid Interpreter. Note: The user must ensure the lifetime of the model (and error reporter, if provided) is at least as long as interpreter's lifetime, and a single model instance may safely be used with multiple interpreters.
An RAII object that represents a read-only tflite model, copied from disk, or mmapped.
-
Summary
-
This uses flatbuffers as the serialization format.
-
NOTE: The current API requires that a FlatBufferModel instance be kept alive by the client as long as it is in use by any dependent Interpreter instances. As the FlatBufferModel instance is effectively immutable after creation, the client may safely use a single model with multiple dependent Interpreter instances, even across multiple threads (though note that each Interpreter instance is not thread-safe).
-
-
using namespace tflite;
StderrReporter error_reporter;
auto model = FlatBufferModel::BuildFromFile("interesting_model.tflite",
                                            &error_reporter);
MyOpResolver resolver;  // You need to subclass OpResolver to provide
                        // implementations.
InterpreterBuilder builder(*model, resolver);
std::unique_ptr<Interpreter> interpreter;
if (builder(&interpreter) == kTfLiteOk) {
  // .. run model inference with interpreter
}
-
-
-
OpResolver must be defined to provide your kernel implementations to the interpreter. This is environment specific and may consist of just the builtin ops, or some custom operators you defined to extend tflite.
Indicates that this object (class, method, etc) should be retained and not renamed when
- generating the SDK, but should be allowed to be stripped or renamed in end developer apps.
Wrapper for a native TensorFlow Lite Delegate.
-
-
If a delegate implementation holds additional resources or memory that should be explicitly
- freed, then best practice is to add a close() method to the implementation and have the
- client call that explicitly when the delegate instance is no longer in use. While this approach
- technically allows sharing of a single delegate instance across multiple interpreter instances,
- the delegate implementation must explicitly support this.
-
Returns a native handle to the TensorFlow Lite delegate implementation.
-
Inherited Methods
-
From interface java.io.Closeable: abstract void close()
-
From interface java.lang.AutoCloseable: abstract void close()
-
Public Methods
-
close
-
public void close()
-
Closes the delegate and releases any resources associated with it.
-
In contrast to the method declared in the base Closeable interface, this method does not throw checked exceptions.
-
getNativeHandle
-
public abstract long getNativeHandle()
-
Returns a native handle to the TensorFlow Lite delegate implementation.
-
-
Note: The Java Delegate maintains ownership of the native delegate instance, and
- must ensure its existence for the duration of usage with any InterpreterApi instance.
-
-
Note: the native delegate instance may not be created until the delegate has been attached
- to an interpreter, so this method should not be called until after an interpreter has been
- constructed with this delegate.
-
-
Returns
-
The native delegate handle. In C/C++, this should be a pointer to
- 'TfLiteOpaqueDelegate'.
-
Note for developers implementing this interface: Currently TF Lite in Google Play Services
- does not support external (developer-provided) delegates. Correspondingly, implementations of
- this method can expect to be called with RuntimeFlavor.APPLICATION.
-
Advanced: Set if buffer handle output is allowed.
-
-
When a Delegate supports hardware acceleration, the interpreter will make the data
- of output tensors available in the CPU-allocated tensor buffers by default. If the client can
- consume the buffer handle directly (e.g. reading output from OpenGL texture), it can set this
- flag to false, avoiding the copy of data to the CPU buffer. The delegate documentation should
- indicate whether this is supported and how it can be used.
-
-
WARNING: This is an experimental interface that is subject to change.
-
Advanced: Set if the interpreter is able to be cancelled.
-
-
Interpreters may have an experimental API setCancelled(boolean).
- If this interpreter is cancellable and such a method is invoked, a cancellation flag will be
- set to true. The interpreter will check the flag between Op invocations, and if it's true, the interpreter will stop execution. The interpreter will remain in a cancelled state
- until explicitly "uncancelled" by setCancelled(false).
-
Sets the number of threads to be used for ops that support multi-threading.
-
-
numThreads should be >= -1. Setting numThreads to 0 has the effect
- of disabling multithreading, which is equivalent to setting numThreads to 1. If
- unspecified, or set to the value -1, the number of threads used will be
- implementation-defined and platform-dependent.
-
Driver class to drive model inference with TensorFlow Lite.
-
-
Note: If you don't need access to any of the "experimental" API features below, prefer to use
- InterpreterApi and InterpreterFactory rather than using Interpreter directly.
-
-
An Interpreter encapsulates a pre-trained TensorFlow Lite model, in which operations
- are executed for model inference.
-
-
For example, if a model takes only one input and returns only one output:
-
-
String[] input = {"foo", "bar"}; // Input tensor shape is [2].
- String[][] output = new String[3][2]; // Output tensor shape is [3, 2].
- try (Interpreter interpreter = new Interpreter(file_of_a_tensorflowlite_model)) {
- interpreter.runForMultipleInputsOutputs(input, output);
- }
-
-
Note that there's a distinction between shape [] and shape[1]. For scalar string tensor
- outputs:
-
-
String[] input = {"foo"}; // Input tensor shape is [1].
- ByteBuffer outputBuffer = ByteBuffer.allocate(OUTPUT_BYTES_SIZE); // Output tensor shape is [].
- try (Interpreter interpreter = new Interpreter(file_of_a_tensorflowlite_model)) {
- interpreter.runForMultipleInputsOutputs(input, outputBuffer);
- }
- byte[] outputBytes = new byte[outputBuffer.remaining()];
- outputBuffer.get(outputBytes);
- // Below, the `charset` can be StandardCharsets.UTF_8.
- String output = new String(outputBytes, charset);
-
-
Orders of inputs and outputs are determined when converting a TensorFlow model to a TensorFlow Lite
- model with Toco, as are the default shapes of the inputs.
-
-
When inputs are provided as (multi-dimensional) arrays, the corresponding input tensor(s) will
- be implicitly resized according to that array's shape. When inputs are provided as Buffer
- types, no implicit resizing is done; the caller must ensure that the Buffer byte size
- either matches that of the corresponding tensor, or that they first resize the tensor via resizeInput(int, int[]). Tensor shape and type information can be obtained via the Tensor class, available via getInputTensor(int) and getOutputTensor(int).
-
-
WARNING: Interpreter instances are not thread-safe. An Interpreter
- owns resources that must be explicitly freed by invoking close().
-
The TFLite library is built against NDK API 19. It may work for Android API levels below 19,
- but is not guaranteed.
-
Initializes an Interpreter with a ByteBuffer of a model file.
-
-
The ByteBuffer should not be modified after the construction of a Interpreter. The
- ByteBuffer can be either a MappedByteBuffer that memory-maps a model file, or a
- direct ByteBuffer of nativeOrder() that contains the bytes content of a model.
Initializes an Interpreter with a ByteBuffer of a model file and a set of
- custom Interpreter.Options.
-
-
The ByteBuffer should not be modified after the construction of an Interpreter. The ByteBuffer can be either a MappedByteBuffer that memory-maps
- a model file, or a direct ByteBuffer of nativeOrder() that contains the bytes content
- of a model.
Explicitly updates allocations for all tensors, if necessary.
-
-
This will propagate shapes and memory allocations for dependent tensors using the input
- tensor shape(s) as given.
-
-
Note: This call is *purely optional*. Tensor allocation will occur automatically during
- execution if any input tensors have been resized. This call is most useful in determining the
- shapes for any output tensors before executing the graph, e.g.,
-
-
Gets the Tensor associated with the provided output index.
-
-
Note: Output tensor details (e.g., shape) may not be fully populated until after inference
- is executed. If you need updated details *before* running inference (e.g., after resizing an
- input tensor, which may invalidate output tensor shapes), use allocateTensors() to
- explicitly trigger allocation and shape propagation. Note that, for graphs with output shapes
- that are dependent on input *values*, the output shape may not be fully determined until
- running inference.
-
Parameters
-
outputIndex
-
getOutputTensorCount
-
public int getOutputTensorCount()
-
Gets the Tensor associated with the provided output name in specific signature method.
-
-
Note: Output tensor details (e.g., shape) may not be fully populated until after inference
- is executed. If you need updated details *before* running inference (e.g., after resizing an
- input tensor, which may invalidate output tensor shapes), use allocateTensors() to
- explicitly trigger allocation and shape propagation. Note that, for graphs with output shapes
- that are dependent on input *values*, the output shape may not be fully determined until
- running inference.
-
-
WARNING: This is an experimental API and subject to change.
-
-
Parameters
-
outputName: Output name in the signature.
-
signatureKey: Signature key identifying the SignatureDef; can be null if the model has one signature.
Resizes idx-th input of the native model to the given dims.
-
-
When `strict` is true, only unknown dimensions can be resized. Unknown dimensions are
- indicated as `-1` in the array returned by `Tensor.shapeSignature()`.
Runs model inference if the model takes only one input, and provides only one output.
-
-
Warning: The API is more efficient if a Buffer (preferably direct, but not required)
- is used as the input/output data type. Please consider using Buffer to feed and fetch
- primitive data for better performance. The following concrete Buffer types are
- supported:
-
-
-
ByteBuffer - compatible with any underlying primitive Tensor type.
-
FloatBuffer - compatible with float Tensors.
-
IntBuffer - compatible with int32 Tensors.
-
LongBuffer - compatible with int64 Tensors.
-
-
- Note that boolean types are only supported as arrays, not Buffers, or as scalar inputs.
-
-
Parameters
-
input: an array or multidimensional array, or a Buffer of primitive types including int, float, long, and byte. Buffer is the preferred way to pass large input data for primitive types, whereas string types require using the (multi-dimensional) array input path. When a Buffer is used, its content should remain unchanged until model inference is done, and the caller must ensure that the Buffer is at the appropriate read position. A null value is allowed only if the caller is using a Delegate that allows buffer handle interop, and such a buffer has been bound to the input Tensor.
-
output: a multidimensional array of output data, or a Buffer of primitive types including int, float, long, and byte. When a Buffer is used, the caller must ensure that it is set to the appropriate write position. A null value is allowed, and is useful for certain cases, e.g., if the caller is using a Delegate that allows buffer handle interop, and such a buffer has been bound to the output Tensor (see also Interpreter.Options#setAllowBufferHandleOutput(boolean)), or if the graph has dynamically shaped outputs and the caller must query the output Tensor shape after inference has been invoked, fetching the data directly from the output tensor (via Tensor.asReadOnlyBuffer()).
Runs model inference if the model takes multiple inputs, or returns multiple outputs.
-
-
Warning: The API is more efficient if Buffers (preferably direct, but not required)
- are used as the input/output data types. Please consider using Buffer to feed and fetch
- primitive data for better performance. The following concrete Buffer types are
- supported:
-
-
-
ByteBuffer - compatible with any underlying primitive Tensor type.
-
FloatBuffer - compatible with float Tensors.
-
IntBuffer - compatible with int32 Tensors.
-
LongBuffer - compatible with int64 Tensors.
-
-
- Note that boolean types are only supported as arrays, not Buffers, or as scalar inputs.
-
-
Note: null values for individual elements of inputs and outputs are
- allowed only if the caller is using a Delegate that allows buffer handle interop, and
- such a buffer has been bound to the corresponding input or output Tensor(s).
-
-
Parameters
-
inputs: an array of input data. The inputs should be in the same order as inputs of the model. Each input can be an array or multidimensional array, or a Buffer of primitive types including int, float, long, and byte. Buffer is the preferred way to pass large input data, whereas string types require using the (multi-dimensional) array input path. When a Buffer is used, its content should remain unchanged until model inference is done, and the caller must ensure that the Buffer is at the appropriate read position.
-
outputs: a map mapping output indices to multidimensional arrays of output data or Buffers of primitive types including int, float, long, and byte. It only needs to keep entries for the outputs to be used. When a Buffer is used, the caller must ensure that it is set to the appropriate write position. The map may be empty for cases where either buffer handles are used for output tensor data, or cases where the outputs are dynamically shaped and the caller must query the output Tensor shape after inference has been invoked, fetching the data directly from the output tensor (via Tensor.asReadOnlyBuffer()).
Same as runSignature(Map, Map, String) but doesn't require passing a signatureKey,
- assuming the model has one SignatureDef. If the model has more than one SignatureDef it will
- throw an exception.
-
-
WARNING: This is an experimental API and subject to change.
-
Runs model inference based on SignatureDef provided through signatureKey.
-
-
See run(Object, Object) for more details on the allowed input and output
- data types.
-
-
WARNING: This is an experimental API and subject to change.
-
-
Parameters
-
inputs: A map from input name in the SignatureDef to an input object.
-
outputs: A map from output name in SignatureDef to output data. This may be empty if the caller wishes to query the Tensor data directly after inference (e.g., if the output shape is dynamic, or output buffer handles are used).
Advanced: Interrupts inference in the middle of a call to run(Object, Object).
-
-
A cancellation flag will be set to true when this function gets called. The interpreter will
- check the flag between Op invocations, and if it's true, the interpreter will stop
- execution. The interpreter will remain in a cancelled state until explicitly "uncancelled" by
- setCancelled(false).
-
-
WARNING: This is an experimental API and subject to change.
-
-
Parameters
-
cancelled: true to cancel inference in a best-effort way; false to resume.
public static final enum InterpreterApi.Options.TfLiteRuntime
-
Enum to represent where to get the TensorFlow Lite runtime implementation from.
-
The difference between this class and the RuntimeFlavor class: this class specifies a
- preference for which runtime to use, whereas RuntimeFlavor specifies which exact
- runtime is being used.
-
FROM_APPLICATION_ONLY
-
public static final InterpreterApi.Options.TfLiteRuntime FROM_APPLICATION_ONLY
-
Use a TF Lite runtime implementation that is linked into the application. If there is no
- suitable TF Lite runtime implementation linked into the application, then attempting to
- create an InterpreterApi instance with this TfLiteRuntime setting will throw an
- IllegalStateException exception (even if the OS or system services could provide a TF Lite
- runtime implementation).
-
-
This is the default setting. This setting is also appropriate for apps that must run on
- systems that don't provide a TF Lite runtime implementation.
-
-
FROM_SYSTEM_ONLY
-
public static final InterpreterApi.Options.TfLiteRuntime FROM_SYSTEM_ONLY
-
Use a TF Lite runtime implementation provided by the OS or system services. This will be
- obtained from a system library / shared object / service, such as Google Play Services. It
- may be newer than the version linked into the application (if any). If there is no suitable
- TF Lite runtime implementation provided by the system, then attempting to create an
- InterpreterApi instance with this TfLiteRuntime setting will throw an IllegalStateException
- exception (even if there is a TF Lite runtime implementation linked into the application).
-
-
This setting is appropriate for code that will use a system-provided TF Lite runtime,
- which can reduce app binary size and can be updated more frequently.
-
-
PREFER_SYSTEM_OVER_APPLICATION
-
public static final InterpreterApi.Options.TfLiteRuntime PREFER_SYSTEM_OVER_APPLICATION
-
Use a system-provided TF Lite runtime implementation, if any, otherwise use the TF Lite
- runtime implementation linked into the application, if any. If no suitable TF Lite runtime
- can be found in any location, then attempting to create an InterpreterApi instance with
- this TFLiteRuntime setting will throw an IllegalStateException. If there is both a suitable
- TF Lite runtime linked into the application and also a suitable TF Lite runtime provided by
- the system, the one provided by the system will be used.
-
-
This setting is suitable for use in code that doesn't care where the TF Lite runtime is
- coming from (e.g. middleware layers).
-
Returns the list of delegates intended to be applied during interpreter creation that have
- been registered via addDelegate.
-
getNumThreads
-
public int getNumThreads()
-
Returns the number of threads to be used for ops that support multi-threading.
-
-
numThreads should be >= -1. Values of 0 (or 1) disable multithreading.
- Default value is -1: the number of threads used will be implementation-defined and
- platform-dependent.
-
Advanced: Returns whether the interpreter is able to be cancelled.
-
-
Interpreters may have an experimental API setCancelled(boolean).
- If this interpreter is cancellable and such a method is invoked, a cancellation flag will be
- set to true. The interpreter will check the flag between Op invocations, and if it's true, the interpreter will stop execution. The interpreter will remain in a cancelled state
- until explicitly "uncancelled" by setCancelled(false).
-
Advanced: Set if the interpreter is able to be cancelled.
-
-
Interpreters may have an experimental API setCancelled(boolean).
- If this interpreter is cancellable and such a method is invoked, a cancellation flag will be
- set to true. The interpreter will check the flag between Op invocations, and if it's true, the interpreter will stop execution. The interpreter will remain in a cancelled state
- until explicitly "uncancelled" by setCancelled(false).
-
Sets the number of threads to be used for ops that support multi-threading.
-
-
numThreads should be >= -1. Setting numThreads to 0 has the effect
- of disabling multithreading, which is equivalent to setting numThreads to 1. If
- unspecified, or set to the value -1, the number of threads used will be
- implementation-defined and platform-dependent.
-
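A short sketch of this option (the thread count is arbitrary):
-
InterpreterApi.Options options = new InterpreterApi.Options().setNumThreads(4);
- // Ops that support multi-threading may now use up to 4 threads.
-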
String[] input = {"foo", "bar"}; // Input tensor shape is [2].
- String[][] output = new String[3][2]; // Output tensor shape is [3, 2].
- try (InterpreterApi interpreter =
-     InterpreterApi.create(file_of_a_tensorflowlite_model, new InterpreterApi.Options())) {
- interpreter.runForMultipleInputsOutputs(input, output);
- }
-
-
Note that there's a distinction between shape [] and shape [1]. For scalar string tensor
- outputs:
-
-
String[] input = {"foo"}; // Input tensor shape is [1].
- ByteBuffer outputBuffer = ByteBuffer.allocate(OUTPUT_BYTES_SIZE); // Output tensor shape is [].
- try (Interpreter interpreter = new Interpreter(file_of_a_tensorflowlite_model)) {
- interpreter.runForMultipleInputsOutputs(input, outputBuffer);
- }
- byte[] outputBytes = new byte[outputBuffer.remaining()];
- outputBuffer.get(outputBytes);
- // Below, the `charset` can be StandardCharsets.UTF_8.
- String output = new String(outputBytes, charset);
-
-
Orders of inputs and outputs are determined when converting TensorFlow model to TensorFlowLite
- model with Toco, as are the default shapes of the inputs.
-
-
When inputs are provided as (multi-dimensional) arrays, the corresponding input tensor(s) will
- be implicitly resized according to that array's shape. When inputs are provided as Buffer types, no implicit resizing is done; the caller must ensure that the Buffer byte size either matches that of the corresponding tensor, or that they first
- resize the tensor via resizeInput(int, int[]). Tensor shape and type information can be
- obtained via the Tensor class, available via getInputTensor(int) and getOutputTensor(int).
-
-
WARNING: InterpreterApi instances are not thread-safe.
-
-
WARNING: An InterpreterApi instance owns resources that must be
- explicitly freed by invoking close().
-
The TFLite library is built against NDK API 19. It may work for Android API levels below 19,
- but is not guaranteed.
-
Explicitly updates allocations for all tensors, if necessary.
-
-
This will propagate shapes and memory allocations for dependent tensors using the input
- tensor shape(s) as given.
-
-
Note: This call is *purely optional*. Tensor allocation will occur automatically during
- execution if any input tensors have been resized. This call is most useful in determining the
- shapes for any output tensors before executing the graph, e.g.,
-
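A minimal sketch (the input index and shapes are hypothetical):
-
interpreter.resizeInput(0, new int[] {1, 224, 224, 3});
- interpreter.allocateTensors();
- int[] outputShape = interpreter.getOutputTensor(0).shape();  // Already reflects the resize.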
-
Constructs an InterpreterApi instance, using the specified model and options. The model
- will be read from a ByteBuffer.
-
-
Parameters
-
-
-
byteBuffer
-
A pre-trained TF Lite model, in binary serialized form. The ByteBuffer should
- not be modified after the construction of an InterpreterApi instance. The ByteBuffer can be either a MappedByteBuffer that memory-maps a model file, or a
- direct ByteBuffer of nativeOrder() that contains the byte content of a model.
-
-
-
options
-
A set of options for customizing interpreter behavior.
Gets the Tensor associated with the provided output index.
-
-
Note: Output tensor details (e.g., shape) may not be fully populated until after inference
- is executed. If you need updated details *before* running inference (e.g., after resizing an
- input tensor, which may invalidate output tensor shapes), use allocateTensors() to
- explicitly trigger allocation and shape propagation. Note that, for graphs with output shapes
- that are dependent on input *values*, the output shape may not be fully determined until
- running inference.
Resizes idx-th input of the native model to the given dims.
-
-
When `strict` is true, only unknown dimensions can be resized. Unknown dimensions are
- indicated as `-1` in the array returned by `Tensor.shapeSignature()`.
-
Throws IllegalArgumentException if idx is negative or is not smaller than the number
- of model inputs, or if an error occurs when resizing the idx-th input. The error also
- occurs when attempting to resize a tensor with fixed dimensions while `strict` is true.
-
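For example (shapes hypothetical), assuming input 0 has shapeSignature [-1, 224, 224, 3]:
-
interpreter.resizeInput(0, new int[] {4, 224, 224, 3}, true);  // Only the unknown batch dimension changes.
-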
Runs model inference if the model takes only one input, and provides only one output.
-
-
Warning: The API is more efficient if a Buffer (preferably direct, but not required)
- is used as the input/output data type. Please consider using Buffer to feed and fetch
- primitive data for better performance. The following concrete Buffer types are
- supported:
-
-
-
ByteBuffer - compatible with any underlying primitive Tensor type.
-
FloatBuffer - compatible with float Tensors.
-
IntBuffer - compatible with int32 Tensors.
-
LongBuffer - compatible with int64 Tensors.
-
-
- Note that boolean types are only supported as arrays, not Buffers, or as scalar inputs.
-
-
Parameters
-
-
-
input
-
an array or multidimensional array, or a Buffer of primitive types
- including int, float, long, and byte. Buffer is the preferred way to pass large
- input data for primitive types, whereas string types require using the (multi-dimensional)
- array input path. When a Buffer is used, its content should remain unchanged until
- model inference is done, and the caller must ensure that the Buffer is at the
- appropriate read position. A null value is allowed only if the caller is using a
- Delegate that allows buffer handle interop, and such a buffer has been bound to the
- input Tensor.
-
-
-
output
-
a multidimensional array of output data, or a Buffer of primitive types
- including int, float, long, and byte. When a Buffer is used, the caller must ensure
- that it is set to the appropriate write position. A null value is allowed, and is useful for
- certain cases, e.g., if the caller is using a Delegate that allows buffer handle
- interop, and such a buffer has been bound to the output Tensor (see also Interpreter.Options#setAllowBufferHandleOutput(boolean)),
- or if the graph has dynamically shaped outputs and the caller must query the output Tensor shape after inference has been invoked, fetching the data directly from the output
- tensor (via Tensor.asReadOnlyBuffer()).
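-
A minimal sketch (tensor shapes are hypothetical) that feeds a direct FloatBuffer:
-
FloatBuffer input = ByteBuffer.allocateDirect(4 * 224 * 224 * 3)
-     .order(ByteOrder.nativeOrder()).asFloatBuffer();  // For a [1, 224, 224, 3] float input.
- float[][] output = new float[1][10];                  // For a [1, 10] float output.
- interpreter.run(input, output);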
Runs model inference if the model takes multiple inputs, or returns multiple outputs.
-
-
Warning: The API is more efficient if Buffers (preferably direct, but not required)
- are used as the input/output data types. Please consider using Buffer to feed and fetch
- primitive data for better performance. The following concrete Buffer types are
- supported:
-
-
-
ByteBuffer - compatible with any underlying primitive Tensor type.
-
FloatBuffer - compatible with float Tensors.
-
IntBuffer - compatible with int32 Tensors.
-
LongBuffer - compatible with int64 Tensors.
-
-
- Note that boolean types are only supported as arrays, not Buffers, or as scalar inputs.
-
-
Note: null values for individual elements of inputs and outputs are
- allowed only if the caller is using a Delegate that allows buffer handle interop, and
- such a buffer has been bound to the corresponding input or output Tensor(s).
-
-
Parameters
-
-
-
inputs
-
an array of input data. The inputs should be in the same order as inputs of the
- model. Each input can be an array or multidimensional array, or a Buffer of
- primitive types including int, float, long, and byte. Buffer is the preferred way
- to pass large input data, whereas string types require using the (multi-dimensional) array
- input path. When Buffer is used, its content should remain unchanged until model
- inference is done, and the caller must ensure that the Buffer is at the appropriate
- read position.
-
-
-
outputs
-
a map mapping output indices to multidimensional arrays of output data or Buffers of primitive types including int, float, long, and byte. It only needs to keep
- entries for the outputs to be used. When a Buffer is used, the caller must ensure
- that it is set to the appropriate write position. The map may be empty for cases where either
- buffer handles are used for output tensor data, or cases where the outputs are dynamically
- shaped and the caller must query the output Tensor shape after inference has been
- invoked, fetching the data directly from the output tensor (via Tensor.asReadOnlyBuffer()).
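-
A minimal sketch (shapes, indices, and the input objects are hypothetical):
-
Object[] inputs = {input0, input1};              // In model input order.
- Map<Integer, Object> outputs = new HashMap<>();
- outputs.put(0, new float[1][10]);                // Only the outputs of interest need entries.
- interpreter.runForMultipleInputsOutputs(inputs, outputs);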
Constructs an InterpreterApi instance, using the specified model and options. The model
- will be read from a ByteBuffer.
-
-
Parameters
-
-
-
byteBuffer
-
A pre-trained TF Lite model, in binary serialized form. The ByteBuffer should
- not be modified after the construction of an InterpreterApi instance. The ByteBuffer can be either a MappedByteBuffer that memory-maps a model file, or a
- direct ByteBuffer of nativeOrder() that contains the byte content of a model.
-
-
-
options
-
A set of options for customizing interpreter behavior.
Represents a TFLite runtime. In contrast to InterpreterApi.Options.TfLiteRuntime, this enum represents the
- actual runtime that is being used, whereas the latter represents a preference for which runtime
- should be used.
-
A typed multi-dimensional array used in Tensorflow Lite.
-
-
The native handle of a Tensor is managed by NativeInterpreterWrapper, and does
- not need to be closed by the client. However, once the NativeInterpreterWrapper has
- been closed, the tensor handle will be invalidated.
-
Returns a read-only ByteBuffer view of the tensor data.
-
-
In general, this method is most useful for obtaining a read-only view of output tensor data,
- *after* inference has been executed (e.g., via InterpreterApi.run(Object, Object)). In
- particular, some graphs have dynamically shaped outputs, which can make feeding a predefined
- output buffer to the interpreter awkward. Example usage:
-
-
interpreter.run(input, null);
- ByteBuffer outputBuffer = interpreter.getOutputTensor(0).asReadOnlyBuffer();
- // Copy or read from outputBuffer.
-
WARNING: If the tensor has not yet been allocated, e.g., before inference has been executed,
- the result is undefined. Note that the underlying tensor pointer may also change when the
- tensor is invalidated in any way (e.g., if inference is executed, or the graph is resized), so
- it is *not* safe to hold a reference to the returned buffer beyond immediate use directly
- following inference. Example *bad* usage:
-
-
ByteBuffer outputBuffer = interpreter.getOutputTensor(0).asReadOnlyBuffer();
- interpreter.run(input, null);
- // Copy or read from outputBuffer (which may now be invalid).
Returns the original shape of the Tensor,
- i.e., the sizes of each dimension - before any resizing was performed. Unknown dimensions are
- designated with a value of -1.
-
-
Returns
-
an array where the i-th element is the size of the i-th dimension of the tensor.
-
public interface ValidatedAccelerationConfig
-
Interface specifying validated acceleration configuration. Developers should not implement this
- interface directly as it is only supported through the Acceleration service SDK.
-
The GPU delegate is not supported on all Android devices, due to differences in available
- OpenGL versions, driver features, and device resources. This class provides information on
- whether the GPU delegate is suitable for the current device.
-
-
This API is experimental and subject to change.
-
-
WARNING: the compatibilityList is constructed from testing done on a limited set of
- models. You should plan to verify that your own model(s) work.
-
-
Example usage:
-
-
Interpreter.Options options = new Interpreter.Options();
- try (CompatibilityList compatibilityList = new CompatibilityList()) {
- if (compatibilityList.isDelegateSupportedOnThisDevice()) {
- GpuDelegate.Options delegateOptions = compatibilityList.getBestOptionsForThisDevice();
- gpuDelegate = new GpuDelegate(delegateOptions);
- options.addDelegate(gpuDelegate);
- }
- }
- Interpreter interpreter = new Interpreter(modelBuffer, options);
-
Note: When calling Interpreter.Options.addDelegate() and Interpreter.run(),
- the caller must have an EGLContext in the current thread and Interpreter.run() must be called from the same EGLContext. If an EGLContext does
- not exist, the delegate will internally create one, but then the developer must ensure that
- Interpreter.run() is always called from the same thread in which Interpreter.Options.addDelegate() was called.
-
The user is expected to call this method explicitly.
-
-
public long getNativeHandle()
-
Returns a native handle to the TensorFlow Lite delegate implementation.
-
-
Note: The Java Delegate maintains ownership of the native delegate instance, and
- must ensure its existence for the duration of usage with any InterpreterApi instance.
-
-
Note: the native delegate instance may not be created until the delegate has been attached
- to an interpreter, so this method should not be called until after an interpreter has been
- constructed with this delegate.
-
-
Returns
-
The native delegate handle. In C/C++, this should be a pointer to
- 'TfLiteOpaqueDelegate'.
-
When `true` (default), the GPU may quantize tensors, downcast
- values, and process in FP16. When `false`, computations are carried out in 32-bit floating
- point.
-
Enables serialization on the delegate. Note that non-null serializationDir and modelToken are required for serialization.
-
-
WARNING: This is an experimental API and subject to change.
-
-
Parameters
-
-
-
serializationDir
-
The directory to use for storing data. The caller is responsible for
- ensuring the model is not stored in a public directory. It's recommended to use
- Context.getCodeCacheDir() to provide a private location for the application on Android.
-
-
-
modelToken
-
The token to be used to identify the model. The caller is responsible for ensuring
- that the token is unique to the model graph and data.
-
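A sketch of wiring these options, assuming the GPU delegate options class exposes
- setSerializationParams(serializationDir, modelToken) as described above (the token value
- is hypothetical):
-
GpuDelegate.Options delegateOptions = new GpuDelegate.Options();
- delegateOptions.setSerializationParams(
-     context.getCodeCacheDir().getAbsolutePath(), "my_model_token_v1");
-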
Note for developers implementing this interface: Currently TF Lite in Google Play Services
- does not support external (developer-provided) delegates. Correspondingly, implementations of
- this method can expect to be called with RuntimeFlavor.APPLICATION.
-
public static abstract class TensorAudio.TensorAudioFormat
-
Wraps a few constants describing the format of the incoming audio samples, namely number of
- channels and the sample rate. By default, channels is set to 1.
-
Defines a ring buffer and some utility functions to prepare the input audio samples.
-
-
It maintains a Ring Buffer to hold
- input audio data. Clients could feed input audio data via `load` methods and access the
- aggregated audio samples via the `getTensorBuffer` method.
-
-
Returns the number of captured audio values, whose size is channelCount * sampleCount. If
- there was no new data in the AudioRecord or an error occurred, this method will return 0.
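-
A minimal sketch of the ring-buffer flow (the format values are hypothetical):
-
TensorAudio tensorAudio = TensorAudio.create(
-     TensorAudio.TensorAudioFormat.builder().setSampleRate(16000).build(), 15600);
- int loaded = tensorAudio.load(audioRecord);  // Returns 0 if no new data was available.
- TensorBuffer samples = tensorAudio.getTensorBuffer();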
Loads labels from the label file into a list of strings.
-
-
A legal label file is a plain text file whose contents are split into lines, and each line
- is an individual value. The file should be in the assets of the context.
-
-
Parameters
-
-
-
context
-
The context holds assets.
-
-
-
filePath
-
The path of the label file, relative to the assets directory.
Loads labels from the label file into a list of strings.
-
-
A legal label file is a plain text file whose contents are split into lines, and each line
- is an individual value. Empty lines will be ignored. The file should be in the assets of
- the context.
-
-
Parameters
-
-
-
context
-
The context holds assets.
-
-
-
filePath
-
The path of the label file, relative to the assets directory.
-
-
-
cs
-
Charset to use when decoding content of label file.
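-
A minimal sketch (the asset name is hypothetical):
-
List<String> labels = FileUtil.loadLabels(context, "labels.txt");
- List<String> labelsUtf16 = FileUtil.loadLabels(context, "labels.txt", StandardCharsets.UTF_16);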
Loads a vocabulary file (a single-column text file) into a list of strings.
-
-
A vocabulary file is a single-column plain text file whose contents are split into lines,
- and each line is an individual value. The file should be in the assets of the context.
-
-
Parameters
-
-
-
context
-
The context holds assets.
-
-
-
filePath
-
The path of the vocabulary file, relative to the assets directory.
Loads vocabulary from an input stream of an opened vocabulary file (which is a single-column
- text file).
-
-
A vocabulary file is a single-column plain text file whose contents are split into lines,
- and each line is an individual value. The file should be in the assets of the context.
TensorProcessor is a helper class for preprocessing and postprocessing tensors. It could
- transform a TensorBuffer to another by executing a chain of TensorOperator.
-
-
Dequantizes a TensorBuffer with given zeroPoint and scale.
-
-
Note: The data type of output tensor is always FLOAT32 except when the DequantizeOp is
- created effectively as an identity Op such as setting zeroPoint to 0 and scale to
- 1 (in this case, the output tensor is the same instance as input).
-
-
If both zeroPoint and scale are 0, the DequantizeOp will be bypassed,
- which is equivalent to setting zeroPoint to 0 and scale to 1. This can be useful
- when passing in the quantization parameters that are extracted directly from the TFLite model
- flatbuffer. If the tensor is not quantized, both zeroPoint and scale will be read
- as 0.
-
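A short sketch (the quantization parameters are hypothetical):
-
TensorProcessor processor = new TensorProcessor.Builder()
-     .add(new DequantizeOp(128f /* zeroPoint */, 1 / 128f /* scale */))
-     .build();
- TensorBuffer dequantized = processor.process(quantizedOutput);
-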
Initializes a NormalizeOp. When being called, it creates a new TensorBuffer, which
- satisfies:
-
-
- output = (input - mean) / stddev
-
-
In the following two cases, reset mean to 0 and stddev to 1 to bypass the
- normalization.
- 1. Both mean and stddev are 0.
- 2. mean is 0 and stddev is Infinity.
-
-
Note: If mean is set to 0 and stddev is set to 1, no computation will
- happen, and original input will be directly returned in execution.
-
-
Note: The returned TensorBuffer is always a DataType.FLOAT32 tensor at
- present, except when the input is a DataType.UINT8 tensor, mean is set to 0 and
- stddev is set to 1, so that the original DataType.UINT8 tensor is returned.
Initializes a NormalizeOp. When being called, it creates a new TensorBuffer, which
- satisfies:
-
-
- // Pseudo code. [...][i] means a certain element whose channel id is i.
- output[...][i] = (input[...][i] - mean[i]) / stddev[i]
-
-
Note: If all values in mean are set to 0 and all stddev are set to 1, no
- computation will happen, and original input will be directly returned in execution.
-
-
Note: The returned TensorBuffer is always a DataType.FLOAT32 tensor at
- present, except when the input is a DataType.UINT8 tensor, all mean values are set to
- 0, and all stddev values are set to 1 (in which case the original tensor is returned).
-
-
Parameters
-
-
-
mean
-
the mean values to be subtracted first for each channel.
-
-
-
stddev
-
the standard deviation values to then divide by, for each channel.
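-
A short sketch (the per-channel statistics are hypothetical):
-
TensorProcessor processor = new TensorProcessor.Builder()
-     .add(new NormalizeOp(new float[] {127.5f, 127.5f, 127.5f},
-                          new float[] {127.5f, 127.5f, 127.5f}))
-     .build();
- TensorBuffer normalized = processor.process(inputBuffer);
-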
Quantizes a TensorBuffer with given zeroPoint and scale.
-
-
Note: QuantizeOp does not cast output to UINT8, but only performs the quantization
- math on top of the input. The data type of the output tensor is always FLOAT32, except when
- the Op is effectively an identity Op (in this case, the output tensor is the same instance
- as the input). To connect with a quantized model, a CastOp is probably needed.
-
-
If both zeroPoint and scale are 0, the QuantizeOp will be bypassed,
- which is equivalent to setting zeroPoint to 0 and scale to 1. This can be useful
- when passing in the quantization parameters that are extracted directly from the TFLite model
- flatbuffer. If the tensor is not quantized, both zeroPoint and scale will be read
- as 0.
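-
A short sketch (parameters hypothetical), with a CastOp added to produce UINT8 for a
- quantized model:
-
TensorProcessor processor = new TensorProcessor.Builder()
-     .add(new QuantizeOp(128f /* zeroPoint */, 1 / 128f /* scale */))
-     .add(new CastOp(DataType.UINT8))
-     .build();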
-
-
public static final BoundingBoxUtil.Type BOUNDARIES
-
Represents the bounding box by using the combination of boundaries, {left, top, right,
- bottom}. The default order is {left, top, right, bottom}. Other orders can be indicated by an
- index array.
-
public static final BoundingBoxUtil.Type CENTER
-
Represents the bounding box by using the center of the box, width and height. The default
- order is {center_x, center_y, width, height}. Other orders can be indicated by an index
- array.
-
public static final BoundingBoxUtil.Type UPPER_LEFT
-
Represents the bounding box by using the upper_left corner, width and height. The default
- order is {upper_left_x, upper_left_y, width, height}. Other orders can be indicated by an
- index array.
-
Helper class for converting values that represent bounding boxes into rectangles.
-
-
The class provides a static function to create bounding boxes as RectF from different types of configurations.
-
-
Generally, a bounding box could be represented by 4 float values, but the values could be
- interpreted in many ways. We now support 3 types of configurations (see
- BoundingBoxUtil.Type), and the order of elements in each type is configurable as well.
-
Creates a list of bounding boxes from a TensorBuffer which represents bounding boxes.
-
-
Parameters
-
-
-
tensor
-
holds the data representing some boxes.
-
-
-
valueIndex
-
denotes the order of the elements defined in each bounding box type. An empty
- index array represents the default order of each bounding box type. For example, to denote
- the default order of BOUNDARIES, {left, top, right, bottom}, the index should be {0, 1, 2,
- 3}. To denote the order {left, right, top, bottom}, the order should be {0, 2, 1, 3}.
-
The index array can be applied to all bounding box types to adjust the order of their
- corresponding underlying elements.
-
-
-
boundingBoxAxis
-
specifies the index of the dimension that represents bounding box. The
- size of that dimension is required to be 4. Index here starts from 0. For example, if the
- tensor has shape 4x10, the axis for bounding boxes is likely to be 0. Negative axis is also
- supported: -1 gives the last axis and -2 gives the second-to-last, etc. For shape 10x4,
- the axis is likely to be 1 (or -1, equivalently).
A list of bounding boxes that the tensor represents. All dimensions except
- boundingBoxAxis will be collapsed with order kept. For example, given a tensor with shape
- {1, 4, 10, 2} and boundingBoxAxis = 1, the result will be a list
- of 20 bounding boxes.
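-
A sketch of a convert call (the arguments are hypothetical; the coordinate type and the
- height/width depend on the model):
-
List<RectF> boxes = BoundingBoxUtil.convert(
-     tensor, new int[0] /* default order */, -1 /* boundingBoxAxis */,
-     BoundingBoxUtil.Type.BOUNDARIES, BoundingBoxUtil.CoordinateType.RATIO,
-     300 /* height */, 300 /* width */);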
-
public static final ColorSpaceType GRAYSCALE
-
Each pixel is a single element representing only the amount of light.
-
-
public static final ColorSpaceType NV12
-
YUV420sp format, encoded as "YYYYYYYY UVUV".
-
-
public static final ColorSpaceType NV21
-
YUV420sp format, encoded as "YYYYYYYY VUVU", the standard picture format on Android Camera1
- preview.
-
public static final ColorSpaceType RGB
-
Each pixel has red, green, and blue color components.
-
public static final ColorSpaceType YUV_420_888
-
YUV420 format corresponding to ImageFormat.YUV_420_888. The actual
- encoding format (i.e. NV12 / NV21 / YV12 / YV21) depends on the implementation of the image.
-
-
ImageProcessor is a helper class for preprocessing and postprocessing TensorImage. It
- could transform a TensorImage to another by executing a chain of ImageOperator.
-
-
WARNING: Instances of an ImageProcessor are not thread-safe with updateNumberOfRotations(int). Updating the number of rotations and then processing images (using
- SequentialProcessor.process(T)) must be protected from concurrent access. It is recommended to create separate
- ImageProcessor instances for each thread. If multiple threads access a ImageProcessor concurrently, it must be synchronized externally.
WARNING: this method is not thread-safe. Updating the number of rotations and
- then processing images (using SequentialProcessor.process(T)) must be protected from concurrent access with
- additional synchronization.
-
public synchronized void updateNumberOfRotations(int k, int occurrence)
-
Updates the number of rotations for the Rot90Op specified by occurrence in this
- ImageProcessor.
-
-
WARNING: this method is not thread-safe. Updating the number of rotations and
- then processing images (using SequentialProcessor.process(T)) must be protected from concurrent access with
- additional synchronization.
-
-
Parameters
-
-
-
k
-
the number of rotations
-
-
-
occurrence
-
the index of the particular Rot90Op in this ImageProcessor. For
- example, if the second Rot90Op needs to be updated, occurrence should be
- set to 1.
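-
A minimal sketch of building and updating a processor (sizes are hypothetical):
-
ImageProcessor imageProcessor = new ImageProcessor.Builder()
-     .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
-     .add(new Rot90Op(1))
-     .build();
- imageProcessor.updateNumberOfRotations(3, 0);  // Updates the first (occurrence 0) Rot90Op.
- TensorImage processed = imageProcessor.process(tensorImage);
-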
IMPORTANT: The returned TensorImage shares storage with mlImage, so do not
- modify the contained object in the TensorImage, as MlImage expects its
- contained data to be immutable. Also, callers should use MlImage#getInternal()#acquire()
- and MlImage#release() to avoid the mlImage being released unexpectedly.
TensorImage is the wrapper class for Image object. When using image processing utils in
- TFLite.support library, it's common to convert image objects in variant types to TensorImage at
- first.
-
-
At present, only RGB images are supported, and the A channel is always ignored.
-
-
Details of data storage: a TensorImage object may have 2 potential sources of truth: a
- Bitmap or a TensorBuffer. TensorImage maintains the
- state and only converts one to the other when needed. A typical use case of TensorImage
- is to first load a Bitmap image, then process it using ImageProcessor, and finally get the underlying ByteBuffer of the TensorBuffer
- and feed it into the TFLite interpreter.
-
-
IMPORTANT: to achieve the best performance, TensorImage avoids copying data whenever
- it's possible. Therefore, it doesn't own its data. Callers should not modify data objects
- that are passed to load(Bitmap) or load(TensorBuffer, ColorSpaceType).
-
-
IMPORTANT: the methods are not guaranteed to be thread-safe.
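-
A typical flow, as a sketch (a UINT8 model input and an existing ImageProcessor are assumed):
-
TensorImage tensorImage = new TensorImage(DataType.UINT8);
- tensorImage.load(bitmap);
- TensorImage processed = imageProcessor.process(tensorImage);
- ByteBuffer modelInput = processed.getBuffer();
-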
Note: the shape of a TensorImage is not fixed. It can be adjusted to the shape of
- the image being loaded to this TensorImage.
-
-
Parameters
-
-
-
dataType
-
the expected data type of the resulting TensorBuffer. The type is
- always fixed during the lifetime of the TensorImage. To convert the data type, use
- createFrom(TensorImage, DataType) to create a copy and convert data type at the
- same time.
Numeric casting and clamping will be applied if the stored data is not uint8.
-
-
Note that the reliable way to get pixels from an ALPHA_8 Bitmap is to use
- copyPixelsToBuffer. Bitmap methods such as `setPixels()` and `getPixels()` do not work.
-
-
Important: it's only a reference. DO NOT MODIFY. We don't create a copy here for performance
- concern, but if modification is necessary, please make a copy.
-
-
Returns
-
a reference to a Bitmap in ARGB_8888 config ("A"
- channel is always opaque) or in ALPHA_8, depending on the ColorSpaceType of
- this TensorBuffer.
Returns a ByteBuffer representation of this TensorImage with the expected data
- type.
-
-
Numeric casting and clamping will be applied if the stored data is different from the data
- type of the TensorImage.
-
-
Important: it's only a reference. DO NOT MODIFY. We don't create a copy here for performance
- concern, but if modification is necessary, please make a copy.
-
-
It's essentially a shortcut for getTensorBuffer().getBuffer().
-
-
Returns
-
a reference to a ByteBuffer which holds the image data
This method only works when the TensorImage is backed by an Image, meaning you need to first load an Image through
- load(Image).
-
-
Important: it's only a reference. DO NOT MODIFY. We don't create a copy here for performance
- concern, but if modification is necessary, please make a copy.
-
-
Returns
-
a reference to a Bitmap in ARGB_8888 config ("A"
- channel is always opaque) or in ALPHA_8, depending on the ColorSpaceType of
- this TensorBuffer.
Returns a TensorBuffer representation of this TensorImage with the expected
- data type.
-
-
Numeric casting and clamping will be applied if the stored data is different from the data
- type of the TensorImage.
-
-
Important: it's only a reference. DO NOT MODIFY. We don't create a copy here for performance
- concern, but if modification is necessary, please make a copy.
-
-
Returns
-
a reference to a TensorBuffer which holds the image data
Note: if the data type of buffer does not match that of this TensorImage,
- numeric casting and clamping will be applied when calling getTensorBuffer() and getBuffer().
-
-
Parameters
-
-
-
buffer
-
the TensorBuffer to be loaded. Its shape should be either (h, w, 3) or
- (1, h, w, 3) for RGB images, and either (h, w) or (1, h, w) for GRAYSCALE images
Important: when loading a bitmap, DO NOT MODIFY the bitmap from the caller side anymore. The
- TensorImage object will rely on the bitmap. It will probably modify the bitmap as well.
- In this method, we perform a zero-copy approach for that bitmap, by simply holding its
- reference. Use bitmap.copy(bitmap.getConfig(), true) to create a copy if necessary.
-
-
Note: to get the best performance, please load images in the same shape to avoid memory
- re-allocation.
Loads an int array as RGB pixels into this TensorImage, representing the pixels inside.
-
-
Note: numeric casting and clamping will be applied to convert the values into the data type
- of this TensorImage when calling getTensorBuffer() and getBuffer().
-
-
Parameters
-
-
-
pixels
-
the RGB pixels representing the image
-
-
-
shape
-
the shape of the image, which should be either in the form (h, w, 3) or (1, h, w, 3)
Note: if the data type of buffer does not match that of this TensorImage,
- numeric casting and clamping will be applied when calling getTensorBuffer() and getBuffer().
The shape of the TensorBuffer will not be used to determine image height and width.
- Set image properties through ImageProperties.
-
-
Note: if the data type of buffer does not match that of this TensorImage,
- numeric casting and clamping will be applied when calling getTensorBuffer() and getBuffer().
As a computation unit for processing images, it could resize an image to a predefined size.
-
-
It will not stretch or compress the content of the image. However, to fit the new size, it
- crops or pads pixels. When it crops the image, it performs a center-crop; when it pads
- pixels, it performs a zero-padding.
Applies an operation on a T object, returning a T object.
-
-
Public Constructors
-
public Rot90Op()
-
Creates a Rot90 Op which will rotate the image by 90 degrees counter-clockwise.
-
public Rot90Op(int k)
-
Creates a Rot90 Op which will rotate the image counter-clockwise by 90 degrees, k times.
-
-
Parameters
-
-
-
k
-
The number of times the image is rotated by 90 degrees. If it's positive, the image
- will be rotated counter-clockwise. If it's negative, the op will rotate the image clockwise.
-
The conversion is based on OpenCV RGB to GRAY conversion
- https://docs.opencv.org/master/de/d25/imgproc_color_conversions.html#color_convert_rgb_gray
-
Category is a util class that contains a label, its display name, a float value as score,
- and the index of the label in the corresponding label file. Typically it's used as the
- result of classification tasks.
-
the display name of the label, which may be translated for different
- locales. For example, a label, "apple", may be translated into Spanish for display purposes,
- so that the displayName is "manzana".
-
-
-
score
-
the probability score of this label category
-
-
-
index
-
the index of the label in the corresponding label file
-
Maps an int value tensor to a list of string labels. It takes an array of strings as the
- dictionary. Example: if the given tensor is [3, 1, 0], and given labels is ["background",
- "apple", "banana", "cherry", "date"], the result will be ["date", "banana", "apple"].
-
-
Parameters
-
-
-
tensorBuffer
-
A tensor with index values. The values should be non-negative integers, and
- each value x will be converted to labels[x + offset]. If the tensor is
- given as a float TensorBuffer, values will be cast to integers. All values that are
- out of bounds will map to an empty string.
-
-
-
labels
-
A list of strings, used as a dictionary to look up. The index of the array
- element will be used as the key. To get better performance, use an object that implements
- RandomAccess, such as ArrayList.
-
-
-
offset
-
The offset value used when looking up int values in the labels.
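-
A sketch matching the example above (the input tensor variable is hypothetical):
-
List<String> labels = Arrays.asList("background", "apple", "banana", "cherry", "date");
- List<String> mapped = LabelUtil.mapValueToLabels(indexTensor, labels, 1);
- // ["date", "banana", "apple"] for an input tensor of [3, 1, 0] with offset 1.
-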
TensorLabel is a util wrapper for TensorBuffers with meaningful labels on an axis.
-
-
For example, an image classification model may have an output tensor with shape as {1, 10},
- where 1 is the batch size and 10 is the number of categories. In fact, on the 2nd axis, we could
- label each sub-tensor with the name or description of each corresponding category. TensorLabel could help converting the plain Tensor in TensorBuffer into a map from
- predefined labels to sub-tensors. In this case, if provided 10 labels for the 2nd axis, TensorLabel could convert the original {1, 10} Tensor to a 10 element map, each value of which
- is Tensor in shape {} (scalar). Usage example:
-
-
- TensorBuffer outputTensor = ...;
- List<String> labels = FileUtil.loadLabels(context, labelFilePath);
- // labels the first axis with size greater than one
- TensorLabel labeled = new TensorLabel(labels, outputTensor);
- // If each sub-tensor has effectively size 1, we can directly get a float value
- Map<String, Float> probabilities = labeled.getMapWithFloatValue();
- // Or get sub-tensors, when each sub-tensor has elements more than 1
- Map<String, TensorBuffer> subTensors = labeled.getMapWithTensorBuffer();
-
-
Note: currently we only support tensor-to-map conversion for the first axis with size
- greater than 1.
Creates a TensorLabel object which is able to label on the axes of multi-dimensional tensors.
-
-
Parameters
-
-
-
axisLabels
-
A map, whose key is axis id (starting from 0) and value is corresponding
- labels. Note: The size of labels should be the same as the size of the tensor on that axis.
Throws IllegalArgumentException if any key in axisLabels is out of range (compared to
- the shape of tensorBuffer), or if any value (labels) has a different size from the
- tensorBuffer on the given dimension.
-
Creates a TensorLabel object which is able to label on one axis of multi-dimensional tensors.
-
-
Note: The labels are applied on the first axis whose size is larger than 1. For example, if
- the shape of the tensor is [1, 10, 3], the labels will be applied on axis 1 (id starting from
- 0), and size of axisLabels should be 10 as well.
-
-
Parameters
-
-
-
axisLabels
-
A list of labels, whose size should be the same as the size of the tensor on
- the to-be-labeled axis.
The axis of label should be effectively the last axis (which means every sub tensor
- specified by this axis should have a flat size of 1), so that each labelled sub tensor could be
- converted into a float value score. Example: A TensorLabel with shape {2, 5, 3}
- and axis 2 is valid. If axis is 1 or 0, it cannot be converted into a Category.
-
-
Gets a map that maps label to float. The mapping is only allowed on the first axis with
- size greater than 1, and the axis should be effectively the last axis (which means every
- sub tensor specified by this axis should have a flat size of 1).
-
-
Gets the map with a pair of the label and the corresponding TensorBuffer. The mapping is
- currently only allowed on the first axis with size greater than 1.
-
Some models contain a TFLite Metadata Flatbuffer, which records more information about what
- the model does and how to interpret the model. TFLite Metadata Flatbuffer can be generated using
- the TFLite
- Metadata schema file.
-
It is allowed to pass in a model FlatBuffer without TFLite metadata. However, invoking methods
- that read from TFLite metadata will cause runtime errors.
-
-
Similarly, it is allowed to pass in a model FlatBuffer without associated files. However,
- invoking methods that read the associated files will cause runtime errors.
-
-
Returns true if the minimum parser version required by the given metadata flatbuffer
- precedes or equals the version of the metadata parser that this MetadataExtractor library is
- relying on.
Returns true if the model has metadata. Otherwise, returns false.
-
public final boolean isMinimumParserVersionSatisfied()
-
Returns true if the minimum parser version required by the given metadata flatbuffer
- precedes or equals the version of the metadata parser that this MetadataExtractor library is
- relying on. All fields in the metadata can be parsed correctly with this metadata extractor
- library in this case. Otherwise, it returns false.
-
-
For example, assume the underlying metadata parser version is 1.14.1,
-
-
-
it returns true, if the required minimum parser version is the same or older,
- such as 1.14.1 or 1.14.0. Null version precedes all numeric versions,
- because some metadata flatbuffers are generated before the first versioned release;
-
it returns false, if the required minimum parser version is newer, such as 1.14.2.
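-
A minimal sketch of guarding metadata reads (the model buffer source is hypothetical):
-
MetadataExtractor extractor = new MetadataExtractor(modelBuffer);
- if (extractor.hasMetadata() && extractor.isMinimumParserVersionSatisfied()) {
-   // All metadata fields can be parsed correctly by this library version.
- }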
-
public static final String VERSION
-
The version of the metadata parser that this metadata extractor library is depending on. The
- value should match the value of "Schema Semantic version" in metadata_schema.fbs.
-
Runs model inference on multiple inputs, and returns multiple outputs.
-
-
Parameters
-
-
-
inputs
-
an array of input data. The inputs should be in the same order as inputs of the
- model. Each input can be an array or multidimensional array, or a ByteBuffer of primitive types including int, float, long, and byte. ByteBuffer is the preferred way to pass large input data, whereas string types
- require using the (multi-dimensional) array input path. When ByteBuffer is
- used, its content should remain unchanged until model inference is done.
-
-
-
outputs
-
a map mapping output indices to multidimensional arrays of output data or ByteBuffers of primitive types including int, float, long, and byte. It only
- needs to keep entries for the outputs to be used.
-
Dynamic TensorBuffers will reallocate memory when loading arrays or data buffers of
- different buffer sizes. Here are some examples:
-
-
- // Creating a float dynamic TensorBuffer:
- TensorBuffer tensorBuffer = TensorBuffer.createDynamic(DataType.FLOAT32);
- // Loading a float array:
- float[] arr1 = new float[] {1, 2, 3};
- tensorBuffer.loadArray(arr1, new int[] {arr1.length});
- // Loading another float array:
- float[] arr2 = new float[] {1, 2, 3, 4, 5};
- tensorBuffer.loadArray(arr2, new int[] {arr2.length});
- // Loading a third float array with the same size as arr2, assuming the shape doesn't change:
- float[] arr3 = new float[] {5, 4, 3, 2, 1};
- tensorBuffer.loadArray(arr3);
- // Loading a fourth float array with a different size from arr3 and omitting the shape will
- // result in an error:
- float[] arr4 = new float[] {3, 2, 1};
- tensorBuffer.loadArray(arr4); // Error: The size of byte buffer and the shape do not match.
-
Returns a float array of the values stored in this buffer. If the buffer is of different types
- than float, the values will be converted into float. For example, values in TensorBufferUint8 will be converted from uint8 to float.
-
Returns a float value at a given index. If the buffer is of different types than float, the
- value will be converted into float. For example, when reading a value from TensorBufferUint8, the value will be first read out as uint8, and then will be converted from
- uint8 to float.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- float v = tensorBuffer.getFloatValue(3);
-
Returns an int array of the values stored in this buffer. If the buffer is of different type
- than int, the values will be converted into int, and loss of precision may apply. For example,
- getting an int array from a TensorBufferFloat with values {400.32f, 23.04f}, the output
- is {400, 23}.
-
-
public abstract int getIntValue(int absIndex)
-
Returns an int value at a given index. If the buffer is of different types than int, the value
- will be converted into int. For example, when reading a value from TensorBufferFloat,
- the value will be first read out as float, and then will be converted from float to int. Loss
- of precision may apply.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- int v = tensorBuffer.getIntValue(3);
- Note that v is converted from 3.0f to 3 as a result of type conversion.
-
-
-
Parameters
-
-
-
absIndex
-
The absolute index of the value to be read.
-
-
public int[] getShape()
-
Gets the current shape. (returning a copy here to avoid unexpected modification.)
Loads an int array into this buffer with specific shape. If the buffer is of different types
- than int, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading an int array with values {400,
- -23} into a TensorBufferUint8, the values will be clamped to [0, 255] and then be
- cast to uint8 as {255, 0}.
Loads a float array into this buffer with specific shape. If the buffer is of different types
- than float, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading a float array into a TensorBufferUint8 with values {400.32f, -23.04f}, the values will be clamped to [0, 255] and
- then be cast to uint8 as {255, 0}.
Loads a float array into this buffer. If the buffer is of different types than float, the
- values will be converted into the buffer's type before being loaded into the buffer, and loss
- of precision may apply. For example, loading a float array into a TensorBufferUint8
- with values {400.32f, -23.04f}, the values will be clamped to [0, 255] and then be cast
- to uint8 as {255, 0}.
-
-
Using this method assumes that the shape of src is the same as the shape of this
- TensorBuffer. Thus the size of buffer (src.length) should always match
- the flat size of this TensorBuffer, for both fixed-size and dynamic TensorBuffer. Use loadArray(float[], int[]) if src has a different shape.
Loads an int array into this buffer. If the buffer is of different types than int, the values
- will be converted into the buffer's type before being loaded into the buffer, and loss of
- precision may apply. For example, loading an int array with values {400, -23} into a
- TensorBufferUint8, the values will be clamped to [0, 255] and then be cast to uint8 as
- {255, 0}.
-
-
Using this method assumes that the shape of src is the same as the shape of this
- TensorBuffer. Thus the size of buffer (src.length) should always match
- the flat size of this TensorBuffer, for both fixed-size and dynamic TensorBuffer. Use loadArray(int[], int[]) if src has a different shape.
Loads a byte buffer into this TensorBuffer. Buffer size must match the flat size of
- this TensorBuffer.
-
-
Using this method assumes that the shape of buffer is the same as the shape of this
- TensorBuffer. Thus the size of buffer (buffer.limit()) should always
- match the flat size of this TensorBuffer, for both fixed-size and dynamic TensorBuffer. Use loadBuffer(ByteBuffer, int[]) if buffer has a different
- shape.
-
-
Important: The loaded buffer is a reference. DO NOT MODIFY. We don't create a copy here for
- performance concern, but if modification is necessary, please make a copy.
-
-
For the best performance, always load a direct ByteBuffer or a ByteBuffer
- backed by an array.
-
-
If the buffer is read-only, we adopt a copy-on-write strategy for performance.
Loads a byte buffer into this TensorBuffer with specific shape.
-
-
Important: The loaded buffer is a reference. DO NOT MODIFY. We don't create a copy here for
- performance concern, but if modification is necessary, please make a copy.
-
-
For the best performance, always load a direct ByteBuffer or a ByteBuffer
- backed by an array.
Returns a float array of the values stored in this buffer. If the buffer is of different types
- than float, the values will be converted into float. For example, values in TensorBufferUint8 will be converted from uint8 to float.
-
Returns a float value at a given index. If the buffer is of different types than float, the
- value will be converted into float. For example, when reading a value from TensorBufferUint8, the value will be first read out as uint8, and then will be converted from
- uint8 to float.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- float v = tensorBuffer.getFloatValue(3);
-
-
-
Parameters
-
-
-
absIndex
-
The absolute index of the value to be read.
-
-
public int[] getIntArray()
-
Returns an int array of the values stored in this buffer. If the buffer is of different type
- than int, the values will be converted into int, and loss of precision may apply. For example,
- getting an int array from a TensorBufferFloat with values {400.32f, 23.04f}, the output
- is {400, 23}.
-
public int getIntValue(int absIndex)
-
Returns an int value at a given index. If the buffer is of different types than int, the value
- will be converted into int. For example, when reading a value from TensorBufferFloat,
- the value will be first read out as float, and then will be converted from float to int. Loss
- of precision may apply.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- int v = tensorBuffer.getIntValue(3);
- Note that v is converted from 3.0f to 3 as a result of type conversion.
-
-
-
Parameters
-
-
-
absIndex
-
The absolute index of the value to be read.
-
-
public int getTypeSize()
-
Returns the number of bytes of a single element in the array. For example, a float buffer will
- return 4, and a byte buffer will return 1.
-
Loads an int array into this buffer with specific shape. If the buffer is of different types
- than int, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading an int array with values {400,
- -23} into a TensorBufferUint8, the values will be clamped to [0, 255] and then be
- cast to uint8 as {255, 0}.
Loads a float array into this buffer with specific shape. If the buffer is of different types
- than float, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading a float array into a TensorBufferUint8 with values {400.32f, -23.04f}, the values will be clamped to [0, 255] and
- then be cast to uint8 as {255, 0}.
Returns a float array of the values stored in this buffer. If the buffer is of different types
- than float, the values will be converted into float. For example, values in TensorBufferUint8 will be converted from uint8 to float.
-
Returns a float value at a given index. If the buffer is of different types than float, the
- value will be converted into float. For example, when reading a value from TensorBufferUint8, the value will be first read out as uint8, and then will be converted from
- uint8 to float.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- float v = tensorBuffer.getFloatValue(3);
-
-
-
Parameters
-
-
-
index
-
The absolute index of the value to be read.
-
-
public int[] getIntArray()
-
Returns an int array of the values stored in this buffer. If the buffer is of different type
- than int, the values will be converted into int, and loss of precision may apply. For example,
- getting an int array from a TensorBufferFloat with values {400.32f, 23.04f}, the output
- is {400, 23}.
-
public int getIntValue(int index)
-
Returns an int value at a given index. If the buffer is of different types than int, the value
- will be converted into int. For example, when reading a value from TensorBufferFloat,
- the value will be first read out as float, and then will be converted from float to int. Loss
- of precision may apply.
-
-
- For example, a TensorBuffer with shape {2, 3} that represents the following array,
- [[0.0f, 1.0f, 2.0f], [3.0f, 4.0f, 5.0f]].
-
- The fourth element (whose value is 3.0f) in the TensorBuffer can be retrieved by:
- int v = tensorBuffer.getIntValue(3);
- Note that v is converted from 3.0f to 3 as a result of type conversion.
-
-
-
Parameters
-
-
-
index
-
The absolute index of the value to be read.
-
-
public int getTypeSize()
-
Returns the number of bytes of a single element in the array. For example, a float buffer will
- return 4, and a byte buffer will return 1.
-
Loads an int array into this buffer with specific shape. If the buffer is of different types
- than int, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading an int array with values {400,
- -23} into a TensorBufferUint8, the values will be clamped to [0, 255] and then be
- cast to uint8 as {255, 0}.
Loads a float array into this buffer with specific shape. If the buffer is of different types
- than float, the values will be converted into the buffer's type before being loaded into the
- buffer, and loss of precision may apply. For example, loading a float array into a TensorBufferUint8 with values {400.32f, -23.04f}, the values will be clamped to [0, 255] and
- then be cast to uint8 as {255, 0}.
If non-empty, classifications whose label is not in this set will be filtered out.
- Duplicate or unknown labels are ignored. Mutually exclusive with labelDenyList.
-
If non-empty, classifications whose label is in this set will be filtered out. Duplicate
- or unknown labels are ignored. Mutually exclusive with labelAllowList.
-
Performs actual classification on the provided audio tensor.
-
-
Parameters
-
-
-
tensor
-
a TensorAudio containing the input audio clip, with float values
- in [-1, 1). The tensor argument should have the same flat size as the TFLite
- model's input tensor. It's recommended to create tensor using createInputTensorAudio method.
Creates an AudioRecord instance to record the audio stream. The returned
- AudioRecord instance is initialized, and the client needs to call
- AudioRecord.startRecording() to start recording.
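-
A sketch of the end-to-end flow (the model asset name is hypothetical):
-
AudioClassifier classifier = AudioClassifier.createFromFile(context, "sound_model.tflite");
- TensorAudio tensor = classifier.createInputTensorAudio();
- AudioRecord record = classifier.createAudioRecord();
- record.startRecording();
- tensor.load(record);
- List<Classifications> results = classifier.classify(tensor);
-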
The classification results of one head in a multihead (a.k.a. multi-output) AudioClassifier. A multihead AudioClassifier can perform classification for multiple
- purposes, such as a fine grained classifier to distinguish different bird sounds.
-
public static final enum ImageProcessingOptions.Orientation
-
Orientation type that follows the EXIF specification.
-
-
The name of each enum value defines the position of the 0th row and the 0th column of the
- image content. See the EXIF orientation
- documentation for details.
-
public abstract class ImageProcessingOptions
-
Options to configure the image processing pipeline, which operates before inference.
-
-
The Task Library Vision API performs image preprocessing on the input image over the region of
- interest, so that it fits model requirements (e.g. upright 224x224 RGB), and populates the
- corresponding input tensor. This is performed by (in this order):
-
-
-
cropping the frame buffer to the region of interest (which, in most cases, just covers the
- entire input image),
-
resizing it (with bilinear interpolation, aspect-ratio *not* preserved) to the dimensions
- of the model input tensor,
-
converting it to the colorspace of the input tensor (i.e. RGB, which is the only supported
- colorspace for now),
-
IMPORTANT: as a consequence of cropping occurring first, the provided region of interest is
- expressed in the unrotated frame of reference coordinates system, i.e. in [0,
- TensorImage.getWidth()) x [0, TensorImage.getHeight()), which are the dimensions of the
- underlying image data before any orientation gets applied. If the region is out of these bounds,
- the inference method, such as ImageClassifier.classify(MlImage), will return an error.
-
Initializes the TFLite Tasks Audio API. TFLite Tasks Audio API methods should only be called
- after the task returned by this method has successfully completed.
-
-
This method returns a Task<Void>, so you should wait for the task to be completed,
- but the return value of the Task is irrelevant.
-
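A sketch, assuming the Play Services initializer class TfLiteAudio:
-
TfLiteAudio.initialize(context).addOnSuccessListener(unused -> {
-   // Safe to create audio task objects here.
- });
-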
Initializes the TFLite Tasks Audio API with the specified options. TFLite Tasks Audio API
- methods should only be called after the task returned by this method has successfully
- completed.
-
-
This method returns a Task<Void>, so you should wait for the task to be completed,
- but the return value of the Task is irrelevant.
-
Initializes the TFLite Tasks Text API. TFLite Tasks Text API methods should only be called
- after the task returned by this method has successfully completed.
-
-
This method returns a Task<Void>, so you should wait for the task to be completed,
- but the return value of the Task is irrelevant.
-
Initializes the TFLite Tasks Text API with the specified options. TFLite Tasks Text API methods
- should only be called after the task returned by this method has successfully completed.
-
-
This method returns a Task<Void>, so you should wait for the task to be completed,
- but the return value of the Task is irrelevant.
-
Sets whether to normalize the embedding feature vector with L2 norm. Defaults to false.
-
-
Use this option only if the model does not already contain a native L2_NORMALIZATION
- TFLite Op. In most cases, this is already the case and L2 norm is thus achieved through
- TFLite inference.
-
Sets whether the embedding should be quantized to bytes via scalar quantization. Defaults to
- false.
-
-
Embeddings are implicitly assumed to be unit-norm and therefore any dimension is
- guaranteed to have a value in [-1.0, 1.0]. Use the l2_normalize option if this is not
- the case.
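-
A sketch, assuming the task-library EmbeddingOptions builder carries these two settings:
-
EmbeddingOptions options = EmbeddingOptions.builder()
-     .setL2Normalize(true)
-     .setQuantize(false)
-     .build();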
-
Classifier API for NLClassification tasks with Bert models; it categorizes strings into
- different classes. The API expects a Bert based TFLite model with metadata populated.
-
-
The metadata should contain the following information:
-
-
-
1 input_process_unit for Wordpiece/Sentencepiece Tokenizer.
-
3 input tensors with names "ids", "mask" and "segment_ids".
-
1 output tensor of type float32[1, 2], with an optionally attached label file. If a label
- file is attached, the file should be a plain text file with one label per line, the number
- of labels should match the number of categories the model outputs.
-
Set the index of the input text tensor among all input tensors, if the model has multiple
- inputs. Only the input tensor specified will be used for inference; other input tensors
- will be ignored. Defaults to 0.
-
-
See the section, Configure the input/output tensors for NLClassifier, for more details.
-
Set the name of the input text tensor, if the model has multiple inputs. Only the input
- tensor specified will be used for inference; other input tensors will be ignored. Defaults
- to "INPUT".
-
-
See the section, Configure the input/output tensors for NLClassifier, for more details.
-
Set the name of the output label tensor, if the model has multiple outputs. Defaults to
- "OUTPUT_LABEL".
-
-
See the section, Configure the input/output tensors for NLClassifier, for more details.
-
-
By default, label file should be packed with the output score tensor through Model
- Metadata. See the MetadataWriter
- for NLClassifier. NLClassifier reads and parses labels from the label file
- automatically. However, some models may output a specific label tensor instead. In this
- case, NLClassifier reads labels from the output label tensor.
-
Output score tensor: output scores for each class. If the type is one of the Int types,
- dequantize it; if it is Bool type, convert the values to 0.0 and 1.0 respectively. The
- tensor can have an optional associated file in metadata for labels; the file should be a
- plain text file with one label per line, and the number of labels should match the number
- of categories the model outputs.
-
Output label tensor: optional (kTfLiteString) - output classname for each class, which
- should be of the same length as the scores. If this tensor is not present, the API uses
- score indices as classnames. This tensor will be ignored if the output score tensor
- already has an associated label file.
-
-
-
By default the API tries to find the input/output tensors with default configurations in
- NLClassifier.NLClassifierOptions, with tensor name prioritized over tensor index. The option is
- configurable for different TFLite models.
-
Returns the most likely answers to a given question for QA models (BERT, Albert, etc.).
-
-
The API expects a Bert based TFLite model with metadata containing the following information:
-
-
-
input_process_units for Wordpiece/Sentencepiece Tokenizer - Wordpiece Tokenizer can be used
- for a MobileBert model, and Sentencepiece Tokenizer can be used for an Albert model.
-
3 input tensors with names "ids", "mask" and "segment_ids".
-
2 output tensors with names "end_logits" and "start_logits".
-
The API expects a TFLite model with optional, but strongly recommended, TFLite Model Metadata.
-
-
The API expects a TFLite model with metadata populated. The metadata should contain the
- following information:
-
-
-
For Bert based TFLite model:
-
-
3 input tensors of type kTfLiteString with names "ids", "mask" and "segment_ids".
-
input_process_units for Wordpiece/Sentencepiece Tokenizer
-
exactly one output tensor of type kTfLiteFloat32
-
-
For Regex based TFLite model:
-
-
1 input tensor.
-
input_process_units for RegexTokenizer
-
exactly one output tensor of type kTfLiteFloat32
-
-
For Universal Sentence Encoder based TFLite model:
-
-
3 input tensors with names "inp_text", "res_context" and "res_text"
-
2 output tensors with names "query_encoding" and "response_encoding" of type
- kTfLiteFloat32
-
-
-
TODO(b/180502532): add pointer to example model.
-
-
TODO(b/222671076): add factory create methods without options, such as `createFromFile`, once
- the single file format (index file packed in the model) is supported.
-
The classification results of one head in a multihead (a.k.a. multi-output) ImageClassifier. A multihead ImageClassifier can perform classification for multiple
- purposes, such as a fine-grained classifier to describe apparel items (e.g. color, material,
- type, etc.).
-
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
If non-empty, classifications whose label is not in this set will be filtered out.
- Duplicate or unknown labels are ignored. Mutually exclusive with labelDenyList.
-
If non-empty, classifications whose label is in this set will be filtered out. Duplicate
- or unknown labels are ignored. Mutually exclusive with labelAllowList.
-
-
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
-
-
Sets the number of threads to be used for TFLite ops that support multi-threading when
- running inference with CPU. Defaults to -1.
-
-
numThreads should be greater than 0 or equal to -1. Setting numThreads to -1 lets the
- TFLite runtime set the value.
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
Sets the maximum number of top-scored detection results to return.
-
-
If < 0, all available results will be returned. If 0, an invalid argument error is
- returned. Note that models may intrinsically be limited to returning a maximum number of
- results N: if the provided value here is above N, only N results will be returned. Defaults
- to -1.
-
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
-
-
Sets the number of threads to be used for TFLite ops that support multi-threading when
- running inference with CPU. Defaults to -1.
-
-
numThreads should be greater than 0 or equal to -1. Setting numThreads to -1 lets the
- TFLite runtime set the value.
image input of size [batch x height x width x channels].
-
batch inference is not supported (batch is required to be 1).
-
only RGB inputs are supported (channels is required to be 3).
-
if type is kTfLiteFloat32, NormalizationOptions are required to be attached
- to the metadata for input normalization.
-
-
Output tensor (kTfLiteUInt8/kTfLiteFloat32)
-
-
N components corresponding to the N dimensions of the returned
- feature vector for this output layer.
-
Either 2 or 4 dimensions, i.e. [1 x N] or [1 x 1 x 1 x N].
-
-
-
TODO(b/180502532): add pointer to example model.
-
-
TODO(b/222671076): add factory create methods without options, such as `createFromFile`, once
- the single file format (index file packed in the model) is supported.
-
the color components for the label. The Color instance is supported on Android
- API level 26 and above. For API levels lower than 26, use create(String, String, int). See Android
- Color instances for more details.
-
-
-
-
-
-
-
-
-
-
- public static ColoredLabel create(String label, String displayName, int argb)
-
-
-
-
-
-
Creates a ColoredLabel object with an ARGB color int.
-
-
Parameters
-
-
-
label
-
the label string, as provided in the label map packed in the TFLite Model
- Metadata.
Gets the Color instance of the underlying color.
-
-
The Color instance is supported on Android API level 26 and above. For API levels lower than
- 26, use getArgb(). See Android
- Color instances for more details.
-
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
-
- This method is deprecated. Use BaseOptions to configure the number of threads instead. This method
- will override the number of threads configured from BaseOptions.
-
-
-
Sets the number of threads to be used for TFLite ops that support multi-threading when
- running inference with CPU. Defaults to -1.
-
-
numThreads should be greater than 0 or equal to -1. Setting numThreads to -1 lets the
- TFLite runtime set the value.
tensor of size [batch x mask_height x mask_width x num_classes], where batch is required to be 1, mask_width and mask_height are the
- dimensions of the segmentation masks produced by the model, and num_classes
- is the number of classes supported by the model.
-
optional (but recommended) label map(s) can be attached as AssociatedFile-s with type
- TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if
- any) is used to fill the class name, i.e. ColoredLabel.getlabel() of the
- results. The display name, i.e. ColoredLabel.getDisplayName(), is filled from
- the AssociatedFile (if any) whose locale matches the `display_names_locale` field of
- the `ImageSegmenterOptions` used at creation time ("en" by default, i.e. English). If
- none of these are available, only the `index` field of the results will be filled.
-
a UINT8 TensorImage object that represents an RGB or YUV image
-
-
-
-
-
Returns
-
results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned. The result is stored in a List
- to allow later extension to e.g. instance segmentation models, which may return one segmentation
- per object.
Performs actual segmentation on the provided MlImage.
-
-
Parameters
-
-
-
image
-
an MlImage to segment.
-
-
-
-
-
Returns
-
results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned. The result is stored in a List
- to allow later extension to e.g. instance segmentation models, which may return one segmentation
- per object.
a UINT8 TensorImage object that represents an RGB or YUV image
-
-
-
options
-
the options that configure how the image is preprocessed
-
-
-
-
-
Returns
-
results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned. The result is stored in a List
- to allow later extension to e.g. instance segmentation models, which may return one segmentation
- per object.
the options that configure how the image is preprocessed.
-
-
-
-
-
Returns
-
results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned. The result is stored in a List
- to allow later extension to e.g. instance segmentation models, which may return one segmentation
- per object.
-
-
-
-Public API for tf.lite namespace.
-
-
-
-## Modules
-
-[`experimental`](../tf/lite/experimental) module: Public API for tf.lite.experimental namespace.
-
-## Classes
-
-[`class Interpreter`](../tf/lite/Interpreter): Interpreter interface for running TensorFlow Lite models.
-
-[`class OpsSet`](../tf/lite/OpsSet): Enum class defining the sets of ops available to generate TFLite models.
-
-[`class Optimize`](../tf/lite/Optimize): Enum defining the optimizations to apply when generating a tflite model.
-
-[`class RepresentativeDataset`](../tf/lite/RepresentativeDataset): Representative dataset used to optimize the model.
-
-[`class TFLiteConverter`](../tf/lite/TFLiteConverter): Converts a TensorFlow model into TensorFlow Lite model.
-
-[`class TargetSpec`](../tf/lite/TargetSpec): Specification of target device used to optimize the model.
diff --git a/site/en/lite/api_docs/python/tf/lite/Interpreter.md b/site/en/lite/api_docs/python/tf/lite/Interpreter.md
deleted file mode 100644
index 671ec34e5a..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/Interpreter.md
+++ /dev/null
@@ -1,778 +0,0 @@
-page_type: reference
-description: Interpreter interface for running TensorFlow Lite models.
-
-
-
-
-
-
-
-
-
-Models obtained from `TfLiteConverter` can be run in Python with
-`Interpreter`.
-
-As an example, let's generate a simple Keras model and convert it to TFLite
-(`TfLiteConverter` also supports other input formats with `from_saved_model`
-and `from_concrete_functions`).
-
-
-
-
-`tflite_model` can be saved to a file and loaded later, or directly into the
-`Interpreter`. Since TensorFlow Lite pre-plans tensor allocations to optimize
-inference, the user needs to call `allocate_tensors()` before any inference.
-
-
-```
-interpreter = tf.lite.Interpreter(model_content=tflite_model)
-interpreter.allocate_tensors()  # Needed before execution!
-```
-
-
-
-#### Sample execution:
-
-
-```
-output = interpreter.get_output_details()[0]  # Model has single output.
-input = interpreter.get_input_details()[0]  # Model has single input.
-input_data = tf.constant(1., shape=[1, 1])
-interpreter.set_tensor(input['index'], input_data)
-interpreter.invoke()
-interpreter.get_tensor(output['index']).shape
-(1, 1)
-```
-
-
-
-Use `get_signature_runner()` for a more user-friendly inference API.
-
-
-
-
-
Args
-
-
-
-`model_path`
-
-
-Path to TF-Lite Flatbuffer file.
-
-
-
-`model_content`
-
-
-Content of model.
-
-
-
-`experimental_delegates`
-
-
-Experimental. Subject to change. List of
-[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates)
- objects returned by lite.load_delegate().
-
-
-
-`num_threads`
-
-
-Sets the number of threads used by the interpreter and
-available to CPU kernels. If not set, the interpreter will use an
-implementation-dependent default number of threads. Currently, only a
-subset of kernels, such as conv, support multi-threading. num_threads
-should be >= -1. Setting num_threads to 0 disables
-multithreading, which is equivalent to setting num_threads to 1. If set
-to the value -1, the number of threads used will be
-implementation-defined and platform-dependent.
-
-
-
-`experimental_op_resolver_type`
-
-
-The op resolver used by the interpreter. It
-must be an instance of OpResolverType. By default, we use the built-in
-op resolver which corresponds to tflite::ops::builtin::BuiltinOpResolver
-in C++.
-
-
-
-`experimental_preserve_all_tensors`
-
-
-If true, then intermediate tensors used
-during computation are preserved for inspection, and if the passed op
-resolver type is AUTO or BUILTIN, the type will be changed to
-BUILTIN_WITHOUT_DEFAULT_DELEGATES so that no Tensorflow Lite default
-delegates are applied. If false, getting intermediate tensors could
-result in undefined values or None, especially when the graph is
-successfully modified by the Tensorflow Lite default delegate.
-
-A list in which each item is a dictionary with details about
-an input tensor. Each dictionary contains the following fields
-that describe the tensor:
-
-+ `name`: The tensor name.
-+ `index`: The tensor index in the interpreter.
-+ `shape`: The shape of the tensor.
-+ `shape_signature`: Same as `shape` for models with known/fixed shapes.
- If any dimension sizes are unknown, they are indicated with `-1`.
-
-+ `dtype`: The numpy data type (such as `np.int32` or `np.uint8`).
-+ `quantization`: Deprecated, use `quantization_parameters`. This field
- only works for per-tensor quantization, whereas
- `quantization_parameters` works in all cases.
-+ `quantization_parameters`: A dictionary of parameters used to quantize
- the tensor:
- ~ `scales`: List of scales (one if per-tensor quantization).
- ~ `zero_points`: List of zero_points (one if per-tensor quantization).
- ~ `quantized_dimension`: Specifies the dimension of per-axis
- quantization, in the case of multiple scales/zero_points.
-+ `sparsity_parameters`: A dictionary of parameters used to encode a
- sparse tensor. This is empty if the tensor is dense.
-
-A list in which each item is a dictionary with details about
-an output tensor. The dictionary contains the same fields as
-described for `get_input_details()`.
-
-
-Gets list of SignatureDefs in the model.
-
-Example,
-
-```
-signatures = interpreter.get_signature_list()
-print(signatures)
-
-# {
-#   'add': {'inputs': ['x', 'y'], 'outputs': ['output_0']}
-# }
-```
-
-Then, using the names in the signature list, you can get a callable from
-get_signature_runner().
-
-
-
-
-
Returns
-
-
-A list of SignatureDef details in a dictionary structure.
-It is keyed on the SignatureDef method name, and the value holds
-a dictionary of inputs and outputs.
-
-
-Gets callable for inference of specific SignatureDef.
-
-Example usage,
-
-```
-interpreter = tf.lite.Interpreter(model_content=tflite_model)
-interpreter.allocate_tensors()
-fn = interpreter.get_signature_runner('div_with_remainder')
-output = fn(x=np.array([3]), y=np.array([2]))
-print(output)
-# {
-# 'quotient': array([1.], dtype=float32)
-# 'remainder': array([1.], dtype=float32)
-# }
-```
-
-None can be passed for signature_key if the model has a single Signature
-only.
-
-All names used are the names defined in this specific SignatureDef.
-
-
-
-
-
-
Args
-
-
-
-`signature_key`
-
-
-Signature key for the SignatureDef, it can be None if and
-only if the model has a single SignatureDef. Default value is None.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-This returns a callable that can run inference for the SignatureDef defined
-by the argument 'signature_key'.
-The callable takes keyword arguments corresponding to the arguments of the
-SignatureDef, whose values should be numpy arrays.
-The callable returns a dictionary that maps output names to numpy
-values of the computed results.
-
-
-Gets the value of the output tensor (get a copy).
-
-If you wish to avoid the copy, use `tensor()`. This function cannot be used
-to read intermediate results.
-
-
-
-
-
Args
-
-
-
-`tensor_index`
-
-
-Tensor index of tensor to get. This value can be obtained from
-the 'index' field in get_output_details.
-
-
-
-`subgraph_index`
-
-
-Index of the subgraph to fetch the tensor. Default value
-is 0, which means to fetch from the primary subgraph.
-
-
-Gets tensor details for every tensor with valid tensor details.
-
-Tensors where required information about the tensor is not found are not
-added to the list. This includes temporary tensors without a name.
-
-
-
-
-
Returns
-
-
-A list of dictionaries containing tensor information.
-
-
-Invoke the interpreter.
-
-Be sure to set the input sizes, allocate tensors and fill values before
-calling this. Also, note that this function releases the GIL so heavy
-computation can be done in the background while the Python interpreter
-continues. No other function on this object should be called while the
-invoke() call has not finished.
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-Raised when the underlying interpreter fails.
-
-Tensor index of input to set. This value can be obtained from
-the 'index' field in get_input_details.
-
-
-
-`tensor_size`
-
-
-The tensor_shape to resize the input to.
-
-
-
-`strict`
-
-
-Only unknown dimensions can be resized when `strict` is True.
-Unknown dimensions are indicated as `-1` in the `shape_signature`
-attribute of a given tensor. (default False)
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-If the interpreter could not resize the input tensor.
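-
-As a brief, hypothetical sketch (the model path and the target shape below are
-placeholders, not values from this page):
-
-```
-interpreter = tf.lite.Interpreter(model_path='model.tflite')
-input_index = interpreter.get_input_details()[0]['index']
-interpreter.resize_tensor_input(input_index, [2, 224, 224, 3])
-interpreter.allocate_tensors()  # Re-allocate buffers after resizing.
-```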
-
-
-Sets the value of the input tensor.
-
-Note this copies data in `value`.
-
-If you want to avoid copying, you can use the `tensor()` function to get a
-numpy buffer pointing to the input buffer in the tflite interpreter.
-
-
-
-
-
Args
-
-
-
-`tensor_index`
-
-
-Tensor index of tensor to set. This value can be obtained from
-the 'index' field in get_input_details.
-
-
-Returns function that gives a numpy view of the current tensor buffer.
-
-This allows reading from and writing to this tensor's buffer without copies. This more
-closely mirrors the C++ Interpreter class interface's tensor() member, hence
-the name. Be careful to not hold these output references through calls
-to `allocate_tensors()` and `invoke()`. This function cannot be used to read
-intermediate results.
-
-#### Usage:
-
-
-
-```
-interpreter.allocate_tensors()
-input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
-output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
-for i in range(10):
- input().fill(3.)
- interpreter.invoke()
- print("inference %s" % output())
-```
-
-Notice how this function avoids making a numpy array directly. This is
-because it is important to not hold actual numpy views to the data longer
-than necessary. If you do, then the interpreter can no longer be invoked,
-because it is possible the interpreter would resize and invalidate the
-referenced tensors. The NumPy API doesn't allow any mutability of the
-underlying buffers.
-
-#### WRONG:
-
-
-
-```
-input = interpreter.tensor(interpreter.get_input_details()[0]["index"])()
-output = interpreter.tensor(interpreter.get_output_details()[0]["index"])()
-interpreter.allocate_tensors() # This will throw RuntimeError
-for i in range(10):
- input.fill(3.)
- interpreter.invoke() # this will throw RuntimeError since input,output
-```
-
-
-
-
-
Args
-
-
-
-`tensor_index`
-
-
-Tensor index of tensor to get. This value can be obtained from
-the 'index' field in get_output_details.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-A function that can return a new numpy array pointing to the internal
-TFLite tensor state at any point. It is safe to hold the function forever,
-but it is not safe to hold the numpy array forever.
-
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/OpsSet.md b/site/en/lite/api_docs/python/tf/lite/OpsSet.md
deleted file mode 100644
index fd0a22dca4..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/OpsSet.md
+++ /dev/null
@@ -1,75 +0,0 @@
-page_type: reference
-description: Enum class defining the sets of ops available to generate TFLite models.
-
-
-
-
-
-
-
-
-
-
-Enum defining the optimizations to apply when generating a tflite model.
-
-
-
-DEFAULT
- The default optimization strategy that enables post-training quantization.
- The type of post-training quantization that will be used is dependent on
- the other converter options supplied. Refer to the
- [documentation](/lite/performance/post_training_quantization) for further
- information on the types available and how to use them.
-
-OPTIMIZE_FOR_SIZE
- Deprecated. Does the same as DEFAULT.
-
-OPTIMIZE_FOR_LATENCY
- Deprecated. Does the same as DEFAULT.
-
-EXPERIMENTAL_SPARSITY
- Experimental flag, subject to change.
-
- Enable optimization by taking advantage of the sparse model weights
- trained with pruning.
-
- The converter will inspect the sparsity pattern of the model weights and
- do its best to improve size and latency.
- The flag can be used alone to optimize float32 models with sparse weights.
- It can also be used together with the DEFAULT optimization mode to
- optimize quantized models with sparse weights.
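-
-For illustration, a minimal sketch of how these options are typically passed to
-the converter (`saved_model_dir` is a placeholder):
-
-```
-converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
-converter.optimizations = [tf.lite.Optimize.DEFAULT]
-tflite_quant_model = converter.convert()
-```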
-
-
-
-
-
-
-
Class Variables
-
-
-
-DEFAULT
-
-
-``
-
-
-
-EXPERIMENTAL_SPARSITY
-
-
-``
-
-
-
-OPTIMIZE_FOR_LATENCY
-
-
-``
-
-
-
-OPTIMIZE_FOR_SIZE
-
-
-``
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/RepresentativeDataset.md b/site/en/lite/api_docs/python/tf/lite/RepresentativeDataset.md
deleted file mode 100644
index d6b693e2a5..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/RepresentativeDataset.md
+++ /dev/null
@@ -1,65 +0,0 @@
-page_type: reference
-description: Representative dataset used to optimize the model.
-
-
-
-
-
-
-
-
-
-
-Representative dataset used to optimize the model.
-
-
-tf.lite.RepresentativeDataset(
- input_gen
-)
-
-
-
-
-
-
-This is a generator function that provides a small dataset to calibrate or
-estimate the range, i.e., (min, max) of all floating-point arrays in the model
-(such as model input, activation outputs of intermediate layers, and model
-output) for quantization. Usually, this is a small subset of a few hundred
-samples randomly chosen, in no particular order, from the training or
-evaluation dataset.
-
-
-
-
-
Args
-
-
-
-`input_gen`
-
-
-A generator function that generates input samples for the
-model and has the same order, type and shape as the inputs to the model.
-Usually, this is a small subset of a few hundred samples randomly
-chosen, in no particular order, from the training or evaluation dataset.
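-
-A minimal sketch of such a generator (the input shape is a placeholder and must
-match your model):
-
-```
-import numpy as np
-
-def input_gen():
-  for _ in range(100):
-    # Each yielded element is a list matching the order, type and shape
-    # of the model inputs.
-    yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]
-
-converter.representative_dataset = tf.lite.RepresentativeDataset(input_gen)
-```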
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/TFLiteConverter.md b/site/en/lite/api_docs/python/tf/lite/TFLiteConverter.md
deleted file mode 100644
index 337210ecf0..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/TFLiteConverter.md
+++ /dev/null
@@ -1,535 +0,0 @@
-page_type: reference
-description: Converts a TensorFlow model into TensorFlow Lite model.
-
-
-
-
-
-
-
-
-
-
-#### Example usage:
-
-
-
-```python
-# Converting a SavedModel to a TensorFlow Lite model.
- converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
- tflite_model = converter.convert()
-
-# Converting a tf.Keras model to a TensorFlow Lite model.
-converter = tf.lite.TFLiteConverter.from_keras_model(model)
-tflite_model = converter.convert()
-
-# Converting ConcreteFunctions to a TensorFlow Lite model.
-converter = tf.lite.TFLiteConverter.from_concrete_functions([func], model)
-tflite_model = converter.convert()
-
-# Converting a Jax model to a TensorFlow Lite model.
-converter = tf.lite.TFLiteConverter.experimental_from_jax([func], [[
- ('input1', input1), ('input2', input2)]])
-tflite_model = converter.convert()
-```
-
-
-
-
-
Args
-
-
-
-`funcs`
-
-
-List of TensorFlow ConcreteFunctions. The list should not contain
-duplicate elements.
-
-
-
-`trackable_obj`
-
-
-tf.AutoTrackable object associated with `funcs`. A
-reference to this object needs to be maintained so that Variables do not
-get garbage collected since functions have a weak reference to
-Variables. This is only required when the tf.AutoTrackable object is not
-maintained by the user (e.g. `from_saved_model`).
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`optimizations`
-
-
-Experimental flag, subject to change. Set of optimizations to
-apply, e.g. {tf.lite.Optimize.DEFAULT}. (default None, must be None or a
-set of values of type tf.lite.Optimize)
-
-
-
-`representative_dataset`
-
-
-A generator function used for integer quantization
-where each generated sample has the same order, type and shape as the
-inputs to the model. Usually, this is a small subset of a few hundred
-samples randomly chosen, in no particular order, from the training or
-evaluation dataset. This is an optional attribute, but required for full
-integer quantization, i.e., if tf.int8 is the only supported type in
-`target_spec.supported_types`. Refer to tf.lite.RepresentativeDataset.
-(default None)
-
-
-
-`target_spec`
-
-
-Experimental flag, subject to change. Specifications of target
-device, including supported ops set, supported types and a set of user's
-defined TensorFlow operators required in the TensorFlow Lite runtime.
-Refer to tf.lite.TargetSpec.
-
-
-
-`inference_input_type`
-
-
-Data type of the input layer. Note that integer types
-(tf.int8 and tf.uint8) are currently only supported for post training
-integer quantization and quantization aware training. (default tf.float32,
-must be in {tf.float32, tf.int8, tf.uint8})
-
-
-
-`inference_output_type`
-
-
-Data type of the output layer. Note that integer
-types (tf.int8 and tf.uint8) are currently only supported for post
-training integer quantization and quantization aware training. (default
-tf.float32, must be in {tf.float32, tf.int8, tf.uint8})
-
-
-
-`allow_custom_ops`
-
-
-Boolean indicating whether to allow custom operations.
-When False, any unknown operation is an error. When True, custom ops are
-created for any op that is unknown. The developer needs to provide these
-to the TensorFlow Lite runtime with a custom resolver. (default False)
-
-
-
-`exclude_conversion_metadata`
-
-
-Whether not to embed the conversion metadata
-into the converted model. (default False)
-
-`experimental_new_quantizer`
-
-
-Experimental flag, subject to change. Enables
-MLIR-based quantization conversion instead of Flatbuffer-based conversion.
-(default True)
-
-
-
-`experimental_enable_resource_variables`
-
-
-Experimental flag, subject to
-change. Enables
-[resource variables](https://tensorflow.org/guide/migrate/tf1_vs_tf2#resourcevariables_instead_of_referencevariables)
-to be converted by this converter. This is only allowed if the
-from_saved_model interface is used. (default True)
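-
-As an illustrative sketch, a full-integer quantization setup typically combines
-these attributes (`input_gen` is a placeholder generator defined elsewhere):
-
-```
-converter.optimizations = [tf.lite.Optimize.DEFAULT]
-converter.representative_dataset = input_gen
-converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
-converter.inference_input_type = tf.int8
-converter.inference_output_type = tf.int8
-tflite_model = converter.convert()
-```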
-
-
-Creates a TFLiteConverter object from ConcreteFunctions.
-
-
-
-
-
-
Args
-
-
-
-`funcs`
-
-
-List of TensorFlow ConcreteFunctions. The list should not contain
-duplicate elements. Currently the converter can only convert a single
-ConcreteFunction. Converting multiple functions is under development.
-
-
-
-`trackable_obj`
-
-
- An `AutoTrackable` object (typically `tf.Module`)
-associated with `funcs`. A reference to this object needs to be
-maintained so that Variables do not get garbage collected since
-functions have a weak reference to Variables.
-
-
-Creates a TFLiteConverter object from a SavedModel directory.
-
-
-
-
-
-
Args
-
-
-
-`saved_model_dir`
-
-
-SavedModel directory to convert.
-
-
-
-`signature_keys`
-
-
-List of keys identifying SignatureDef containing inputs
-and outputs. Elements should not be duplicated. By default the
-`signatures` attribute of the MetaGraphDef is used. (default
-saved_model.signatures)
-
-
-
-`tags`
-
-
-Set of tags identifying the MetaGraphDef within the SavedModel to
-analyze. All tags in the tag set must be present. (default
-{tf.saved_model.SERVING} or {'serve'})
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-TFLiteConverter object.
-
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-`ValueError`
-
-
-Invalid signature keys.
-
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/TargetSpec.md b/site/en/lite/api_docs/python/tf/lite/TargetSpec.md
deleted file mode 100644
index e7798cedb2..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/TargetSpec.md
+++ /dev/null
@@ -1,113 +0,0 @@
-page_type: reference
-description: Specification of target device used to optimize the model.
-
-
-
-
-
-
-
-`supported_ops`
-
-
-Experimental flag, subject to change. Set of tf.lite.OpsSet
-options, where each option represents a set of operators supported by the
-target device. (default {tf.lite.OpsSet.TFLITE_BUILTINS})
-
-
-
-`supported_types`
-
-
-Set of tf.dtypes.DType data types supported on the target
-device. If initialized, optimization might be driven by the smallest type
-in this set. (default set())
-
-
-
-`experimental_select_user_tf_ops`
-
-
-Experimental flag, subject to change. Set
-of user's TensorFlow operators' names that are required in the TensorFlow
-Lite runtime. These ops will be exported as select TensorFlow ops in the
-model (in conjunction with the tf.lite.OpsSet.SELECT_TF_OPS flag). This is
-an advanced feature that should only be used if the client is using TF ops
-that may not be linked in by default with the TF ops that are provided
-when using the SELECT_TF_OPS path. The client is responsible for linking
-these ops into the target runtime.
-
-
-
-`experimental_supported_backends`
-
-
-Experimental flag, subject to change.
-Set containing names of supported backends. Currently only "GPU" is
-supported, more options will be available later.
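-
-For illustration, a sketch of setting a target spec on a converter (the
-particular ops/types chosen here are placeholders):
-
-```
-converter.target_spec.supported_ops = [
-    tf.lite.OpsSet.TFLITE_BUILTINS,
-    tf.lite.OpsSet.SELECT_TF_OPS,
-]
-converter.target_spec.supported_types = [tf.float16]
-```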
-
-
-
-
-Public API for tf.lite.experimental namespace.
-
-
-
-## Modules
-
-[`authoring`](../../tf/lite/experimental/authoring) module: Public API for tf.lite.experimental.authoring namespace.
-
-## Classes
-
-[`class Analyzer`](../../tf/lite/experimental/Analyzer): Provides a collection of TFLite model analyzer tools.
-
-[`class OpResolverType`](../../tf/lite/experimental/OpResolverType): Different types of op resolvers for Tensorflow Lite.
-
-[`class QuantizationDebugOptions`](../../tf/lite/experimental/QuantizationDebugOptions): Debug options to set up a given QuantizationDebugger.
-
-[`class QuantizationDebugger`](../../tf/lite/experimental/QuantizationDebugger): Debugger for Quantized TensorFlow Lite debug mode models.
-
-## Functions
-
-[`load_delegate(...)`](../../tf/lite/experimental/load_delegate): Returns loaded Delegate object.
diff --git a/site/en/lite/api_docs/python/tf/lite/experimental/Analyzer.md b/site/en/lite/api_docs/python/tf/lite/experimental/Analyzer.md
deleted file mode 100644
index 328ad057cd..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/experimental/Analyzer.md
+++ /dev/null
@@ -1,149 +0,0 @@
-page_type: reference
-description: Provides a collection of TFLite model analyzer tools.
-
-
-
-
-
-
-
-
-Analyzes the given TFLite model by dumping its model structure.
-
-This tool provides a way to understand users' TFLite flatbuffer model by
-dumping internal graph structure. It also provides additional features
-like checking GPU delegate compatibility.
-
-Warning: Experimental interface, subject to change.
- The output format is not guaranteed to stay stable, so don't
- write scripts that depend on it.
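-
-A minimal usage sketch (the model path is a placeholder):
-
-```
-tf.lite.experimental.Analyzer.analyze(model_path='model.tflite',
-                                      gpu_compatibility=True)
-```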
-
-
-
-
-
Args
-
-
-
-`model_path`
-
-
-TFLite flatbuffer model path.
-
-
-
-`model_content`
-
-
-TFLite flatbuffer model object.
-
-
-
-`gpu_compatibility`
-
-
-Whether to check GPU delegate compatibility.
-
-
-
-`**kwargs`
-
-
-Experimental keyword arguments to analyze API.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-Prints the analyzed report to the console.
-
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/experimental/OpResolverType.md b/site/en/lite/api_docs/python/tf/lite/experimental/OpResolverType.md
deleted file mode 100644
index 03f8bae159..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/experimental/OpResolverType.md
+++ /dev/null
@@ -1,85 +0,0 @@
-page_type: reference
-description: Different types of op resolvers for Tensorflow Lite.
-
-
-
-
-
-
-
-
-
-
-Different types of op resolvers for Tensorflow Lite.
-
-
-
-* `AUTO`: Indicates the op resolver that is chosen by default in TfLite
- Python, which is the "BUILTIN" as described below.
-* `BUILTIN`: Indicates the op resolver for built-in ops with optimized kernel
- implementation.
-* `BUILTIN_REF`: Indicates the op resolver for built-in ops with reference
- kernel implementation. It's generally used for testing and debugging.
-* `BUILTIN_WITHOUT_DEFAULT_DELEGATES`: Indicates the op resolver for
- built-in ops with optimized kernel implementation, but it will disable
- the application of default TfLite delegates (like the XNNPACK delegate) to
- the model graph. Generally this should not be used unless there are issues
- with the default configuration.
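-
-As a brief sketch, the resolver type is passed to the Interpreter constructor
-(the model path is a placeholder):
-
-```
-interpreter = tf.lite.Interpreter(
-    model_path='model.tflite',
-    experimental_op_resolver_type=tf.lite.experimental.OpResolverType.BUILTIN_REF)
-```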
-
-
-
-
-
-
-
Class Variables
-
-
-
-AUTO
-
-
-``
-
-
-
-BUILTIN
-
-
-``
-
-
-
-BUILTIN_REF
-
-
-``
-
-
-
-BUILTIN_WITHOUT_DEFAULT_DELEGATES
-
-
-``
-
-
-
diff --git a/site/en/lite/api_docs/python/tf/lite/experimental/QuantizationDebugOptions.md b/site/en/lite/api_docs/python/tf/lite/experimental/QuantizationDebugOptions.md
deleted file mode 100644
index 5743c638c1..0000000000
--- a/site/en/lite/api_docs/python/tf/lite/experimental/QuantizationDebugOptions.md
+++ /dev/null
@@ -1,149 +0,0 @@
-page_type: reference
-description: Debug options to set up a given QuantizationDebugger.
-
-
-
-
-
-
-
-a dict to specify layer debug functions
-{function_name_str: function} where the function accepts the result of the
- NumericVerify op, i.e. the value difference between the float and
- dequantized op results, and returns a single scalar value.
-
-
-
-`model_debug_metrics`
-
-
-a dict to specify model debug functions
-{function_name_str: function} where the function accepts outputs from
- two models, and returns a single scalar value for a metric (e.g.
- accuracy, IoU).
-
-
-
-`layer_direct_compare_metrics`
-
-
-a dict to specify layer debug functions
-{function_name_str: function}. The signature is different from that of
- `layer_debug_metrics`, and this one gets passed (original float value,
- original quantized value, scale, zero point). The function's
- implementation is responsible for correctly dequantize the quantized
- value to compare. Use this one when comparing diff is not enough.
- (Note) quantized value is passed as int8, so cast to int32 is needed.
-
-
-
-`denylisted_ops`
-
-
-a list of op names that are expected to be excluded from
-quantization.
-
-
-
-`denylisted_nodes`
-
-
-a list of op's output tensor names to be removed from
-quantization.
-
-
-
-`fully_quantize`
-
-
-Bool indicating whether to fully quantize the model.
-Besides the model body, the inputs/outputs will be quantized as well.
-Corresponds to mlir_quantize's fully_quantize parameter.
-
-
-
-This can run the TensorFlow Lite converted models equipped with debug ops and
-collect debug information. This debugger calculates statistics from
-user-defined post-processing functions as well as default ones.
-
-
-
-
-
Args
-
-
-
-`quant_debug_model_path`
-
-
-Path to the quantized debug TFLite model file.
-
-
-
-`quant_debug_model_content`
-
-
-Content of the quantized debug TFLite model.
-
-
-
-`float_model_path`
-
-
-Path to float TFLite model file.
-
-
-
-`float_model_content`
-
-
-Content of the float TFLite model.
-
-
-
-`debug_dataset`
-
-
-a factory function that returns a dataset generator, which is
-used to generate input samples (list of np.ndarray) for the model. The
-generated elements must have the same types and shapes as the inputs to the
-model.
-
-
-
-`debug_options`
-
-
-Debug options to debug the given model.
-
-
-
-`converter`
-
-
-Optional, use converter instead of quantized model.
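-
-A minimal sketch of constructing and running the debugger (`quant_model`,
-`float_model` and `input_gen` are placeholders defined elsewhere):
-
-```
-debugger = tf.lite.experimental.QuantizationDebugger(
-    quant_debug_model_content=quant_model,
-    float_model_content=float_model,
-    debug_dataset=input_gen)
-debugger.run()
-with open('/tmp/debugger_results.csv', 'w') as f:
-  debugger.layer_statistics_dump(f)  # Dump per-layer statistics as CSV.
-```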
-
-
-Returns an instrumented quantized model.
-
-Convert the quantized model with the initialized converter and
-return the model bytes. The model will be instrumented with numeric
-verification operations and should only be used for debugging.
-
-
-
-
-Returns a non-instrumented quantized model.
-
-Convert the quantized model with the initialized converter and
-return the bytes of the non-debug model. The model will not be instrumented with
-numeric verification operations.
-
-
-
-
-
-
-
-
-
-#### Example usage:
-
-
-
-```
-import tensorflow as tf
-
-try:
- delegate = tf.lite.experimental.load_delegate('delegate.so')
-except ValueError:
-  delegate = None  # Fallback to CPU
-
-if delegate:
- interpreter = tf.lite.Interpreter(
- model_path='model.tflite',
- experimental_delegates=[delegate])
-else:
- interpreter = tf.lite.Interpreter(model_path='model.tflite')
-```
-
-This is typically used to leverage EdgeTPU for running TensorFlow Lite models.
-For more information see: https://coral.ai/docs/edgetpu/tflite-python/
-
-
-
-
-
Args
-
-
-
-`library`
-
-
-Name of shared library containing the
-[TfLiteDelegate](https://www.tensorflow.org/lite/performance/delegates).
-
-
-
-`options`
-
-
-Dictionary of options that are required to load the delegate. All
-keys and values in the dictionary should be convertible to str. Consult
-the documentation of the specific delegate for required and legal options.
-(default None)
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-Delegate object.
-
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-Delegate failed to load.
-
-
-
-`RuntimeError`
-
-
-If delegate loading is used on an unsupported platform.
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker.md b/site/en/lite/api_docs/python/tflite_model_maker.md
deleted file mode 100644
index 38c8da298a..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker.md
+++ /dev/null
@@ -1,94 +0,0 @@
-page_type: reference
-description: Public APIs for TFLite Model Maker, a transfer learning library to train custom TFLite models.
-
-
-
-
-
-
-
-
-
-
-Public APIs for TFLite Model Maker, a transfer learning library to train custom TFLite models.
-
-
-You can install the package with
-
-```bash
-pip install tflite-model-maker
-```
-
-Typical usage of Model Maker is to create a model in a few lines of code, e.g.:
-
-```python
-# Load input data specific to an on-device ML app.
-data = DataLoader.from_folder('flower_photos/')
-train_data, test_data = data.split(0.9)
-
-# Customize the TensorFlow model.
-model = image_classifier.create(train_data)
-
-# Evaluate the model.
-accuracy = model.evaluate(test_data)
-
-# Export to Tensorflow Lite model and label file in `export_dir`.
-model.export(export_dir='/tmp/')
-```
-
-For more details, please refer to our guide:
-https://www.tensorflow.org/lite/guide/model_maker
-
-## Modules
-
-[`audio_classifier`](./tflite_model_maker/audio_classifier) module: APIs to train an audio classification model.
-
-[`config`](./tflite_model_maker/config) module: APIs for the config of TFLite Model Maker.
-
-[`image_classifier`](./tflite_model_maker/image_classifier) module: APIs to train an image classification model.
-
-[`model_spec`](./tflite_model_maker/model_spec) module: APIs for the model spec of TFLite Model Maker.
-
-[`object_detector`](./tflite_model_maker/object_detector) module: APIs to train an object detection model.
-
-[`question_answer`](./tflite_model_maker/question_answer) module: APIs to train a model that can answer questions based on a predefined text.
-
-[`recommendation`](./tflite_model_maker/recommendation) module: APIs to train an on-device recommendation model.
-
-[`searcher`](./tflite_model_maker/searcher) module: APIs to create the searcher model.
-
-[`text_classifier`](./tflite_model_maker/text_classifier) module: APIs to train a text classification model.
-
-
-
-
-
-An instance of the audio_dataloader.DataLoader class.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`validation_data`
-
-
-Validation DataLoader. If None, skips validation process.
-
-
-
-`batch_size`
-
-
-Number of samples per training step. If `use_hub_library` is
-False, the base learning rate assumes a train batch size of 256 and is
-scaled linearly with the batch size.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`model_dir`
-
-
-The location of the model checkpoint files.
-
-
-
-`do_train`
-
-
-Whether to run training.
-
-
-
-`train_whole_model`
-
-
-Boolean. By default, only the classification head is
-trained. When True, the base model is also trained.
-
-
-Converts the retrained model to tflite format and saves it.
-
-This method overrides the default `CustomModel._export_tflite` method, and
-includes the pre-processing in the exported TFLite model, since the support
-library can't handle audio tasks yet.
-
-
-
-
-
Args
-
-
-
-`model`
-
-
-An instance of the keras classification model to be exported.
-
-
-
-`tflite_filepath`
-
-
-File path to save tflite model.
-
-
-
-`with_metadata`
-
-
-Whether the output tflite model contains metadata.
-
-
-
-`export_metadata_json_file`
-
-
-Whether to export metadata in json file. If
-True, export the metadata in the same directory as the tflite model. Used
-only if `with_metadata` is True.
-
-
-
-`index_to_label`
-
-
-A list that maps from index to label class name.
-
-A tf.data.Dataset object that contains a potentially large set of
-elements, where each element is a pair of (input_data, target). The
-`input_data` means the raw input data, like an image, a text etc., while
-the `target` means some ground truth of the raw input data, such as the
-classification label of the image etc.
-
-
-
-`size`
-
-
-The size of the dataset. tf.data.Dataset doesn't support a function
-to get the length directly since it's lazy-loaded and may be infinite.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`num_classes`
-
-
-
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset is computed during
-preprocessing, and this function can return an int representing the size of the dataset.
-
-
-Load ESC50 style audio samples.
-
-ESC50 file structure is explained in https://github.com/karolpiczak/ESC-50
-Audio files should be put in `${data_path}/audio`
-Metadata file should be put in `${data_path}/meta/esc50.csv`
-
-Note that instead of relying on the `target` field in the CSV, a new
-`index_to_label` mapping is created based on the alphabet order of the
-available categories.
-
-
-
-
-
Args
-
-
-
-`spec`
-
-
-An instance of audio_spec.YAMNet
-
-
-
-`data_path`
-
-
-A string, location of the ESC50 dataset. It should contain at
-least the `audio` and `meta` folders described above.
-
-
-
-`folds`
-
-
-An integer list of selected folds. If empty, all folds will be
-selected.
-
-
-
-`categories`
-
-
-A string list of selected categories. If empty, all categories
-will be selected.
-
-
-
-`shuffle`
-
-
-boolean; if True, randomly shuffle the data.
-
-
-
-`cache`
-
-
-str or boolean. When set to True, intermediate results will be
-cached in RAM. When set to a file path string, intermediate results
-will be cached in this file. Please note that, once a file-based cache is
-created, changes to the input data will have no effects until the cache
-file is removed or the filename is changed. More details can be found at
-https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-An instance of AudioDataLoader containing audio samples and labels.
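-
-A hypothetical sketch, assuming this loader is `DataLoader.from_esc50` (the
-data path, folds and categories are placeholders):
-
-```
-spec = audio_classifier.YamNetSpec()
-data = audio_classifier.DataLoader.from_esc50(
-    spec, 'ESC-50-master/', folds=[1, 2, 3], categories=['dog', 'rain'])
-```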
-
-
-Load audio files from a data_path.
-
-- The root `data_path` folder contains a number of folders. The name for
-each folder is the name of the audio class.
-
-- Within each folder, there are a number of .wav files. Each .wav file
-corresponds to an example. Each .wav file is mono (single-channel) and has
-the typical 16 bit pulse-code modulation (PCM) encoding.
-
-- .wav files will be resampled to `spec.target_sample_rate` then fed into
-`spec.preprocess_ds` for split and other operations. Normally, long wav files
-will be framed into multiple clips, and wav files shorter than a certain
-threshold will be ignored.
-
-
-
-
-
Args
-
-
-
-`spec`
-
-
-instance of `audio_spec.BaseSpec`.
-
-
-
-`data_path`
-
-
-string, location to the audio files.
-
-
-
-`categories`
-
-
-A string list of selected categories. If empty, all categories
-will be selected.
-
-
-
-`shuffle`
-
-
-boolean; if True, randomly shuffle the data.
-
-
-
-`cache`
-
-
-str or boolean. When set to True, intermediate results will be
-cached in RAM. When set to a file path string, intermediate results
-will be cached in this file. Please note that, once a file-based cache is
-created, changes to the input data will have no effects until the cache
-file is removed or the filename is changed. More details can be found at
-https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-`AudioDataLoader` containing audio spectrogram (or any data type generated
-by `spec.preprocess_ds`) and labels.
-
-
-Generates a sharded and batched tf.data.Dataset for training/evaluation.
-
-
-
-
-
-
Args
-
-
-
-`batch_size`
-
-
-An integer; the returned dataset will be batched by this size.
-
-
-
-`is_training`
-
-
-A boolean, when True, the returned dataset will be optionally
-shuffled. Data augmentation, if it exists, will also be applied to the
-returned dataset.
-
-
-
-`shuffle`
-
-
-A boolean, when True, the returned dataset will be shuffled to
-create randomness during model training. Only applies when `is_training`
-is set to True.
-
-
-
-`input_pipeline_context`
-
-
-An InputContext instance, used to shard the dataset
-among multiple workers when a distribution strategy is used.
-
-
-
-`preprocess`
-
-
-Not in use.
-
-
-
-`drop_remainder`
-
-
-boolean, whether to drop the final batch if it has fewer samples than the batch size.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-A TF dataset ready to be consumed by Keras model.
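-
-An illustrative sketch (`train_data` is a placeholder DataLoader and the batch
-size is arbitrary):
-
-```
-train_ds = train_data.gen_dataset(batch_size=32, is_training=True, shuffle=True)
-for batch, labels in train_ds.take(1):
-  print(batch.shape, labels.shape)
-```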
-
-
-Returns the number of audio files in the DataLoader.
-
-Note that one audio file could be framed (mostly via a sliding window of
-fixed size) into none or multiple audio clips during training and
-evaluation.
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/audio_classifier/YamNetSpec.md b/site/en/lite/api_docs/python/tflite_model_maker/audio_classifier/YamNetSpec.md
deleted file mode 100644
index 61e4e699b4..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/audio_classifier/YamNetSpec.md
+++ /dev/null
@@ -1,317 +0,0 @@
-page_type: reference
-description: Model good at detecting environmental sounds, using YAMNet embedding.
-
-
-
-
-
-
-
-The location to save the model checkpoint files.
-
-
-
-`strategy`
-
-
-An instance of TF distribute strategy. If none, it will use the
-default strategy (either SingleDeviceStrategy or the current scoped
-strategy).
-
-
-
-`yamnet_model_handle`
-
-
-Path of the TFHub model for retraining.
-
-
-
-`frame_length`
-
-
-The number of samples in each audio frame. If the audio file
-is shorter than `frame_length`, then the audio file will be ignored.
-
-
-
-`frame_step`
-
-
-The number of samples between two audio frames. This value
-should be smaller than `frame_length`, otherwise some samples will be
-ignored.
-
-
-
-`keep_yamnet_and_custom_heads`
-
-
-Boolean, decides if the final TFLite model
-contains both YAMNet and custom trained classification heads. When set
-to False, only the trained custom head will be preserved.
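-
-A minimal sketch of constructing the spec (the frame values are placeholders,
-not recommended defaults):
-
-```
-spec = audio_classifier.YamNetSpec(
-    keep_yamnet_and_custom_heads=True,
-    frame_length=15600,  # samples per frame; placeholder
-    frame_step=7800)     # samples between frames; placeholder
-```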
-
-
-Converts the retrained model to tflite format and saves it.
-
-This method overrides the default `CustomModel._export_tflite` method, and
-includes the spectrogram extraction in the model.
-
-The exported model has input shape (1, number of wav samples).
-
-
-
-
-
Args
-
-
-
-`model`
-
-
-An instance of the keras classification model to be exported.
-
-
-
-`tflite_filepath`
-
-
-File path to save tflite model.
-
-
-
-`with_metadata`
-
-
-Whether the output tflite model contains metadata.
-
-
-
-`export_metadata_json_file`
-
-
-Whether to export metadata in json file. If
-True, export the metadata in the same directory as tflite model. Used
-only if `with_metadata` is True.
-
-
-
-`index_to_label`
-
-
-A list that maps from index to label class name.
-
-An instance of the audio_dataloader.DataLoader class.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`validation_data`
-
-
-Validation DataLoader. If None, skips validation process.
-
-
-
-`batch_size`
-
-
-Number of samples per training step. If `use_hub_library` is
-False, the base learning rate assumes a train batch size of 256 and is
-scaled linearly with the batch size.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`model_dir`
-
-
-The location of the model checkpoint files.
-
-
-
-`do_train`
-
-
-Whether to run training.
-
-
-
-`train_whole_model`
-
-
-Boolean. By default, only the classification head is
-trained. When True, the base model is also trained.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-An instance based on AudioClassifier.
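-
-As a brief end-to-end sketch (the data path is a placeholder):
-
-```
-spec = audio_classifier.YamNetSpec()
-data = audio_classifier.DataLoader.from_folder(spec, 'audio_data/')
-train_data, test_data = data.split(0.8)
-model = audio_classifier.create(train_data, spec, epochs=10)
-model.evaluate(test_data)
-```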
-
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/config.md b/site/en/lite/api_docs/python/tflite_model_maker/config.md
deleted file mode 100644
index a6edc8226f..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/config.md
+++ /dev/null
@@ -1,37 +0,0 @@
-page_type: reference
-description: APIs for the config of TFLite Model Maker.
-
-
-
-
-
-
-
-A list of optimizations to apply when converting the model.
-If not set, use `[Optimize.DEFAULT]` by default.
-
-
-
-`representative_data`
-
-
-A DataLoader holding representative data for
-post-training quantization.
-
-
-
-`quantization_steps`
-
-
-Number of post-training quantization calibration steps
-to run.
-
-
-
-`inference_input_type`
-
-
-Target data type of real-number input arrays. Allows
-for a different type for input arrays. Defaults to None. If set, must
-be `{tf.float32, tf.uint8, tf.int8}`.
-
-
-
-`inference_output_type`
-
-
-Target data type of real-number output arrays.
-Allows for a different type for output arrays. Defaults to None. If set,
-must be `{tf.float32, tf.uint8, tf.int8}`.
-
-
-
-`supported_ops`
-
-
-Set of OpsSet options supported by the device. Used to set
-converter.target_spec.supported_ops.
-
-
-
-`supported_types`
-
-
-List of types for constant values on the target device.
-Supported values are types exported by lite.constants. Frequently, an
-optimization choice is driven by the most compact (i.e. smallest) type
-in this list (default [constants.FLOAT]).
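-
-For illustration, a sketch of applying such a config when exporting a Model
-Maker model (`representative_data` and `model` are placeholders from earlier
-steps, and the `for_int8` factory is assumed):
-
-```
-from tflite_model_maker.config import QuantizationConfig
-
-config = QuantizationConfig.for_int8(representative_data)
-model.export(export_dir='/tmp/', quantization_config=config)
-```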
-
-
-
-
-APIs to train an image classification model.
-
-
-
-#### Task guide:
-
-
-https://www.tensorflow.org/lite/tutorials/model_maker_image_classification
-
-## Classes
-
-[`class DataLoader`](../tflite_model_maker/image_classifier/DataLoader): DataLoader for image classifier.
-
-[`class ImageClassifier`](../tflite_model_maker/image_classifier/ImageClassifier): ImageClassifier class for inference and exporting to tflite.
-
-[`class ModelSpec`](../tflite_model_maker/image_classifier/ModelSpec): A specification of image model.
-
-## Functions
-
-[`EfficientNetLite0Spec(...)`](../tflite_model_maker/image_classifier/EfficientNetLite0Spec): Creates EfficientNet-Lite0 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`EfficientNetLite1Spec(...)`](../tflite_model_maker/image_classifier/EfficientNetLite1Spec): Creates EfficientNet-Lite1 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`EfficientNetLite2Spec(...)`](../tflite_model_maker/image_classifier/EfficientNetLite2Spec): Creates EfficientNet-Lite2 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`EfficientNetLite3Spec(...)`](../tflite_model_maker/image_classifier/EfficientNetLite3Spec): Creates EfficientNet-Lite3 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`EfficientNetLite4Spec(...)`](../tflite_model_maker/image_classifier/EfficientNetLite4Spec): Creates EfficientNet-Lite4 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`MobileNetV2Spec(...)`](../tflite_model_maker/image_classifier/MobileNetV2Spec): Creates MobileNet v2 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`Resnet50Spec(...)`](../tflite_model_maker/image_classifier/Resnet50Spec): Creates ResNet 50 model spec. See also: tflite_model_maker.image_classifier.ModelSpec.
-
-[`create(...)`](../tflite_model_maker/image_classifier/create): Loads data and retrains the model based on data for image classification.
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/DataLoader.md b/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/DataLoader.md
deleted file mode 100644
index 9d0fdf9a72..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/DataLoader.md
+++ /dev/null
@@ -1,338 +0,0 @@
-page_type: reference
-description: DataLoader for image classifier.
-
-
-
-
-
-
-
-A tf.data.Dataset object that contains a potentially large set of
-elements, where each element is a pair of (input_data, target). The
-`input_data` means the raw input data, like an image, a text etc., while
-the `target` means some ground truth of the raw input data, such as the
-classification label of the image etc.
-
-
-
-`size`
-
-
-The size of the dataset. tf.data.Dataset doesn't support a function
-to get the length directly since it's lazy-loaded and may be infinite.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`num_classes`
-
-
-
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset is computed during
-preprocessing, and this function can return an int representing the size of the dataset.
-
-A list that maps from index to label class name.
-
-
-
-`shuffle`
-
-
-Whether the data should be shuffled.
-
-
-
-`hparams`
-
-
-A namedtuple of hyperparameters. This function expects
-.dropout_rate: The fraction of the input units to drop, used in dropout
- layer.
-.do_fine_tuning: If true, the Hub module is trained together with the
- classification layer on top.
-
-
-
-`use_augmentation`
-
-
-Use data augmentation for preprocessing.
-
-
-
-`representative_data`
-
-
- Representative dataset for full integer
-quantization. Used when converting the keras model to the TFLite model
-with full integer quantization.
-
-
-Loads data and retrains the model based on data for image classification.
-
-
-
-
-
-
Args
-
-
-
-`train_data`
-
-
-Training data.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`validation_data`
-
-
-Validation data. If None, skips validation process.
-
-
-
-`batch_size`
-
-
-Number of samples per training step. If `use_hub_library` is
-False, the base learning rate assumes a train batch size of 256 and is
-scaled linearly with the batch size.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-`train_whole_model`
-
-
-If true, the Hub module is trained together with the
-classification layer on top. Otherwise, only train the top
-classification layer.
-
-
-
-`dropout_rate`
-
-
-The rate for dropout.
-
-
-
-`learning_rate`
-
-
-Base learning rate when train batch size is 256. Linear to
-the batch size.
-
-
-
-`momentum`
-
-
-a Python float forwarded to the optimizer. Only used when
-`use_hub_library` is True.
-
-
-
-`shuffle`
-
-
-Whether the data should be shuffled.
-
-
-
-`use_augmentation`
-
-
-Use data augmentation for preprocessing.
-
-
-
-`use_hub_library`
-
-
-Use `make_image_classifier_lib` from tensorflow hub to
-retrain the model.
-
-
-
-`warmup_steps`
-
-
-Number of warmup steps for warmup schedule on learning rate.
-If None, the default warmup_steps is used, which is the total number of
-training steps in two epochs. Only used when `use_hub_library` is False.
-
-
-
-`model_dir`
-
-
-The location of the model checkpoint files. Only used when
-`use_hub_library` is False.
-
-Validation data. If None, skips validation process.
-
-
-
-`hparams`
-
-
-An instance of hub_lib.HParams or
-train_image_classifier_lib.HParams. A namedtuple of hyperparameters.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If 'steps_per_epoch' is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-The tf.keras.callbacks.History object returned by tf.keras.Model.fit*().
-
-list of int, input image shape. Default: [224, 224].
-
-
-
-`name`
-
-
-str, model spec name.
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/create.md b/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/create.md
deleted file mode 100644
index b4440fc9f5..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/image_classifier/create.md
+++ /dev/null
@@ -1,223 +0,0 @@
-page_type: reference
-description: Loads data and retrains the model based on data for image classification.
-
-
-
-
-
-
-
-Validation data. If None, skips validation process.
-
-
-
-`batch_size`
-
-
-Number of samples per training step. If `use_hub_library` is
-False, the base learning rate assumes a train batch size of 256 and is
-scaled linearly with the batch size.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-`train_whole_model`
-
-
-If true, the Hub module is trained together with the
-classification layer on top. Otherwise, only train the top
-classification layer.
-
-
-
-`dropout_rate`
-
-
-The rate for dropout.
-
-
-
-`learning_rate`
-
-
-Base learning rate when train batch size is 256. Linear to
-the batch size.
-
-
-
-`momentum`
-
-
-a Python float forwarded to the optimizer. Only used when
-`use_hub_library` is True.
-
-
-
-`shuffle`
-
-
-Whether the data should be shuffled.
-
-
-
-`use_augmentation`
-
-
-Use data augmentation for preprocessing.
-
-
-
-`use_hub_library`
-
-
-Use `make_image_classifier_lib` from tensorflow hub to
-retrain the model.
-
-
-
-`warmup_steps`
-
-
-Number of warmup steps for warmup schedule on learning rate.
-If None, the default warmup_steps is used which is the total training
-steps in two epochs. Only used when `use_hub_library` is False.
-
-
-
-`model_dir`
-
-
-The location of the model checkpoint files. Only used when
-`use_hub_library` is False.
-
-
-
-`do_train`
-
-
-Whether to run training.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-An instance based on ImageClassifier.
-
-
-
-
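-A minimal usage sketch (hedged; the folder path is hypothetical, and the
-workflow follows the Model Maker image classification task):
-
-```python
-from tflite_model_maker import image_classifier
-from tflite_model_maker.image_classifier import DataLoader
-
-# Load images arranged as one sub-folder per class (hypothetical path).
-data = DataLoader.from_folder('flower_photos/')
-train_data, test_data = data.split(0.9)
-
-# Retrain with the default model spec, evaluate, then export to TFLite.
-model = image_classifier.create(train_data, epochs=5)
-loss, accuracy = model.evaluate(test_data)
-model.export(export_dir='.')
-```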
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/model_spec.md b/site/en/lite/api_docs/python/tflite_model_maker/model_spec.md
deleted file mode 100644
index c1a46677d2..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/model_spec.md
+++ /dev/null
@@ -1,103 +0,0 @@
-page_type: reference
-description: APIs for the model spec of TFLite Model Maker.
-
-
-
-
-
-
-
-
-
-
-APIs for the model spec of TFLite Model Maker.
-
-
-
-## Functions
-
-[`get(...)`](../tflite_model_maker/model_spec/get): Gets model spec by name or instance, and inits with args and kwargs.
-
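-For instance, a spec can be fetched by its canonical name (a hedged
-sketch; 'efficientnet_lite0' is one of the documented names):
-
-```python
-from tflite_model_maker import model_spec
-
-# Extra args/kwargs passed to get() are forwarded to the spec constructor.
-spec = model_spec.get('efficientnet_lite0')
-```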
-
-
-
-
-
-
-
-APIs to train an object detection model.
-
-
-
-## Classes
-
-[`class DataLoader`](../tflite_model_maker/object_detector/DataLoader): DataLoader for object detector.
-
-[`class EfficientDetSpec`](../tflite_model_maker/object_detector/EfficientDetSpec): A specification of the EfficientDet model.
-
-[`class ObjectDetector`](../tflite_model_maker/object_detector/ObjectDetector): ObjectDetector class for inference and exporting to tflite.
-
-## Functions
-
-[`EfficientDetLite0Spec(...)`](../tflite_model_maker/object_detector/EfficientDetLite0Spec): Creates EfficientDet-Lite0 model spec. See also: tflite_model_maker.object_detector.EfficientDetSpec.
-
-[`EfficientDetLite1Spec(...)`](../tflite_model_maker/object_detector/EfficientDetLite1Spec): Creates EfficientDet-Lite1 model spec. See also: tflite_model_maker.object_detector.EfficientDetSpec.
-
-[`EfficientDetLite2Spec(...)`](../tflite_model_maker/object_detector/EfficientDetLite2Spec): Creates EfficientDet-Lite2 model spec. See also: tflite_model_maker.object_detector.EfficientDetSpec.
-
-[`EfficientDetLite3Spec(...)`](../tflite_model_maker/object_detector/EfficientDetLite3Spec): Creates EfficientDet-Lite3 model spec. See also: tflite_model_maker.object_detector.EfficientDetSpec.
-
-[`EfficientDetLite4Spec(...)`](../tflite_model_maker/object_detector/EfficientDetLite4Spec): Creates EfficientDet-Lite4 model spec. See also: tflite_model_maker.object_detector.EfficientDetSpec.
-
-[`create(...)`](../tflite_model_maker/object_detector/create): Loads data and trains the model for object detection.
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/object_detector/DataLoader.md b/site/en/lite/api_docs/python/tflite_model_maker/object_detector/DataLoader.md
deleted file mode 100644
index 0c20e2c3f7..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/object_detector/DataLoader.md
+++ /dev/null
@@ -1,527 +0,0 @@
-page_type: reference
-description: DataLoader for object detector.
-
-
-
-
-
-
-
-Glob for tfrecord files. e.g. "/tmp/coco*.tfrecord".
-
-
-
-`size`
-
-
-The size of the dataset.
-
-
-
-`label_map`
-
-
-Maps label integer ids to string label names. 0 is the reserved key for
-`background` and doesn't need to be included in label_map. Label names
-can't be duplicated. Supported formats are:
-
-1. Dict, mapping label integer ids to string label names, such as
-   {1: 'person', 2: 'notperson'}.
-2. List, a list of label names such as ['person', 'notperson'], which is
-   the same as setting label_map={1: 'person', 2: 'notperson'}.
-3. String, name of a certain dataset. Accepted values are: 'coco', 'voc'
-   and 'waymo'.
-4. String, yaml filename that stores label_map.
-
-
-
-`annotations_json_file`
-
-
-JSON with COCO data format containing golden
-bounding boxes. Used for validation. If None, use the ground truth from
-the dataloader. Refer to
-https://towardsdatascience.com/coco-data-format-for-object-detection-a4c5eaf518c5
- for the description of COCO data format.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset will be preprocessed,
-and this function can return an int representing the size of the dataset.
-
-Path to directory that store raw images. If None, the image
-path in the csv file is the path to Google Cloud Storage or the absolute
-path in the local machine.
-
-
-
-`delimiter`
-
-
-Character used to separate fields.
-
-
-
-`quotechar`
-
-
-Character used to quote fields containing special characters.
-
-
-
-`num_shards`
-
-
-Number of shards for output file.
-
-
-
-`max_num_images`
-
-
-Max number of images to process.
-
-
-
-`cache_dir`
-
-
-The cache directory to save TFRecord, metadata and json file.
-When cache_dir is None, a temporary folder will be created and will not
-be removed automatically after training, so that it can be used later.
-
-
-
-`cache_prefix_filename`
-
-
-The cache prefix filename. If None, will
-automatically generate it based on `filename`.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-train_data, validation_data, test_data which are ObjectDetectorDataLoader
-objects. Can be None if without such data.
-
-Maps label integer ids to string label names. 0 is the reserved key for
-`background`. Label names can't be duplicated. Supported formats are:
-
-1. Dict, mapping label integer ids to string label names, e.g.
-   {1: 'person', 2: 'notperson'}.
-2. List, a list of label names, e.g. ['person', 'notperson'], which is
-   the same as setting label_map={1: 'person', 2: 'notperson'}.
-3. String, name of a certain dataset. Accepted values are: 'coco', 'voc'
-   and 'waymo'.
-4. String, yaml filename that stores label_map.
-
-
-
-`annotation_filenames`
-
-
-Collection of annotation filenames (strings) to be
-loaded. For instance, if there are 3 annotation files [0.xml, 1.xml,
-2.xml] in `annotations_dir`, setting annotation_filenames=['0', '1']
-makes this method only load [0.xml, 1.xml].
-
-
-
-`ignore_difficult_instances`
-
-
-Whether to ignore difficult instances.
-`difficult` can be set inside `object` item in the annotation xml file.
-
-
-
-`num_shards`
-
-
-Number of shards for output file.
-
-
-
-`max_num_images`
-
-
-Max number of images to process.
-
-
-
-`cache_dir`
-
-
-The cache directory to save TFRecord, metadata and json file.
-When cache_dir is not set, a temporary folder will be created and will
-not be removed automatically after training, so that it can be used
-later.
-
-
-
-`cache_prefix_filename`
-
-
-The cache prefix filename. If not set, will
-automatically generate it based on `image_dir`, `annotations_dir` and
-`annotation_filenames`.
-
-Hyperparameters used to overwrite default configuration. Can be
-
-1) Dict, containing parameter names and values; 2) String, comma-separated
-k=v pairs of hyperparameters; 3) String, yaml filename of a module
-containing attributes to use as hyperparameters.
-
-
-
-`model_dir`
-
-
-The location to save the model checkpoint files.
-
-
-
-`epochs`
-
-
-Default training epochs.
-
-
-
-`batch_size`
-
-
-Training & Evaluation batch size.
-
-
-
-`steps_per_execution`
-
-
-Number of steps per training execution.
-
-
-
-`moving_average_decay`
-
-
-Float. The decay to use for maintaining moving
-averages of the trained parameters.
-
-
-
-`var_freeze_expr`
-
-
-Expression to freeze variables.
-
-
-
-`tflite_max_detections`
-
-
-The max number of output detections in the TFLite
-model.
-
-
-
-`strategy`
-
-
- A string specifying which distribution strategy to use.
-Accepted values are 'tpu', 'gpus', None. 'tpu' means to use TPUStrategy.
-'gpus' means to use MirroredStrategy for multi-gpus. If None, use TF
-default with OneDeviceStrategy.
-
-
-
-`tpu`
-
-
-The Cloud TPU to use for training. This should be either the name
-used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470
- url.
-
-
-
-`gcp_project`
-
-
-Project name for the Cloud TPU-enabled project. If not
-specified, we will attempt to automatically detect the GCE project from
-metadata.
-
-
-
-`tpu_zone`
-
-
-GCE zone where the Cloud TPU is located. If not specified, we
-will attempt to automatically detect it from metadata.
-
-
-
-`use_xla`
-
-
-Use XLA even if strategy is not tpu. If strategy is tpu, always
-use XLA, and this flag has no effect.
-
-
-
-`profile`
-
-
-Enable profile mode.
-
-
-
-`debug`
-
-
-Enable debug mode.
-
-
-
-`tf_random_seed`
-
-
-Fixed random seed for deterministic execution across runs
-for debugging.
-
-
-Converts the retrained model to tflite format and saves it.
-
-The exported TFLite model has the following inputs & outputs:
-One input:
- image: a float32 tensor of shape [1, height, width, 3] containing the
- normalized input image. `self.config.image_size` is [height, width].
-
-
-
-
-
Four Outputs
-
-
-
-`detection_boxes`
-
-
-a float32 tensor of shape [1, num_boxes, 4] with box
-locations.
-
-
-
-`detection_classes`
-
-
-a float32 tensor of shape [1, num_boxes] with class
-indices.
-
-
-
-`detection_scores`
-
-
-a float32 tensor of shape [1, num_boxes] with class
-scores.
-
-
-
-`num_boxes`
-
-
-a float32 tensor of size 1 containing the number of detected
-boxes.
-
-
-
-
-
-
-
-
-
-
Args
-
-
-
-`model`
-
-
-The EfficientDetNet model used for training, which doesn't have pre-
-and post-processing.
-
- Dict, map label integer ids to string label names such as {1:
-'person', 2: 'notperson'}. 0 is the reserved key for `background` and
- doesn't need to be included in `label_map`. Label names can't be
- duplicated.
-
-
-
-`representative_data`
-
-
- Representative dataset for full integer
-quantization. Used when converting the keras model to the TFLite model
-with full integer quantization.
-
-
-Loads data and trains the model for object detection.
-
-
-
-
-
-
Args
-
-
-
-`train_data`
-
-
-Training data.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`validation_data`
-
-
-Validation data. If None, skips validation process.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`batch_size`
-
-
-Batch size for training.
-
-
-
-`train_whole_model`
-
-
-Boolean, False by default. If true, train the whole
-model. Otherwise, only train the layers that do not match
-`model_spec.config.var_freeze_expr`.
-
-
-
-
-`do_train`
-
-
-Whether to run training.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-An instance based on ObjectDetector.
-
-
-
-
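-A minimal sketch of this workflow (hedged; paths and the label map are
-hypothetical, and `from_pascal_voc` is one of the documented loaders):
-
-```python
-from tflite_model_maker import object_detector
-
-spec = object_detector.EfficientDetLite0Spec()
-train_data = object_detector.DataLoader.from_pascal_voc(
-    'images/', 'annotations/', label_map={1: 'person', 2: 'notperson'})
-
-model = object_detector.create(
-    train_data, model_spec=spec, epochs=50, batch_size=8,
-    train_whole_model=True)
-model.export(export_dir='.')
-```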
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/question_answer.md b/site/en/lite/api_docs/python/tflite_model_maker/question_answer.md
deleted file mode 100644
index 0dbc9caeb6..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/question_answer.md
+++ /dev/null
@@ -1,52 +0,0 @@
-page_type: reference
-description: APIs to train a model that can answer questions based on a predefined text.
-
-
-
-
-
-
-
-
-
-
-APIs to train a model that can answer questions based on a predefined text.
-
-
-
-#### Task guide:
-
-
-https://www.tensorflow.org/lite/tutorials/model_maker_question_answer
-
-## Classes
-
-[`class BertQaSpec`](../tflite_model_maker/question_answer/BertQaSpec): A specification of BERT model for question answering.
-
-[`class DataLoader`](../tflite_model_maker/question_answer/DataLoader): DataLoader for question answering.
-
-[`class QuestionAnswer`](../tflite_model_maker/question_answer/QuestionAnswer): QuestionAnswer class for inference and exporting to tflite.
-
-## Functions
-
-[`MobileBertQaSpec(...)`](../tflite_model_maker/question_answer/MobileBertQaSpec): Creates MobileBert model spec for the question answer task. See also: tflite_model_maker.question_answer.BertQaSpec.
-
-[`MobileBertQaSquadSpec(...)`](../tflite_model_maker/question_answer/MobileBertQaSquadSpec): Creates MobileBert model spec that's already retrained on SQuAD1.1 for the question answer task. See also: tflite_model_maker.question_answer.BertQaSpec.
-
-[`create(...)`](../tflite_model_maker/question_answer/create): Loads data and trains the model for question answering.
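-A minimal sketch of the task workflow (hedged; the file name is
-hypothetical, and `from_squad` is the documented loader):
-
-```python
-from tflite_model_maker import question_answer
-
-spec = question_answer.MobileBertQaSquadSpec()
-train_data = question_answer.DataLoader.from_squad(
-    'train-v1.1.json', spec, is_training=True)
-
-model = question_answer.create(train_data, model_spec=spec)
-model.export(export_dir='.')
-```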
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/BertQaSpec.md b/site/en/lite/api_docs/python/tflite_model_maker/question_answer/BertQaSpec.md
deleted file mode 100644
index d19c210628..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/BertQaSpec.md
+++ /dev/null
@@ -1,629 +0,0 @@
-page_type: reference
-description: A specification of BERT model for question answering.
-
-
-
-
-
-
-
-The stride when we do a sliding window approach to take chunks
-of the documents.
-
-
-
-`dropout_rate`
-
-
-The rate for dropout.
-
-
-
-`initializer_range`
-
-
-The stdev of the truncated_normal_initializer for
-initializing all weight matrices.
-
-
-
-`learning_rate`
-
-
-The initial learning rate for Adam.
-
-
-
-`distribution_strategy`
-
-
- A string specifying which distribution strategy to
-use. Accepted values are 'off', 'one_device', 'mirrored',
-'parameter_server', 'multi_worker_mirrored', and 'tpu' -- case
-insensitive. 'off' means not to use Distribution Strategy; 'tpu' means
-to use TPUStrategy using `tpu_address`.
-
-
-
-`num_gpus`
-
-
-How many GPUs to use at each worker with the
-DistributionStrategies API. The default is -1, which means utilize all
-available GPUs.
-
-
-
-`tpu`
-
-
-TPU address to connect to.
-
-
-
-`trainable`
-
-
-boolean, whether pretrain layer is trainable.
-
-
-
-`predict_batch_size`
-
-
-Batch size for prediction.
-
-
-
-`do_lower_case`
-
-
-boolean, whether to lower case the input text. Should be
-True for uncased models and False for cased models.
-
-
-
-`is_tf2`
-
-
-boolean, whether the hub module is in TensorFlow 2.x format.
-
-
-
-`tflite_input_name`
-
-
-Dict, input names for the TFLite model.
-
-
-
-`tflite_output_name`
-
-
-Dict, output names for the TFLite model.
-
-
-
-`init_from_squad_model`
-
-
-boolean, whether to initialize from the model that
-is already retrained on Squad 1.1.
-
-tf.data.Dataset, training data to be fed in
-tf.keras.Model.fit().
-
-
-
-`epochs`
-
-
-Integer, training epochs.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-`**kwargs`
-
-
-Other parameters used in the tf.keras.Model.fit().
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-tf.keras.Model, the keras model that's already trained.
-
-A tf.data.Dataset object that contains a potentially large set of
-elements, where each element is a pair of (input_data, target). The
-`input_data` means the raw input data, like an image, a text etc., while
-the `target` means some ground truth of the raw input data, such as the
-classification label of the image etc.
-
-
-
-`size`
-
-
-The size of the dataset. tf.data.Dataset doesn't support a function
-to get the length directly since it's lazy-loaded and may be infinite.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset will be preprocessed,
-and this function can return an int representing the size of the dataset.
-
-The stride when we do a sliding window approach to take chunks
-of the documents.
-
-
-
-`dropout_rate`
-
-
-The rate for dropout.
-
-
-
-`initializer_range`
-
-
-The stdev of the truncated_normal_initializer for
-initializing all weight matrices.
-
-
-
-`learning_rate`
-
-
-The initial learning rate for Adam.
-
-
-
-`distribution_strategy`
-
-
- A string specifying which distribution strategy to
-use. Accepted values are 'off', 'one_device', 'mirrored',
-'parameter_server', 'multi_worker_mirrored', and 'tpu' -- case
-insensitive. 'off' means not to use Distribution Strategy; 'tpu' means
-to use TPUStrategy using `tpu_address`.
-
-
-
-`num_gpus`
-
-
-How many GPUs to use at each worker with the
-DistributionStrategies API. The default is -1, which means utilize all
-available GPUs.
-
-
-
-`tpu`
-
-
-TPU address to connect to.
-
-
-
-`trainable`
-
-
-boolean, whether pretrain layer is trainable.
-
-
-
-`predict_batch_size`
-
-
-Batch size for prediction.
-
-
-
-`do_lower_case`
-
-
-boolean, whether to lower case the input text. Should be
-True for uncased models and False for cased models.
-
-
-
-`is_tf2`
-
-
-boolean, whether the hub module is in TensorFlow 2.x format.
-
-
-
-`tflite_input_name`
-
-
-Dict, input names for the TFLite model.
-
-
-
-`tflite_output_name`
-
-
-Dict, output names for the TFLite model.
-
-
-
-`init_from_squad_model`
-
-
-boolean, whether to initialize from the model that
-is already retrained on Squad 1.1.
-
-
-
-`default_batch_size`
-
-
-Default batch size for training.
-
-
-
-`name`
-
-
-Name of the object.
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/MobileBertQaSquadSpec.md b/site/en/lite/api_docs/python/tflite_model_maker/question_answer/MobileBertQaSquadSpec.md
deleted file mode 100644
index 8bacef70a7..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/MobileBertQaSquadSpec.md
+++ /dev/null
@@ -1,214 +0,0 @@
-page_type: reference
-description: Creates MobileBert model spec that's already retrained on SQuAD1.1 for the question answer task. See also: tflite_model_maker.question_answer.BertQaSpec.
-
-
-
-
-
-
-
-The stride when we do a sliding window approach to take chunks
-of the documents.
-
-
-
-`dropout_rate`
-
-
-The rate for dropout.
-
-
-
-`initializer_range`
-
-
-The stdev of the truncated_normal_initializer for
-initializing all weight matrices.
-
-
-
-`learning_rate`
-
-
-The initial learning rate for Adam.
-
-
-
-`distribution_strategy`
-
-
- A string specifying which distribution strategy to
-use. Accepted values are 'off', 'one_device', 'mirrored',
-'parameter_server', 'multi_worker_mirrored', and 'tpu' -- case
-insensitive. 'off' means not to use Distribution Strategy; 'tpu' means
-to use TPUStrategy using `tpu_address`.
-
-
-
-`num_gpus`
-
-
-How many GPUs to use at each worker with the
-DistributionStrategies API. The default is -1, which means utilize all
-available GPUs.
-
-
-
-`tpu`
-
-
-TPU address to connect to.
-
-
-
-`trainable`
-
-
-boolean, whether pretrain layer is trainable.
-
-
-
-`predict_batch_size`
-
-
-Batch size for prediction.
-
-
-
-`do_lower_case`
-
-
-boolean, whether to lower case the input text. Should be
-True for uncased models and False for cased models.
-
-
-
-`is_tf2`
-
-
-boolean, whether the hub module is in TensorFlow 2.x format.
-
-
-
-`tflite_input_name`
-
-
-Dict, input names for the TFLite model.
-
-
-
-`tflite_output_name`
-
-
-Dict, output names for the TFLite model.
-
-
-
-`init_from_squad_model`
-
-
-boolean, whether to initialize from the model that
-is already retrained on Squad 1.1.
-
-
-
-`default_batch_size`
-
-
-Default batch size for training.
-
-
-
-`name`
-
-
-Name of the object.
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/QuestionAnswer.md b/site/en/lite/api_docs/python/tflite_model_maker/question_answer/QuestionAnswer.md
deleted file mode 100644
index ad720c2913..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/question_answer/QuestionAnswer.md
+++ /dev/null
@@ -1,514 +0,0 @@
-page_type: reference
-description: QuestionAnswer class for inference and exporting to tflite.
-
-
-
-
-
-
-
-
-Loads data and trains the model for question answering.
-
-
-
-
-
-
Args
-
-
-
-`train_data`
-
-
-Training data.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`batch_size`
-
-
-Batch size for training.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-list of dict, each vocab item is described above.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset will be preprocessed,
-and this function can return an int representing the size of the dataset.
-
-
-Generates a data loader from the movielens dataset.
-
-The method downloads and prepares the dataset, then generates data loaders
-for train/eval.
-
-For `movielens` data format, see:
-
-- function `_generate_fake_data` in `recommendation_testutil.py`
-- Or, zip file: http://files.grouplens.org/datasets/movielens/ml-1m.zip
-
-
-
-
-
Args
-
-
-
-`data_dir`
-
-
-str, path to dataset containing (unzipped) text data.
-
-
-
-`data_tag`
-
-
-str, specify dataset in {'train', 'test'}.
-
-
-
-`input_spec`
-
-
-InputSpec, specify data format for input and embedding.
-
-
-
-`generated_examples_dir`
-
-
-str, path to generate preprocessed examples.
-(default: same as data_dir)
-
-
-
-`min_timeline_length`
-
-
-int, min timeline length to split train/eval set.
-
-
-
-`max_context_length`
-
-
-int, max context length as one input.
-
-
-
-`max_context_movie_genre_length`
-
-
-int, max context length of movie genre as
-one input.
-
-@classmethod
-get_num_classes(
- meta
-) -> int
-
-
-Gets number of classes.
-
-0 is reserved. Number of classes is Max Id + 1, e.g., if Max Id = 100,
-then classes are [0, 100], that is 101 classes in total.
-
-
-
-
-Loads vocab from file.
-
-The vocab file should be in json format: a list of lists [size=4], where
-the 4 elements are ordered as:
- [id=int, title=str, genres=str joined with '|', count=int]
-It is generated when preparing the movielens dataset.
-
-
-
-
-
Args
-
-
-
-`vocab_file`
-
-
-str, path to vocab file.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-
-`vocab`
-
-
-an OrderedDict maps id to item. Each item represents a movie
-{
- 'id': int,
- 'title': str,
- 'genres': list[str],
- 'count': int,
-}
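-A hedged sketch of reading such a vocab file into the mapping described
-above (the file name is hypothetical):
-
-```python
-import json
-from collections import OrderedDict
-
-with open('movie_vocab.json') as f:
-    rows = json.load(f)  # each row: [id, title, 'genre1|genre2', count]
-
-vocab = OrderedDict(
-    (row[0], {'id': row[0], 'title': row[1],
-              'genres': row[2].split('|'), 'count': row[3]})
-    for row in rows)
-```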
-
-Embedding dataset used to build on-device ScaNN index file. The
-dataset shape should be (dataset_size, embedding_dim). If None,
-`dataset` will be generated from raw input data later.
-
-
-
-`metadata`
-
-
- The metadata for each data in the dataset. The length of
-`metadata` should be the same as `dataset` and passed in the same order
-as `dataset`. If `dataset` is set, `metadata` should be set as well.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`dataset`
-
-
-Gets the dataset.
-
-Due to performance considerations, we don't return a copy, but the
-returned `self._dataset` should never be changed.
-
-
-
-`embedder_path`
-
-
-Gets the path to the TFLite Embedder model file.
-
-
-Appends the dataset.
-
-This function doesn't check whether the embedders of the two data loaders
-are the same; users are responsible for keeping them identical.
-
-
-
-
-
Args
-
-
-
-`data_loader`
-
-
-The data loader whose data will be appended.
-
-
-
-Creates DataLoader for the Image Searcher task.
-
-
-
-
-
-
Args
-
-
-
-`image_embedder_path`
-
-
-Path to the ".tflite" image embedder model.
-
-
-
-`metadata_type`
-
-
-Type of MetadataLoader to load metadata for each input
-image based on image path. By default, loads the file name as metadata
-for each input image.
-
-
-
-`l2_normalize`
-
-
-Whether to normalize the returned feature vector with L2
-norm. Use this option only if the model does not already contain a
-native L2_NORMALIZATION TF Lite Op. In most cases, this is already the
-case and L2 norm is thus achieved through TF Lite inference.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-DataLoader object created for the Image Searcher task.
-
-
-Loads image data from folder.
-
-Users can load images from different folders one by one. For instance,
-
-```
-# Creates data_loader instance.
-data_loader = image_searcher_dataloader.DataLoader.create(tflite_path)
-
-# Loads images, first from `image_path1` and secondly from `image_path2`.
-data_loader.load_from_folder(image_path1)
-data_loader.load_from_folder(image_path2)
-```
-
-
-
-
-
Args
-
-
-
-`path`
-
-
-image directory to be loaded.
-
-
-
-`mode`
-
-
-mode in which the file is opened. Used when metadata_type is
-FROM_DAT_FILE. Only 'r' and 'rb' are supported: 'r' means opening for
-reading, 'rb' means opening for reading binary.
-
-
-
-ScaNN
-(https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html) is
-a highly efficient and scalable vector nearest neighbor retrieval
-library from Google Research. We use ScaNN to build the on-device search
-index, and do on-device retrieval with a simplified implementation.
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`distance_measure`
-
-
-How to compute the distance. Allowed values are
-'dot_product' and 'squared_l2'. Please note that when distance is
-'dot_product', we actually compute the negative dot product between query
-and database vectors, to preserve the notion that "smaller is closer".
-
-
-
-`tree`
-
-
-Configure partitioning. If not set, no partitioning is performed.
-
-
-
-`score_ah`
-
-
-Configure asymmetric hashing. Must define either this or
-`score_brute_force`.
-
-
-
-`score_brute_force`
-
-
-Configure brute force. Must define either this or `score_ah`.
-
-
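-A hedged sketch of assembling these options for the Model Maker searcher
-task (the concrete values are illustrative, not recommendations):
-
-```python
-from tflite_model_maker import searcher
-
-scann_options = searcher.ScaNNOptions(
-    distance_measure='dot_product',
-    tree=searcher.Tree(num_leaves=100, num_leaves_to_search=10),
-    score_ah=searcher.ScoreAH(
-        dimensions_per_block=2, anisotropic_quantization_threshold=0.2))
-```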
-
-In ScaNN we use PQ to compress the database embeddings, but not the query
-embedding. We call this asymmetric hashing. See
-https://research.google/pubs/pub41694/
-
-
-
-
-
-
-
Attributes
-
-
-
-`dimensions_per_block`
-
-
-How many dimensions in each PQ block. If the embedding
-vector dimensionality is a multiple of this value, there will be
-`number_of_dimensions / dimensions_per_block` PQ blocks. Otherwise, the
-last block will be the remainder. For example, if a vector has 12
-dimensions, and `dimensions_per_block` is 2, then there will be 6
-2-dimension blocks. However, if the vector has 13 dimensions and
-`dimensions_per_block` is still 2, there will be 6 2-dimension blocks and
-one 1-dimension block.
-
-
-
-`anisotropic_quantization_threshold`
-
-
-If this value is set, we will penalize
-the quantization error that's parallel to the original vector differently
-than the orthogonal error. A generally recommended value for this
-parameter would be 0.2. For more details, please look at ScaNN's 2020 ICML
-paper https://arxiv.org/abs/1908.10396 and the Google AI Blog post
-https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html
-
-
-
-`training_sample_size`
-
-
-How many database points to sample for training the
-K-Means for PQ centers. A good starting value would be 100k or the whole
-dataset if it's smaller than that.
-
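-The `dimensions_per_block` arithmetic above can be sanity-checked with a
-tiny helper (a sketch, not part of the API):
-
-```python
-import math
-
-def num_pq_blocks(dims: int, dims_per_block: int) -> int:
-    # Full blocks, plus one remainder block when dims isn't a multiple.
-    return math.ceil(dims / dims_per_block)
-
-assert num_pq_blocks(12, 2) == 6  # six 2-dimension blocks
-assert num_pq_blocks(13, 2) == 7  # six 2-dimension blocks, one 1-dimension
-```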
-The cache directory to save serialized ScaNN and/or the tflite
-model. When cache_dir is not set, a temporary folder will be created and
-will **not** be removed automatically, so that it can be used later.
-
-
-Appends the dataset.
-
-This function doesn't check whether the embedders of the two data loaders
-are the same; users are responsible for keeping them identical.
-
-
-
-
-
Args
-
-
-
-`data_loader`
-
-
-The data loader whose data will be appended.
-
-
-Creates DataLoader for the Text Searcher task.
-
-
-
-
-
-
Args
-
-
-
-`text_embedder_path`
-
-
-Path to the ".tflite" text embedder model. case and L2
-norm is thus achieved through TF Lite inference.
-
-
-
-`l2_normalize`
-
-
-Whether to normalize the returned feature vector with L2
-norm. Use this option only if the model does not already contain a
-native L2_NORMALIZATION TF Lite Op. In most cases, this is already the
-case and L2 norm is thus achieved through TF Lite inference.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-DataLoader object created for the Text Searcher task.
-
-
-Loads text data from csv file that includes a "header" line with titles.
-
-Users can load text from different csv files one by one. For instance,
-
-```
-# Creates data_loader instance.
-data_loader = text_searcher_dataloader.DataLoader.create(tflite_path)
-
-# Loads text, first from `text_path1` and secondly from `text_path2`.
-data_loader.load_from_csv(
- text_path1, text_column='text', metadata_column='metadata')
-data_loader.load_from_csv(
- text_path2, text_column='text', metadata_column='metadata')
-```
-
-
-
-
-
Args
-
-
-
-`path`
-
-
-Text csv file path to be loaded.
-
-
-
-`text_column`
-
-
-Column name for input text.
-
-
-
-`metadata_column`
-
-
-Column name for user metadata associated with each input
-text.
-
-
-
-`delimiter`
-
-
-Character used to separate fields.
-
-
-
-`quotechar`
-
-
-Character used to quote fields containing special characters.
-
-
-
-In ScaNN, we use single layer K-Means tree to partition the database (index)
-as a way to reduce search space.
-
-
-
-
-
-
-
Attributes
-
-
-
-`num_leaves`
-
-
-How many leaves (partitions) to have on the K-Means tree. In
-general, a good starting point would be the square root of the database
-size.
-
-
-
-`num_leaves_to_search`
-
-
-During inference ScaNN will compare the query vector
-against all the partition centroids and select the closest
-`num_leaves_to_search` ones to search in. The more leaves to search, the
-better the retrieval quality, and the higher the computational cost.
-
-
-
-`training_sample_size`
-
-
-How many database embeddings to sample for the K-Means
-training. Generally, you want to use a large enough sample of the database
-to train K-Means so that it's representative enough. However, a large
-sample can also lead to longer training time. A good starting value would
-be 100k, or the whole dataset if it's smaller than that.
-
-
-
-`min_partition_size`
-
-
-Smallest allowable cluster size. Any clusters smaller
-than this will be removed, and their data points will be merged with
-other clusters. Recommended to be 1/10 of the average cluster size (size
-of the database divided by `num_leaves`).
-
-
-
-`training_iterations`
-
-
-How many iterations to train K-Means.
-
-
-
-`spherical`
-
-
-If true, L2 normalize the K-Means centroids.
-
-
-
-`quantize_centroids`
-
-
-If true, quantize centroids to int8.
-
-
-
-`random_init`
-
-
-If true, use random init. Otherwise use K-Means++.
-
-
-
-
-
-
-## Methods
-
-
__eq__
-
-
-__eq__(
- other
-)
-
-
-
-
-
-
-
-
-
-
-
-
-
Class Variables
-
-
-
-min_partition_size
-
-
-`50`
-
-
-
-quantize_centroids
-
-
-`False`
-
-
-
-random_init
-
-
-`True`
-
-
-
-spherical
-
-
-`False`
-
-
-
-training_iterations
-
-
-`12`
-
-
-
-training_sample_size
-
-
-`100000`
-
-
-
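-The sizing guidance above (`num_leaves`, `min_partition_size`) can be
-written as a quick rule-of-thumb calculation (a sketch; the database size
-is hypothetical):
-
-```python
-import math
-
-database_size = 1_000_000                        # hypothetical index size
-num_leaves = int(math.sqrt(database_size))       # ~1000: sqrt of database size
-avg_cluster_size = database_size / num_leaves    # ~1000 points per leaf
-min_partition_size = int(avg_cluster_size / 10)  # ~100: 1/10 of avg cluster
-```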
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier.md b/site/en/lite/api_docs/python/tflite_model_maker/text_classifier.md
deleted file mode 100644
index a29756c82f..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier.md
+++ /dev/null
@@ -1,52 +0,0 @@
-page_type: reference
-description: APIs to train a text classification model.
-
-
-
-
-
-
-
-
-
-
-APIs to train a text classification model.
-
-
-
-#### Task guide:
-
-
-https://www.tensorflow.org/lite/tutorials/model_maker_text_classification
-
-## Classes
-
-[`class AverageWordVecSpec`](../tflite_model_maker/text_classifier/AverageWordVecSpec): A specification of averaging word vector model.
-
-[`class BertClassifierSpec`](../tflite_model_maker/text_classifier/BertClassifierSpec): A specification of BERT model for text classification.
-
-[`class DataLoader`](../tflite_model_maker/text_classifier/DataLoader): DataLoader for text classifier.
-
-[`class TextClassifier`](../tflite_model_maker/text_classifier/TextClassifier): TextClassifier class for inference and exporting to tflite.
-
-## Functions
-
-[`MobileBertClassifierSpec(...)`](../tflite_model_maker/text_classifier/MobileBertClassifierSpec): Creates MobileBert model spec for the text classification task. See also: tflite_model_maker.text_classifier.BertClassifierSpec.
-
-[`create(...)`](../tflite_model_maker/text_classifier/create): Loads data and trains the model for text classification.
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/AverageWordVecSpec.md b/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/AverageWordVecSpec.md
deleted file mode 100644
index a8741b8983..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/AverageWordVecSpec.md
+++ /dev/null
@@ -1,359 +0,0 @@
-page_type: reference
-description: A specification of averaging word vector model.
-
-
-
-
-
-
-
-The stdev of the truncated_normal_initializer for
-initializing all weight matrices.
-
-
-
-`learning_rate`
-
-
-The initial learning rate for Adam.
-
-
-
-`distribution_strategy`
-
-
- A string specifying which distribution strategy to
-use. Accepted values are 'off', 'one_device', 'mirrored',
-'parameter_server', 'multi_worker_mirrored', and 'tpu' -- case
-insensitive. 'off' means not to use Distribution Strategy; 'tpu' means
-to use TPUStrategy using `tpu_address`.
-
-
-
-`num_gpus`
-
-
-How many GPUs to use at each worker with the
-DistributionStrategies API. The default is -1, which means utilize all
-available GPUs.
-
-
-
-`tpu`
-
-
-TPU address to connect to.
-
-
-
-`trainable`
-
-
-boolean, whether pretrain layer is trainable.
-
-
-
-`do_lower_case`
-
-
-boolean, whether to lower case the input text. Should be
-True for uncased models and False for cased models.
-
-
-
-`is_tf2`
-
-
-boolean, whether the hub module is in TensorFlow 2.x format.
-
-
-
-`name`
-
-
-The name of the object.
-
-
-
-`tflite_input_name`
-
-
-Dict, input names for the TFLite model.
-
-
-
-`default_batch_size`
-
-
-Default batch size for training.
-
-
-
-`index_to_label`
-
-
-List of labels in the training data. e.g. ['neg', 'pos'].
-
-
-Creates classifier and runs the classifier training.
-
-
-
-
-
-
Args
-
-
-
-`train_ds`
-
-
-tf.data.Dataset, training data to be fed in
-tf.keras.Model.fit().
-
-
-
-`validation_ds`
-
-
-tf.data.Dataset, validation data to be fed in
-tf.keras.Model.fit().
-
-
-
-`epochs`
-
-
-Integer, training epochs.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-`num_classes`
-
-
-Integer, number of classes.
-
-
-
-`**kwargs`
-
-
-Other parameters used in the tf.keras.Model.fit().
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-tf.keras.Model, the keras model that's already trained.
-
-A tf.data.Dataset object that contains a potentially large set of
-elements, where each element is a pair of (input_data, target). The
-`input_data` means the raw input data, like an image, a text etc., while
-the `target` means some ground truth of the raw input data, such as the
-classification label of the image etc.
-
-
-
-`size`
-
-
-The size of the dataset. tf.data.Dataset doesn't support a function
-to get the length directly since it's lazy-loaded and may be infinite.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`num_classes`
-
-
-
-
-
-
-`size`
-
-
-Returns the size of the dataset.
-
-Note that this function may return None because the exact size of the
-dataset isn't a necessary parameter to create an instance of this class,
-and tf.data.Dataset doesn't support a function to get the length directly
-since it's lazy-loaded and may be infinite.
-In most cases, however, when an instance of this class is created by helper
-functions like 'from_folder', the size of the dataset will be preprocessed,
-and this function can return an int representing the size of the dataset.
-
-
-Loads text with labels and preprocesses text according to `model_spec`.
-
-Assumes the text data of the same label are in the same subdirectory;
-each file is one text.
-
-
-
-
-
Args
-
-
-
-`filename`
-
-
-Name of the file.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`is_training`
-
-
-Whether the loaded data is for training or not.
-
-
-
-`class_labels`
-
-
-Class labels that should be considered. Name of the
-subdirectory not in `class_labels` will be ignored. If None, all the
-subdirectories will be considered.
-
-
-
-`shuffle`
-
-
-boolean, if shuffle, random shuffle data.
-
-
-
-`cache_dir`
-
-
-The cache directory to save preprocessed data. If None,
-generates a temporary directory to cache preprocessed data.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-TextDataset containing text, labels and other related info.
-
-The stdev of the truncated_normal_initializer for
-initializing all weight matrices.
-
-
-
-`learning_rate`
-
-
-The initial learning rate for Adam.
-
-
-
-`distribution_strategy`
-
-
- A string specifying which distribution strategy to
-use. Accepted values are 'off', 'one_device', 'mirrored',
-'parameter_server', 'multi_worker_mirrored', and 'tpu' -- case
-insensitive. 'off' means not to use Distribution Strategy; 'tpu' means
-to use TPUStrategy using `tpu_address`.
-
-
-
-`num_gpus`
-
-
-How many GPUs to use at each worker with the
-DistributionStrategies API. The default is -1, which means utilize all
-available GPUs.
-
-
-
-`tpu`
-
-
-TPU address to connect to.
-
-
-
-`trainable`
-
-
-boolean, whether pretrain layer is trainable.
-
-
-
-`do_lower_case`
-
-
-boolean, whether to lower case the input text. Should be
-True for uncased models and False for cased models.
-
-
-
-`is_tf2`
-
-
-boolean, whether the hub module is in TensorFlow 2.x format.
-
-
-
-`name`
-
-
-The name of the object.
-
-
-
-`tflite_input_name`
-
-
-Dict, input names for the TFLite model.
-
-
-
-`default_batch_size`
-
-
-Default batch size for training.
-
-
-
-`index_to_label`
-
-
-List of labels in the training data. e.g. ['neg', 'pos'].
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/TextClassifier.md b/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/TextClassifier.md
deleted file mode 100644
index 9766b8c687..0000000000
--- a/site/en/lite/api_docs/python/tflite_model_maker/text_classifier/TextClassifier.md
+++ /dev/null
@@ -1,535 +0,0 @@
-page_type: reference
-description: TextClassifier class for inference and exporting to tflite.
-
-
-
-
-
-
-
-
-Loads data and trains the model for text classification.
-
-
-
-
-
-
Args
-
-
-
-`train_data`
-
-
-Training data.
-
-
-
-`model_spec`
-
-
-Specification for the model.
-
-
-
-`validation_data`
-
-
-Validation data. If None, skips validation process.
-
-
-
-`batch_size`
-
-
-Batch size for training.
-
-
-
-`epochs`
-
-
-Number of epochs for training.
-
-
-
-`steps_per_epoch`
-
-
-Integer or None. Total number of steps (batches of
-samples) before declaring one epoch finished and starting the next
-epoch. If `steps_per_epoch` is None, the epoch will run until the input
-dataset is exhausted.
-
-
-
-
-`shuffle`
-
-
-Whether the data should be shuffled.
-
-
-
-`do_train`
-
-
-Whether to run training.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-An instance based on TextClassifier.
-
-
-
-
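-A minimal sketch of this workflow (hedged; the csv path and column names
-are hypothetical):
-
-```python
-from tflite_model_maker import text_classifier
-from tflite_model_maker.text_classifier import AverageWordVecSpec, DataLoader
-
-spec = AverageWordVecSpec()
-train_data = DataLoader.from_csv(
-    'train.csv', text_column='sentence', label_column='label',
-    model_spec=spec, is_training=True)
-
-model = text_classifier.create(train_data, model_spec=spec, epochs=10)
-model.export(export_dir='.')
-```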
diff --git a/site/en/lite/api_docs/python/tflite_support.md b/site/en/lite/api_docs/python/tflite_support.md
deleted file mode 100644
index bcd1c4691a..0000000000
--- a/site/en/lite/api_docs/python/tflite_support.md
+++ /dev/null
@@ -1,54 +0,0 @@
-page_type: reference
-description: The TensorFlow Lite Support Library.
-
-
-
-
-
-
-
-
-
-
-TensorFlow Lite metadata tools.
-
-
-
-## Classes
-
-[`class MetadataDisplayer`](../tflite_support/metadata/MetadataDisplayer): Displays metadata and associated file info in human-readable format.
-
-[`class MetadataPopulator`](../tflite_support/metadata/MetadataPopulator): Packs metadata and associated files into TensorFlow Lite model file.
-
-## Functions
-
-[`convert_to_json(...)`](../tflite_support/metadata/convert_to_json): Converts the metadata into a json string.
-
-[`get_metadata_buffer(...)`](../tflite_support/metadata/get_metadata_buffer): Returns the metadata in the model file as a buffer.
-
-[`get_path_to_datafile(...)`](../tflite_support/metadata/get_path_to_datafile): Gets the path to the specified file in the data dependencies.
diff --git a/site/en/lite/api_docs/python/tflite_support/metadata/MetadataDisplayer.md b/site/en/lite/api_docs/python/tflite_support/metadata/MetadataDisplayer.md
deleted file mode 100644
index 60746e88b8..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/metadata/MetadataDisplayer.md
+++ /dev/null
@@ -1,300 +0,0 @@
-page_type: reference
-description: Displays metadata and associated file info in human-readable format.
-
-
-
-
-
-
-
-
-
-
-
-
-MetadataPopulator can be used to populate metadata and model associated files
-into a model file or a model buffer (in bytearray). It can also help to
-inspect the list of files that have been packed into the model or are
-supposed to be packed into the model.
-
-The metadata file (or buffer) should be generated based on the metadata
-schema:
-third_party/tensorflow/lite/schema/metadata_schema.fbs
-
-#### Example usage:
-
-
-Populate metadata and a label file into an image classifier model.
-
-First, based on metadata_schema.fbs, generate the metadata for this image
-classifier model using the Flatbuffers API. Attach the label file onto the
-output tensor (the tensor of probabilities) in the metadata.
-
-Then, pack the metadata and label file into the model as follows.
-
- ```python
- # Populating a metadata file (or a metadata buffer) and associated files
- # to a model file:
- populator = MetadataPopulator.with_model_file(model_file)
- # For a metadata buffer (bytearray read from the metadata file), use:
- # populator.load_metadata_buffer(metadata_buf)
- populator.load_metadata_file(metadata_file)
- populator.load_associated_files(["label.txt"])
- # For associated file buffers (bytearray read from the file), use:
- # populator.load_associated_file_buffers({"label.txt": b"file content"})
- populator.populate()
-
- # Populating a metadata file (or a metadata buffer) and associated files
- # to a model buffer:
- populator = MetadataPopulator.with_model_buffer(model_buf)
- populator.load_metadata_file(metadata_file)
- populator.load_associated_files(["label.txt"])
- populator.populate()
- # Writing the updated model buffer into a file.
- updated_model_buf = populator.get_model_buffer()
- with open("updated_model.tflite", "wb") as f:
-   f.write(updated_model_buf)
-
- # Transferring metadata and associated files from another TFLite model:
- populator_dst = MetadataPopulator.with_model_buffer(model_buf)
- populator_dst.load_metadata_and_associated_files(src_model_buf)
- populator_dst.populate()
- updated_model_buf = populator_dst.get_model_buffer()
- with open("updated_model.tflite", "wb") as f:
-   f.write(updated_model_buf)
- ```
-
-Note that existing metadata buffer (if applied) will be overridden by the new
-metadata buffer.
-
-
-
-
-
Args
-
-
-
-`model_file`
-
-
-valid path to a TensorFlow Lite model file.
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`IOError`
-
-
-File not found.
-
-
-
-`ValueError`
-
-
-the model does not have the expected flatbuffer identifier.
-
-
-Gets a list of associated files recorded in metadata of the model file.
-
-Associated files may be attached to a model, a subgraph, or an input/output
-tensor.
-
-
-
-
-Loads the associated file buffers (in bytearray) to be populated.
-
-
-
-
-
-
Args
-
-
-
-`associated_files`
-
-
-a dictionary of associated file names and corresponding
-file buffers, such as {"file.txt": b"file content"}. If file paths are
-passed in for the file names, only the basename will be populated.
-
-
-
-
-
-
-The path is relative to the file calling the function.
-
-It's a simple replacement of
-"tensorflow.python.platform.resource_loader.get_path_to_datafile".
-
-
-
-
-
Args
-
-
-
-`path`
-
-
-a string resource path relative to the calling file.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-The path to the specified file present in the data attribute of py_test
-or py_binary.
-
-
-
-
-TF Lite Metadata Writer API.
-
-
-This module provides interfaces for writing metadata for common model types
-supported by the task library, such as:
-
- * Image classification
- * Object detection
- * Image segmentation
- * (Bert) Natural language classification
- * Audio classification
-
-It is provided as part of the `tflite-support` package:
-
-```
-pip install tflite-support
-```
-
-Learn more about this API in the [metadata writer
-tutorial](https://www.tensorflow.org/lite/convert/metadata_writer_tutorial).
-
-## Modules
-
-[`audio_classifier`](../tflite_support/metadata_writers/audio_classifier) module: Writes metadata and label file to the audio classifier models.
-
-[`bert_nl_classifier`](../tflite_support/metadata_writers/bert_nl_classifier) module: Writes metadata and label file to the Bert NL classifier models.
-
-[`image_classifier`](../tflite_support/metadata_writers/image_classifier) module: Writes metadata and label file to the image classifier models.
-
-[`image_segmenter`](../tflite_support/metadata_writers/image_segmenter) module: Writes metadata and label file to the image segmenter models.
-
-[`metadata_info`](../tflite_support/metadata_writers/metadata_info) module: Helper classes for common model metadata information.
-
-[`nl_classifier`](../tflite_support/metadata_writers/nl_classifier) module: Writes metadata and label file to the NL classifier models.
-
-[`object_detector`](../tflite_support/metadata_writers/object_detector) module: Writes metadata and label file to the object detector models.
-
-[`writer_utils`](../tflite_support/metadata_writers/writer_utils) module: Helper methods for writing metadata into TFLite models.
diff --git a/site/en/lite/api_docs/python/tflite_support/metadata_writers/audio_classifier.md b/site/en/lite/api_docs/python/tflite_support/metadata_writers/audio_classifier.md
deleted file mode 100644
index a16506d8ae..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/metadata_writers/audio_classifier.md
+++ /dev/null
@@ -1,43 +0,0 @@
-page_type: reference
-description: Writes metadata and label file to the audio classifier models.
-
-
-
-
-
-
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`sample_rate`
-
-
-the sample rate in Hz when the audio was captured.
-
-
-
-`channels`
-
-
-the channel count of the audio.
-
-
-
-`label_file_paths`
-
-
-paths to the label files [1] in the classification
-tensor. Pass in an empty list if the model does not have any label file.
-
-
-
-`score_calibration_md`
-
-
-information of the score calibration operation [2]
- in the classification tensor. Optional if the model does not use score
- calibration.
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-[2]:
- https://github.com/tensorflow/tflite-support/blob/5e0cdf5460788c481f5cd18aab8728ec36cf9733/tensorflow_lite_support/metadata/metadata_schema.fbs#L434
-
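-A hedged sketch of calling this method; the model and label file names are
-hypothetical:
-
-```
-from tflite_support.metadata_writers import audio_classifier
-from tflite_support.metadata_writers import writer_utils
-
-writer = audio_classifier.MetadataWriter.create_for_inference(
-    writer_utils.load_file("sound_classifier.tflite"),
-    sample_rate=16000, channels=1,
-    label_file_paths=["sound_labels.txt"])
-writer_utils.save_file(writer.populate(), "sound_classifier_metadata.tflite")
-```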
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Creates a MetadataWriter instance for multihead models.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`general_md`
-
-
-general information about the model. If not specified, default
-general metadata will be generated.
-
-
-
-`input_md`
-
-
-input audio tensor information. If not specified, default input
-metadata will be generated.
-
-
-
-`output_md_list`
-
-
-information of each output tensor head. If not specified,
- default metadata will be generated for each output tensor. If
- `tensor_name` in each `ClassificationTensorMd` instance is not
- specified, elements in `output_md_list` need to have one-to-one mapping
- with the output tensors [1] in the TFLite model.
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b2a509716a2d71dfff706468680a729cc1604cff/tensorflow_lite_support/metadata/metadata_schema.fbs#L605-L612
-
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
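-The contrast between the two getters, as a short sketch (the `writer` object
-is assumed to come from one of the create methods above):
-
-```
-# JSON before population: fields such as min_parser_version are not set yet.
-json_before = writer.get_metadata_json()
-
-# populate() writes the metadata (including min_parser_version) into the model.
-model_with_metadata = writer.populate()
-
-# JSON after population: reflects the final metadata in the model.
-json_after = writer.get_populated_metadata_json()
-```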
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-`ids_name`, `mask_name`, and `segment_name` correspond to the Tensor.name
-in the TFLite schema, which help to determine the tensor order when
-populating metadata. The default values come from Model Maker.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`tokenizer_md`
-
-
-information of the tokenizer used to process the input
-string, if any. Supported tokenizers are: `BertTokenizer` [1] and
- `SentencePieceTokenizer` [2]. If the tokenizer is `RegexTokenizer`
- [3], refer to nl_classifier.MetadataWriter.
-[1]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L436
-[2]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L473
-[3]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L475
-
-
-
-`label_file_paths`
-
-
-paths to the label files [4] in the classification
-tensor. Pass in an empty list if the model does not have any label file.
-[4]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-
-
-
-`ids_name`
-
-
-name of the ids tensor, which represents the tokenized ids of
-the input text.
-
-
-
-`mask_name`
-
-
-name of the mask tensor, which represents the mask with 1 for
-real tokens and 0 for padding tokens.
-
-
-
-`segment_name`
-
-
-name of the segment ids tensor, where `0` stands for the
-first sequence, and `1` stands for the second sequence if it exists.
-
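-A hedged usage sketch; the model, vocab, and label file names are
-hypothetical:
-
-```
-from tflite_support.metadata_writers import bert_nl_classifier
-from tflite_support.metadata_writers import metadata_info
-from tflite_support.metadata_writers import writer_utils
-
-writer = bert_nl_classifier.MetadataWriter.create_for_inference(
-    writer_utils.load_file("bert_classifier.tflite"),
-    tokenizer_md=metadata_info.BertTokenizerMd(vocab_file_path="vocab.txt"),
-    label_file_paths=["labels.txt"])
-writer_utils.save_file(writer.populate(), "bert_classifier_metadata.tflite")
-```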
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`input_norm_mean`
-
-
-the mean value used in the input tensor normalization
-[1].
-
-
-
-`input_norm_std`
-
-
-the std value used in the input tensor normalization [1].
-
-
-
-`label_file_paths`
-
-
-paths to the label files [2] in the classification
-tensor. Pass in an empty list if the model does not have any label file.
-
-
-
-`score_calibration_md`
-
-
-information of the score calibration operation [3]
- in the classification tensor. Optional if the model does not use score
- calibration.
-[1]:
- https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-[3]:
- https://github.com/tensorflow/tflite-support/blob/5e0cdf5460788c481f5cd18aab8728ec36cf9733/tensorflow_lite_support/metadata/metadata_schema.fbs#L434
-
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`input_norm_mean`
-
-
-the mean value used in the input tensor normalization
-[1].
-
-
-
-`input_norm_std`
-
-
-the std value used in the input tensor normalization [1].
-
-
-
-`label_file_paths`
-
-
-paths to the label files [2] in the category tensor.
- Pass in an empty list if the model does not have any label file.
-[1]:
- https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L108
-
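-A hedged sketch of calling this method; the model and label file names are
-hypothetical:
-
-```
-from tflite_support.metadata_writers import image_segmenter
-from tflite_support.metadata_writers import writer_utils
-
-writer = image_segmenter.MetadataWriter.create_for_inference(
-    writer_utils.load_file("deeplabv3.tflite"),
-    input_norm_mean=[127.5], input_norm_std=[127.5],
-    label_file_paths=["segmentation_labels.txt"])
-writer_utils.save_file(writer.populate(), "deeplabv3_metadata.tflite")
-```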
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Creates MetadataWriter based on general/input/outputs information.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`general_md`
-
-
-general information about the model.
-
-
-
-`input_md`
-
-
-input image tensor information.
-
-
-
-`output_md`
-
-
-output segmentation mask tensor information. This tensor is a
-multidimensional array of [1 x mask_height x mask_width x num_classes],
-where mask_width and mask_height are the dimensions of the segmentation
-masks produced by the model, and num_classes is the number of classes
-supported by the model.
-
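-Given that output shape, a per-pixel label mask can be recovered with an
-argmax over the class axis; a NumPy sketch with placeholder dimensions:
-
-```
-import numpy as np
-
-# Placeholder for an output of shape [1, mask_height, mask_width, num_classes].
-masks = np.zeros((1, 257, 257, 21), dtype=np.float32)
-
-# For each pixel, pick the class with the highest score.
-label_mask = np.argmax(masks[0], axis=-1)  # shape: [mask_height, mask_width]
-```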
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-
-
-
-
-## Modules
-
-[`writer_utils`](../../tflite_support/metadata_writers/writer_utils) module: Helper methods for writing metadata into TFLite models.
-
-## Classes
-
-[`class AssociatedFileMd`](../../tflite_support/metadata_writers/metadata_info/AssociatedFileMd): A container for common associated file metadata information.
-
-[`class BertInputTensorsMd`](../../tflite_support/metadata_writers/metadata_info/BertInputTensorsMd): A container for the input tensor metadata information of Bert models.
-
-[`class BertTokenizerMd`](../../tflite_support/metadata_writers/metadata_info/BertTokenizerMd): A container for the Bert tokenizer [1] metadata information.
-
-[`class CategoryTensorMd`](../../tflite_support/metadata_writers/metadata_info/CategoryTensorMd): A container for the category tensor metadata information.
-
-[`class ClassificationTensorMd`](../../tflite_support/metadata_writers/metadata_info/ClassificationTensorMd): A container for the classification tensor metadata information.
-
-[`class GeneralMd`](../../tflite_support/metadata_writers/metadata_info/GeneralMd): A container for common metadata information of a model.
-
-[`class InputAudioTensorMd`](../../tflite_support/metadata_writers/metadata_info/InputAudioTensorMd): A container for the input audio tensor metadata information.
-
-[`class InputImageTensorMd`](../../tflite_support/metadata_writers/metadata_info/InputImageTensorMd): A container for input image tensor metadata information.
-
-[`class InputTextTensorMd`](../../tflite_support/metadata_writers/metadata_info/InputTextTensorMd): A container for the input text tensor metadata information.
-
-[`class LabelFileMd`](../../tflite_support/metadata_writers/metadata_info/LabelFileMd): A container for label file metadata information.
-
-[`class RegexTokenizerMd`](../../tflite_support/metadata_writers/metadata_info/RegexTokenizerMd): A container for the Regex tokenizer [1] metadata information.
-
-[`class ScoreCalibrationMd`](../../tflite_support/metadata_writers/metadata_info/ScoreCalibrationMd): A container for score calibration [1] metadata information.
-
-[`class SentencePieceTokenizerMd`](../../tflite_support/metadata_writers/metadata_info/SentencePieceTokenizerMd): A container for the sentence piece tokenizer [1] metadata information.
-
-[`class TensorMd`](../../tflite_support/metadata_writers/metadata_info/TensorMd): A container for common tensor metadata information.
diff --git a/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/AssociatedFileMd.md b/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/AssociatedFileMd.md
deleted file mode 100644
index 7b2562555b..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/AssociatedFileMd.md
+++ /dev/null
@@ -1,122 +0,0 @@
-page_type: reference
-description: A container for common associated file metadata information.
-
-
-
-
-
-
-
-name of the ids tensor, which represents the tokenized ids of
-the input text.
-
-
-
-`mask_name`
-
-
-name of the mask tensor, which represents the mask with 1 for
-real tokens and 0 for padding tokens.
-
-
-
-`segment_name`
-
-
-name of the segment ids tensor, where `0` stands for the
-first sequence, and `1` stands for the second sequence if it exists.
-
-
-
-`ids_md`
-
-
-input ids tensor information.
-
-
-
-`mask_md`
-
-
-input mask tensor information.
-
-
-
-`segment_ids_md`
-
-
-input segment tensor information.
-
-
-
-`tokenizer_md`
-
-
-information of the tokenizer used to process the input
-string, if any. Supported tokenizers are: `BertTokenizer` [1] and
- `SentencePieceTokenizer` [2]. If the tokenizer is `RegexTokenizer`
- [3], refer to nl_classifier.MetadataWriter.
-[1]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L436
-[2]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L473
-[3]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L475
-
-information of the label files [1] in the category tensor.
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L108
-
-information of the label files [1] in the classification
-tensor.
-
-
-
-`tensor_type`
-
-
-data type of the tensor.
-
-
-
-`score_calibration_md`
-
-
-information of the score calibration operation
-[2] in the classification tensor.
-
-
-
-`tensor_name`
-
-
-name of the corresponding tensor [3] in the TFLite model. It
- is used to locate the corresponding classification tensor and decide the
- order of the tensor metadata [4] when populating model metadata.
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-[2]:
- https://github.com/tensorflow/tflite-support/blob/5e0cdf5460788c481f5cd18aab8728ec36cf9733/tensorflow_lite_support/metadata/metadata_schema.fbs#L434
-[3]:
- https://github.com/tensorflow/tensorflow/blob/cb67fef35567298b40ac166b0581cd8ad68e5a3a/tensorflow/lite/schema/schema.fbs#L1129-L1136
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b2a509716a2d71dfff706468680a729cc1604cff/tensorflow_lite_support/metadata/metadata_schema.fbs#L595-L612
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`label_files`
-
-
-information of the label files [1] in the classification
-tensor.
-
-
-
-`score_calibration_md`
-
-
-information of the score calibration operation [2] in
- the classification tensor.
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-[2]:
- https://github.com/tensorflow/tflite-support/blob/5e0cdf5460788c481f5cd18aab8728ec36cf9733/tensorflow_lite_support/metadata/metadata_schema.fbs#L434
-
-
-Creates the classification tensor metadata based on the information.
diff --git a/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/GeneralMd.md b/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/GeneralMd.md
deleted file mode 100644
index 3bbeaa58bf..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/metadata_writers/metadata_info/GeneralMd.md
+++ /dev/null
@@ -1,126 +0,0 @@
-page_type: reference
-description: A container for common metadata information of a model.
-
-
-
-
-
-
-
-the mean value used in tensor normalization [1].
-
-
-
-`norm_std`
-
-
-the std value used in the tensor normalization [1]. norm_mean
-and norm_std must have the same dimension.
-
-
-
-`color_space_type`
-
-
-the color space type of the input image [2].
-
-
-
-`tensor_type`
-
-
-data type of the tensor.
-[1]:
- https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
-[2]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L172
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-if norm_mean and norm_std have different dimensions.
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`norm_mean`
-
-
-the mean value used in tensor normalization [1].
-
-
-
-`norm_std`
-
-
-the std value used in the tensor normalization [1]. norm_mean and
-norm_std must have the same dimension.
-
-
-
-`color_space_type`
-
-
-the color space type of the input image [2].
-[1]:
- https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L172
-
-
-
-
-A container for the input text tensor metadata information.
-
-Inherits From: [`TensorMd`](../../../tflite_support/metadata_writers/metadata_info/TensorMd)
-
-
-
-information of the tokenizer in the input text tensor, if
-any. Only `RegexTokenizer` [1] is currently supported. If the tokenizer
-is `BertTokenizer` [2] or `SentencePieceTokenizer` [3], refer to
-bert_nl_classifier.MetadataWriter.
-[1]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L475
-[2]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L436
-[3]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L473
-
-
-
-
-
-
-
-
-
-
-
-
Attributes
-
-
-
-`tokenizer_md`
-
-
-information of the tokenizer in the input text tensor, if any.
-
-`locale`
-
-
-locale of the label file [1].
-[1]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L154
-
-`associated_files`
-
-
-information of the associated files in the tensor.
-
-
-
-`tensor_name`
-
-
-name of the corresponding tensor [1] in the TFLite model. It is
- used to locate the corresponding tensor and decide the order of the tensor
- metadata [2] when populating model metadata.
-[1]:
- https://github.com/tensorflow/tensorflow/blob/cb67fef35567298b40ac166b0581cd8ad68e5a3a/tensorflow/lite/schema/schema.fbs#L1129-L1136
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b2a509716a2d71dfff706468680a729cc1604cff/tensorflow_lite_support/metadata/metadata_schema.fbs#L595-L612
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`tokenizer_md`
-
-
-information of the tokenizer used to process the input
-string, if any. Only `RegexTokenizer` [1] is currently supported. If the
-tokenizer is `BertTokenizer` [2] or `SentencePieceTokenizer` [3], refer
-to bert_nl_classifier.MetadataWriter.
-[1]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L475
-[2]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L436
-[3]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L473
-
-
-
-`label_file_paths`
-
-
-paths to the label files [4] in the classification
-tensor. Pass in an empty list if the model does not have any label
-file.
-[4]:
-https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L95
-
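-A hedged sketch of calling this method; the model, vocab, and label file
-names are hypothetical:
-
-```
-from tflite_support.metadata_writers import metadata_info
-from tflite_support.metadata_writers import nl_classifier
-from tflite_support.metadata_writers import writer_utils
-
-writer = nl_classifier.MetadataWriter.create_for_inference(
-    writer_utils.load_file("movie_review_classifier.tflite"),
-    tokenizer_md=metadata_info.RegexTokenizerMd(
-        delim_regex_pattern=r"[^\w\']+", vocab_file_path="vocab.txt"),
-    label_file_paths=["labels.txt"])
-```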
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-Creates mandatory metadata for TFLite Support inference.
-
-The parameters required in this method are mandatory when using TFLite
-Support features, such as Task library and Codegen tool (Android Studio ML
-Binding). Other metadata fields will be set to default. If other fields need
-to be filled, use the method `create_from_metadata_info` to edit them.
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`input_norm_mean`
-
-
-the mean value used in the input tensor normalization
-[1].
-
-
-
-`input_norm_std`
-
-
-the std value used in the input tensor normalization [1].
-
-
-
-`label_file_paths`
-
-
-paths to the label files [2] in the category tensor.
-Pass in an empty list if the model does not have any label file.
-
-
-
-`score_calibration_md`
-
-
-information of the score calibration operation [3]
- in the classification tensor. Optional if the model does not use score
- calibration.
-[1]:
- https://www.tensorflow.org/lite/convert/metadata#normalization_and_quantization_parameters
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L108
-[3]:
- https://github.com/tensorflow/tflite-support/blob/5e0cdf5460788c481f5cd18aab8728ec36cf9733/tensorflow_lite_support/metadata/metadata_schema.fbs#L434
-
-
-Creates MetadataWriter based on the metadata Flatbuffers Python Objects.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`model_metadata`
-
-
-general model metadata [1]. The subgraph_metadata will be
-refreshed with input_metadata and output_metadata.
-
-
-
-`input_metadata`
-
-
-a list of metadata of the input tensors [2].
-
-
-
-`output_metadata`
-
-
-a list of metadata of the output tensors [3].
-
-
-
-`associated_files`
-
-
-path to the associated files to be populated.
-
-
-
-`input_process_units`
-
-
-a list of metadata of the input process units [4].
-
-
-
-`output_process_units`
-
-
-a list of metadata of the output process units [5].
-[1]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L640-L681
-[2]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L590
-[3]:
- https://github.com/tensorflow/tflite-support/blob/b80289c4cd1224d0e1836c7654e82f070f9eefaa/tensorflow_lite_support/metadata/metadata_schema.fbs#L599
-[4]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L646
-[5]:
- https://github.com/tensorflow/tflite-support/blob/b5cc57c74f7990d8bc055795dfe8d50267064a57/tensorflow_lite_support/metadata/metadata_schema.fbs#L650
-
-
-Creates MetadataWriter based on general/input/outputs information.
-
-
-
-
-
-
Args
-
-
-
-`model_buffer`
-
-
-valid buffer of the model file.
-
-
-
-`general_md`
-
-
-general information about the model.
-
-
-
-`input_md`
-
-
-input image tensor information.
-
-
-
-`output_location_md`
-
-
-output location tensor information. The location tensor
-is a multidimensional array of [N][4] floating point values between 0
-and 1, the inner arrays representing bounding boxes in the form [top,
-left, bottom, right].
-
-
-
-`output_category_md`
-
-
-output category tensor information. The category
-tensor is an array of N integers (output as floating point values) each
-indicating the index of a class label from the labels file.
-
-
-
-`output_score_md`
-
-
-output score tensor information. The score tensor is an
-array of N floating point values between 0 and 1 representing
-the probability that a class was detected. Use ClassificationTensorMd to
-calibrate score.
-
-
-
-`output_number_md`
-
-
-output number of detections tensor information. This
-tensor is an integer value of N.
-
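-Putting the four output tensors together, a hedged NumPy sketch of decoding
-one detection (the arrays are placeholders for real model outputs):
-
-```
-import numpy as np
-
-boxes = np.array([[[0.1, 0.2, 0.5, 0.7]]])  # [1, N, 4]: top, left, bottom, right
-classes = np.array([[16.0]])                # [1, N]: label indices, as floats
-scores = np.array([[0.83]])                 # [1, N]: detection probabilities
-num = np.array([1.0])                       # [1]: number of detections N
-
-for i in range(int(num[0])):
-    top, left, bottom, right = boxes[0][i]
-    print(f"label={int(classes[0][i])} score={scores[0][i]:.2f} "
-          f"box=({top}, {left}, {bottom}, {right})")
-```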
-
-Gets the generated JSON metadata string before it is populated into the model.
-
-This method returns the metadata buffer before it is populated into the
-model. More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_populated_metadata_json() if you want to get the
-final metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string before it is populated into the model.
-
-
-Gets the generated JSON metadata string after it is populated into the model.
-
-More fields could be filled by MetadataPopulator, such as
-min_parser_version. Use get_metadata_json() if you want to get the
-original metadata string.
-
-
-
-
-
Returns
-
-
-The generated JSON metadata string after it is populated into the model.
-
-
-
-
-
-
-## Functions
-
-[`compute_flat_size(...)`](../../tflite_support/metadata_writers/writer_utils/compute_flat_size): Computes the flat size (number of elements) of tensor shape.
-
-[`get_input_tensor_names(...)`](../../tflite_support/metadata_writers/writer_utils/get_input_tensor_names): Gets a list of the input tensor names.
-
-[`get_input_tensor_shape(...)`](../../tflite_support/metadata_writers/writer_utils/get_input_tensor_shape): Gets the shape of the specified input tensor.
-
-[`get_input_tensor_types(...)`](../../tflite_support/metadata_writers/writer_utils/get_input_tensor_types): Gets a list of the input tensor types.
-
-[`get_output_tensor_names(...)`](../../tflite_support/metadata_writers/writer_utils/get_output_tensor_names): Gets a list of the output tensor names.
-
-[`get_output_tensor_types(...)`](../../tflite_support/metadata_writers/writer_utils/get_output_tensor_types): Gets a list of the output tensor types.
-
-[`get_tokenizer_associated_files(...)`](../../tflite_support/metadata_writers/writer_utils/get_tokenizer_associated_files): Gets a list of associated files packed in the tokenzier_options.
-
-[`load_file(...)`](../../tflite_support/metadata_writers/writer_utils/load_file): Loads file from the file path.
-
-[`save_file(...)`](../../tflite_support/metadata_writers/writer_utils/save_file): Saves a file to the file path.
diff --git a/site/en/lite/api_docs/python/tflite_support/metadata_writers/writer_utils/compute_flat_size.md b/site/en/lite/api_docs/python/tflite_support/metadata_writers/writer_utils/compute_flat_size.md
deleted file mode 100644
index a9b8b2e45d..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/metadata_writers/writer_utils/compute_flat_size.md
+++ /dev/null
@@ -1,78 +0,0 @@
-page_type: reference
-description: Computes the flat size (number of elements) of tensor shape.
-
-
-
-The TensorFlow Lite Task Library.
-
-
-TensorFlow Lite Task Library contains a set of powerful and easy-to-use
-task-specific libraries for app developers to create ML experiences with
-TensorFlow Lite. It provides optimized out-of-box model interfaces for popular
-machine learning tasks, such as image and text classification. The model
-interfaces are specifically designed for each task to achieve the best
-performance and usability.
-
-Read more in the [Task Library Guide](
-https://tensorflow.org/lite/inference_with_metadata/task_library/overview).
-
-## Modules
-
-[`audio`](../tflite_support/task/audio) module: TensorFlow Lite Task Library Audio APIs.
-
-[`core`](../tflite_support/task/core) module: TensorFlow Lite Task Library's core module.
-
-[`processor`](../tflite_support/task/processor) module: TensorFlow Lite Task Library's processor module.
-
-[`text`](../tflite_support/task/text) module: TensorFlow Lite Task Library Text APIs.
-
-[`vision`](../tflite_support/task/vision) module: TensorFlow Lite Task Library Vision APIs.
diff --git a/site/en/lite/api_docs/python/tflite_support/task/audio.md b/site/en/lite/api_docs/python/tflite_support/task/audio.md
deleted file mode 100644
index 0506d322d9..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/audio.md
+++ /dev/null
@@ -1,48 +0,0 @@
-page_type: reference
-description: TensorFlow Lite Task Library Audio APIs.
-
-
-TensorFlow Lite Task Library Audio APIs.
-
-
-This module provides interfaces to run TensorFlow Lite audio models.
-
-## Classes
-
-[`class AudioClassifier`](../../tflite_support/task/audio/AudioClassifier): Class that performs classification on audio.
-
-[`class AudioClassifierOptions`](../../tflite_support/task/audio/AudioClassifierOptions): Options for the audio classifier task.
-
-[`class AudioEmbedder`](../../tflite_support/task/audio/AudioEmbedder): Class that performs dense feature vector extraction on audio.
-
-[`class AudioEmbedderOptions`](../../tflite_support/task/audio/AudioEmbedderOptions): Options for the audio embedder task.
-
-[`class AudioFormat`](../../tflite_support/task/audio/AudioFormat)
-
-[`class AudioRecord`](../../tflite_support/task/audio/AudioRecord): A class to record audio on a streaming basis.
-
-[`class TensorAudio`](../../tflite_support/task/audio/TensorAudio): A wrapper class to store the input audio.
diff --git a/site/en/lite/api_docs/python/tflite_support/task/audio/AudioClassifier.md b/site/en/lite/api_docs/python/tflite_support/task/audio/AudioClassifier.md
deleted file mode 100644
index 0c5159acd2..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/audio/AudioClassifier.md
+++ /dev/null
@@ -1,331 +0,0 @@
-page_type: reference
-description: Class that performs classification on audio.
-
-
-
-
-
-
-
-
-Creates `TensorAudio` object from the WAV file.
-
-
-
-
-
-
Args
-
-
-
-`file_name`
-
-
-WAV file name.
-
-
-
-`sample_count`
-
-
-The number of samples to read from the WAV file. This value
-should match the input size of the TensorFlow Lite audio model that
-will consume the created TensorAudio object. If the WAV file contains
-more samples than sample_count, only the samples at the beginning of the
-WAV file will be loaded.
-
-
-
-`offset`
-
-
-An optional offset allowing the user to skip a certain number of
-samples at the beginning.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-`TensorAudio` object.
-
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-If an input parameter, such as the audio file, is invalid.
-
-
-
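-A hedged end-to-end sketch; the model and WAV file names are hypothetical:
-
-```
-from tflite_support.task import audio
-
-classifier = audio.AudioClassifier.create_from_file("sound_classifier.tflite")
-
-# Read as many samples as the model expects from the start of the WAV file.
-tensor = audio.TensorAudio.create_from_wav_file(
-    "speech.wav", classifier.required_input_buffer_size)
-result = classifier.classify(tensor)
-```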
-
-
-
-Represents external files used by the Task APIs (e.g. TF Lite FlatBuffer or
-plain-text labels file). The files can be specified by one of the following
-two ways:
-
-(1) file contents loaded in `file_content`.
-(2) file path in `file_name`.
-
-If more than one of these fields is provided, they are used in this
-precedence order.
-
-
-
-
-
-
-
Attributes
-
-
-
-`file_name`
-
-
-Path to the external file (e.g. the TF Lite model).
-
-
-
-`file_content`
-
-
-The external file contents as bytes.
-
-
-
-`num_threads`
-
-
-Number of threads; the default value is -1, which means the
-interpreter will decide the most appropriate `num_threads`.
-
-
-
-`use_coral`
-
-
-If true, inference will be delegated to a connected Coral Edge
-TPU device.
-
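-A minimal construction sketch (the model file name is hypothetical):
-
-```
-from tflite_support.task import core
-
-# Point the task at a model on disk and run inference on 4 threads.
-base_options = core.BaseOptions(file_name="model.tflite", num_threads=4)
-```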
-
-
-
-TensorFlow Lite Task Library's processor module.
-
-
-This module contains classes related to the pre-processing and post-processing
-steps of the Task Library.
-
-## Classes
-
-[`class BertCluAnnotationOptions`](../../tflite_support/task/processor/BertCluAnnotationOptions): Options for Bert CLU Annotator processor.
-
-[`class BoundingBox`](../../tflite_support/task/processor/BoundingBox): An integer bounding box, axis aligned.
-
-[`class CategoricalSlot`](../../tflite_support/task/processor/CategoricalSlot): Represents a categorical slot whose values are within a finite set.
-
-[`class Category`](../../tflite_support/task/processor/Category): A classification category.
-
-[`class ClassificationOptions`](../../tflite_support/task/processor/ClassificationOptions): Options for classification processor.
-
-[`class ClassificationResult`](../../tflite_support/task/processor/ClassificationResult): Contains one set of results per classifier head.
-
-[`class Classifications`](../../tflite_support/task/processor/Classifications): List of predicted classes (aka labels) for a given classifier head.
-
-[`class CluRequest`](../../tflite_support/task/processor/CluRequest): The input to CLU (Conversational Language Understanding).
-
-[`class CluResponse`](../../tflite_support/task/processor/CluResponse): The output of CLU.
-
-[`class ColoredLabel`](../../tflite_support/task/processor/ColoredLabel): Defines a label associated with an RGB color, for display purposes.
-
-[`class ConfidenceMask`](../../tflite_support/task/processor/ConfidenceMask): 2D-array representing the confidence mask in row major order.
-
-[`class Detection`](../../tflite_support/task/processor/Detection): Represents one detected object in the object detector's results.
-
-[`class DetectionOptions`](../../tflite_support/task/processor/DetectionOptions): Options for object detection processor.
-
-[`class DetectionResult`](../../tflite_support/task/processor/DetectionResult): Represents the list of detected objects.
-
-[`class Embedding`](../../tflite_support/task/processor/Embedding): Result produced by one of the embedder model output layers.
-
-[`class EmbeddingOptions`](../../tflite_support/task/processor/EmbeddingOptions): Options for embedding processor.
-
-[`class EmbeddingResult`](../../tflite_support/task/processor/EmbeddingResult): Embeddings produced by the Embedder.
-
-[`class FeatureVector`](../../tflite_support/task/processor/FeatureVector): A dense feature vector.
-
-[`class Mention`](../../tflite_support/task/processor/Mention): A single mention result.
-
-[`class MentionedSlot`](../../tflite_support/task/processor/MentionedSlot): Non-categorical slot whose values are open text extracted from the input text.
-
-[`class NearestNeighbor`](../../tflite_support/task/processor/NearestNeighbor): A single nearest neighbor.
-
-[`class OutputType`](../../tflite_support/task/processor/OutputType): An enumeration.
-
-[`class Pos`](../../tflite_support/task/processor/Pos): Position information of the answer relative to context.
-
-[`class QaAnswer`](../../tflite_support/task/processor/QaAnswer): Represents the Answer to BertQuestionAnswerer.
-
-[`class QuestionAnswererResult`](../../tflite_support/task/processor/QuestionAnswererResult): The list of probable answers generated by BertQuestionAnswerer.
-
-[`class SearchOptions`](../../tflite_support/task/processor/SearchOptions): Options for search processor.
-
-[`class SearchResult`](../../tflite_support/task/processor/SearchResult): Results from a search as a list of nearest neighbors.
-
-[`class Segmentation`](../../tflite_support/task/processor/Segmentation): Represents one Segmentation object in the image segmenter's results.
-
-[`class SegmentationOptions`](../../tflite_support/task/processor/SegmentationOptions): Options for segmentation processor.
-
-[`class SegmentationResult`](../../tflite_support/task/processor/SegmentationResult): Results of performing image segmentation.
diff --git a/site/en/lite/api_docs/python/tflite_support/task/processor/BertCluAnnotationOptions.md b/site/en/lite/api_docs/python/tflite_support/task/processor/BertCluAnnotationOptions.md
deleted file mode 100644
index e28e4ea598..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/processor/BertCluAnnotationOptions.md
+++ /dev/null
@@ -1,192 +0,0 @@
-page_type: reference
-description: Options for Bert CLU Annotator processor.
-
-
-
-
-
-
-
-
-
-
-
-
-Category is a util class that contains a label, its display name, a float
-value as score, and the index of the label in the corresponding label file.
-Typically it's used as the result of classification tasks.
-
-
-
-
-
-
-
Attributes
-
-
-
-`index`
-
-
-The index of the label in the corresponding label file.
-
-
-
-`score`
-
-
-The probability score of this label category.
-
-
-
-`display_name`
-
-
-The display name of the label, which may be translated for
-different locales. For example, a label, "apple", may be translated into
-Spanish for display purpose, so that the `display_name` is "manzana".
-
-`display_names_locale`
-
-
-The locale to use for display names specified through
-the TFLite Model Metadata.
-
-
-
-`max_results`
-
-
-The maximum number of top-scored classification results to
-return.
-
-
-
-`score_threshold`
-
-
-Overrides the ones provided in the model metadata. Results
-below this value are rejected.
-
-
-
-`category_name_allowlist`
-
-
-If non-empty, classifications whose class name is
-not in this set will be filtered out. Duplicate or unknown class names are
-ignored. Mutually exclusive with `category_name_denylist`.
-
-
-
-`category_name_denylist`
-
-
-If non-empty, classifications whose class name is in
-this set will be filtered out. Duplicate or unknown class names are
-ignored. Mutually exclusive with `category_name_allowlist`.
-
-
-
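-A hedged sketch of combining these fields:
-
-```
-from tflite_support.task import processor
-
-# Keep at most 3 results above 0.5, restricted to two class names.
-options = processor.ClassificationOptions(
-    max_results=3, score_threshold=0.5,
-    category_name_allowlist=["cat", "dog"])
-```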
-
-
-
-For each pixel, the value indicates the prediction confidence usually
-in the [0, 1] range where higher values represent a stronger confidence.
-Ultimately this is model specific, and other ranges of values might be used.
-
-
-
-
-
-
-
Attributes
-
-
-
-`value`
-
-
-A NumPy 2D-array indicating the prediction confidence values usually
-in the range [0, 1].
-
-`display_names_locale`
-
-
-The locale to use for display names specified through
-the TFLite Model Metadata.
-
-
-
-`max_results`
-
-
-The maximum number of top-scored classification results to
-return.
-
-
-
-`score_threshold`
-
-
-Overrides the ones provided in the model metadata. Results
-below this value are rejected.
-
-
-
-`category_name_allowlist`
-
-
-If non-empty, classifications whose class name is
-not in this set will be filtered out. Duplicate or unknown class names are
-ignored. Mutually exclusive with `category_name_denylist`.
-
-
-
-`category_name_denylist`
-
-
-If non-empty, classifications whose class name is in
-this set will be filtered out. Duplicate or unknown class names are
-ignored. Mutually exclusive with `category_name_allowlist`.
-
-
-Checks if this object is equal to the given object.
-
-
-
-
-
-
Args
-
-
-
-`other`
-
-
-The object to be compared with.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-True if the objects are equal.
-
-
-
-
diff --git a/site/en/lite/api_docs/python/tflite_support/task/processor/Embedding.md b/site/en/lite/api_docs/python/tflite_support/task/processor/Embedding.md
deleted file mode 100644
index c5f46137dc..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/processor/Embedding.md
+++ /dev/null
@@ -1,114 +0,0 @@
-page_type: reference
-description: Result produced by one of the embedder model output layers.
-
-
-
-
-
-
-
-Whether to normalize the returned feature vector with L2 norm.
-Use this option only if the model does not already contain a native
-L2_NORMALIZATION TF Lite Op. In most cases, this is already the case and
-L2 norm is thus achieved through TF Lite inference.
-
-
-
-`quantize`
-
-
-Whether the returned embedding should be quantized to bytes via
-scalar quantization. Embeddings are implicitly assumed to be unit-norm and
-therefore any dimension is guaranteed to have a value in [-1.0, 1.0]. Use
-the l2_normalize option if this is not the case.
-
-The embeddings produced by each of the model output layers.
-Except in advanced cases, the embedding model has a single output layer,
-and this list is thus made of a single element feature vector.
-
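-A hedged sketch of consuming these embeddings with a text embedder (the
-model file name is hypothetical):
-
-```
-from tflite_support.task import text
-
-embedder = text.TextEmbedder.create_from_file("text_embedder.tflite")
-result_a = embedder.embed("what a great movie")
-result_b = embedder.embed("a fantastic film")
-
-# Single output layer: one element in `embeddings`.
-similarity = embedder.cosine_similarity(
-    result_a.embeddings[0].feature_vector,
-    result_b.embeddings[0].feature_vector)
-```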
-
-
-
-
-
-Only one of the two fields is ever present.
-Feature vectors are assumed to be one-dimensional and L2-normalized.
-
-
-
-
-
-
-
Attributes
-
-
-
-`value`
-
-
-A NumPy array indicating the raw output of the embedding layer. The
-datatype of elements in the array can be either float or uint8 if
-`quantize` is set to True in `EmbeddingOptions`.
-
-
-
-
-
-
-The index file to search into. Mandatory only if the index is not attached
-to the output tensor metadata as an AssociatedFile with type SCANN_INDEX_FILE.
-The index file can be specified by one of the following two ways:
-
-(1) file contents loaded in `index_file_content`.
-(2) file path in `index_file_name`.
-
-If more than one of these fields is provided, they are used in this
-precedence order.
-
-
-
-
-
-
-
Attributes
-
-
-
-`index_file_name`
-
-
-Path to the index.
-
-
-
-`index_file_content`
-
-
-The index file contents as bytes.
-
-
-
-`max_results`
-
-
-Maximum number of nearest neighbor results to return.
-
-
-
-
-
-
-Note that, at this time, a single `Segmentation` element is expected to be
-returned; the field is made repeated for later extension to e.g. instance
-segmentation models, which may return one segmentation per object.
-
-
-
-
-
-
-
-
-TensorFlow Lite Task Library Text APIs.
-
-
-This module provides interfaces to run TensorFlow Lite natural language
-processing models.
-
-## Classes
-
-[`class BertCluAnnotator`](../../tflite_support/task/text/BertCluAnnotator): Class that performs Bert CLU Annotation on text.
-
-[`class BertCluAnnotatorOptions`](../../tflite_support/task/text/BertCluAnnotatorOptions): Options for the Bert CLU Annotator task.
-
-[`class BertNLClassifier`](../../tflite_support/task/text/BertNLClassifier): Class that performs Bert NL classification on text.
-
-[`class BertNLClassifierOptions`](../../tflite_support/task/text/BertNLClassifierOptions): Options for the Bert NL classifier task.
-
-[`class BertQuestionAnswerer`](../../tflite_support/task/text/BertQuestionAnswerer): Class that performs Bert question answering on text.
-
-[`class BertQuestionAnswererOptions`](../../tflite_support/task/text/BertQuestionAnswererOptions): Options for the Bert question answerer task.
-
-[`class NLClassifier`](../../tflite_support/task/text/NLClassifier): Class that performs NL classification on text.
-
-[`class NLClassifierOptions`](../../tflite_support/task/text/NLClassifierOptions): Options for the NL classifier task.
-
-[`class TextEmbedder`](../../tflite_support/task/text/TextEmbedder): Class that performs dense feature vector extraction on text.
-
-[`class TextEmbedderOptions`](../../tflite_support/task/text/TextEmbedderOptions): Options for the text embedder task.
-
-[`class TextSearcher`](../../tflite_support/task/text/TextSearcher): Class that performs text search.
-
-[`class TextSearcherOptions`](../../tflite_support/task/text/TextSearcherOptions): Options for the text search task.
diff --git a/site/en/lite/api_docs/python/tflite_support/task/text/BertCluAnnotator.md b/site/en/lite/api_docs/python/tflite_support/task/text/BertCluAnnotator.md
deleted file mode 100644
index 086fd5d6a3..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/text/BertCluAnnotator.md
+++ /dev/null
@@ -1,272 +0,0 @@
-page_type: reference
-description: Class that performs Bert CLU Annotation on text.
-
-
-
-
-
-
-
-
-Creates the `NLClassifier` object from NL classifier options.
-
-
-
-
-
-
Args
-
-
-
-`options`
-
-
-Options for the NL classifier task.
-
-
-
-
-
-
-
-
-
-
Returns
-
-
-`NLClassifier` object that's created from `options`.
-
-
-
-
-
-
-
-
-
-
-
Raises
-
-
-
-`ValueError`
-
-
-If the `NLClassifier` object failed to be created from
-`NLClassifierOptions`, for example because the model is missing or any of
-the classification options is invalid.
-
-
-
-
-
-
-It works by performing embedding extraction on text, followed by
-nearest-neighbor search in an index of embeddings through ScaNN.
-
-
-
-
-
-
-Search for text with similar semantic meaning.
-
-This method performs actual feature extraction on the provided text input,
-followed by nearest-neighbor search in the index.
-
-
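-A hedged sketch of wiring the searcher together (the model and index file
-names are hypothetical):
-
-```
-from tflite_support.task import core
-from tflite_support.task import processor
-from tflite_support.task import text
-
-searcher = text.TextSearcher.create_from_options(
-    text.TextSearcherOptions(
-        base_options=core.BaseOptions(file_name="text_embedder.tflite"),
-        search_options=processor.SearchOptions(index_file_name="index.ldb")))
-result = searcher.search("the weather is nice today")
-```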
-
-
-
-
-TensorFlow Lite Task Library Vision APIs.
-
-
-This module provides interfaces to run TensorFlow Lite computer vision models.
-
-## Classes
-
-[`class ImageClassifier`](../../tflite_support/task/vision/ImageClassifier): Class that performs classification on images.
-
-[`class ImageClassifierOptions`](../../tflite_support/task/vision/ImageClassifierOptions): Options for the image classifier task.
-
-[`class ImageEmbedder`](../../tflite_support/task/vision/ImageEmbedder): Class that performs dense feature vector extraction on images.
-
-[`class ImageEmbedderOptions`](../../tflite_support/task/vision/ImageEmbedderOptions): Options for the image embedder task.
-
-[`class ImageSearcher`](../../tflite_support/task/vision/ImageSearcher): Class that performs image search.
-
-[`class ImageSearcherOptions`](../../tflite_support/task/vision/ImageSearcherOptions): Options for the image search task.
-
-[`class ImageSegmenter`](../../tflite_support/task/vision/ImageSegmenter): Class that performs segmentation on images.
-
-[`class ImageSegmenterOptions`](../../tflite_support/task/vision/ImageSegmenterOptions): Options for the image segmenter task.
-
-[`class ObjectDetector`](../../tflite_support/task/vision/ObjectDetector): Class that performs object detection on images.
-
-[`class ObjectDetectorOptions`](../../tflite_support/task/vision/ObjectDetectorOptions): Options for the object detector task.
-
-[`class TensorImage`](../../tflite_support/task/vision/TensorImage): Wrapper class for the Image object.
diff --git a/site/en/lite/api_docs/python/tflite_support/task/vision/ImageClassifier.md b/site/en/lite/api_docs/python/tflite_support/task/vision/ImageClassifier.md
deleted file mode 100644
index fdc71b4afe..0000000000
--- a/site/en/lite/api_docs/python/tflite_support/task/vision/ImageClassifier.md
+++ /dev/null
@@ -1,283 +0,0 @@
-page_type: reference
-description: Class that performs classification on images.
-
-Performs classification on the provided TensorImage.
-
-Args
-
-`image`
-: Tensor image, used to extract the feature vectors.
-
-`bounding_box`
-: Bounding box, optional. If set, feature vector extraction is performed
-only on the provided region of interest. Note that the region of interest
-is not clamped, so this method will fail if the region is out of bounds of
-the input image.
-
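A minimal sketch of classification with an optional region of interest; the model and image paths are illustrative, and the bounding box is deliberately kept inside the image since the region is not clamped:

```python
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision

base_options = core.BaseOptions(file_name="image_classifier.tflite")  # hypothetical
classifier = vision.ImageClassifier.create_from_options(
    vision.ImageClassifierOptions(base_options=base_options))

image = vision.TensorImage.create_from_file("cat.jpg")  # hypothetical
# Optional region of interest; must lie fully inside the image bounds.
roi = processor.BoundingBox(origin_x=10, origin_y=10, width=100, height=100)
result = classifier.classify(image, roi)
```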
-Performs actual feature vector extraction on the provided TensorImage.
-
-Args
-
-`image`
-: Tensor image, used to extract the feature vectors.
-
-`bounding_box`
-: Bounding box, optional. If set, feature vector extraction is performed
-only on the provided region of interest. Note that the region of interest
-is not clamped, so this method will fail if the region is out of bounds of
-the input image.
-
-
-Gets the embedding in the embedding result by `output_index`.
-
-Args
-
-`result`
-: Embedding result.
-
-`output_index`
-: Output index of the output layer.
-
-Returns
-
-The Embedding output by the output_index'th layer. In the (most common)
-case where a single embedding is produced, you can just call
-get_feature_vector_by_index(result, 0).
-
-Raises
-
-`ValueError`
-: If the output index is out of bounds.
-
-
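The pieces above combine as follows; a minimal sketch assuming a hypothetical single-head embedder model and image file:

```python
from tflite_support.task import core
from tflite_support.task import vision

base_options = core.BaseOptions(file_name="image_embedder.tflite")  # hypothetical
embedder = vision.ImageEmbedder.create_from_options(
    vision.ImageEmbedderOptions(base_options=base_options))

result = embedder.embed(vision.TensorImage.create_from_file("photo.jpg"))
# Single-head models produce one embedding, found at output index 0.
embedding = embedder.get_embedding_by_index(result, 0)
```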
-It works by performing embedding extraction on images, followed by
-nearest-neighbor search in an index of embeddings through ScaNN.
-
-Searches for images with similar semantic meaning.
-
-This method performs actual feature extraction on the provided image input,
-followed by nearest-neighbor search in the index.
-
-Args
-
-`image`
-: Tensor image, used to extract the feature vectors.
-
-`bounding_box`
-: Bounding box, optional. If set, feature vector extraction is performed
-only on the provided region of interest. Note that the region of interest
-is not clamped, so this method will fail if the region is out of bounds of
-the input image.
-
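A minimal sketch mirroring the text searcher above, assuming a hypothetical image embedder model and ScaNN index file:

```python
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision

base_options = core.BaseOptions(file_name="image_embedder.tflite")  # hypothetical
search_options = processor.SearchOptions(index_file_name="images.index")
searcher = vision.ImageSearcher.create_from_options(
    vision.ImageSearcherOptions(
        base_options=base_options, search_options=search_options))

result = searcher.search(vision.TensorImage.create_from_file("query.jpg"))
```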
-`is_from_numpy_array`
-: Boolean, whether `image_data` is loaded from a numpy array. If False,
-`image_data` was loaded via the C++ `stbi_load` function, and the storage
-of `ImageData` needs to be freed in the destructor.
-
-Attributes
-
-`buffer`
-: Gets the numpy array that represents `self.image_data`.
-
-Creates a `TensorImage` object from a numpy array.
-
-Args
-
-`array`
-: numpy array with dtype=uint8. Its shape should be (h, w, 3) or
-(1, h, w, 3) for RGB images; (h, w) or (1, h, w) for grayscale images;
-and (h, w, 4) or (1, h, w, 4) for RGBA images.
-
-Returns
-
-A `TensorImage` object.
-
-Raises
-
-`ValueError`
-: If the dtype of the numpy array is not `uint8` or the dimensions are
-not valid.
-
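A minimal sketch of the array round trip described above; the all-zeros RGB array is a stand-in for real pixel data:

```python
import numpy as np
from tflite_support.task import vision

# A (h, w, 3) uint8 array, i.e., a valid RGB layout per the rules above.
rgb = np.zeros((224, 224, 3), dtype=np.uint8)
image = vision.TensorImage.create_from_array(rgb)
print(image.buffer.shape)  # expected: (224, 224, 3)
```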
-Creates a new instance configured with the given options. Returns nil if the underlying
-Core ML delegate could not be created because Options.enabledDevices was set to
-neuralEngine but the device does not have the Neural Engine.
-
-A type indicating which devices the Core ML delegate should be enabled for. The default
-value is .neuralEngine, indicating that the delegate is enabled for Neural Engine devices
-only.
-
-The maximum number of Core ML delegate partitions created. Each graph corresponds to one
-delegated node subset in the TFLite model. The default value is 0, indicating that all
-possible partitions are delegated.
-
-Throws: An error if the index is invalid, tensors haven't been allocated, or the interpreter
-has not been invoked for models that dynamically compute output tensors based on the
-values of their input tensors.
-
-Resizes the input Tensor at the given index to the specified Tensor.Shape.
-
-Note: After resizing an input tensor, the client must explicitly call
-allocateTensors() before attempting to access the resized tensor data or invoking the
-interpreter to perform inference.
-
-Throws: An error if the input tensor at the given index could not be resized.
-
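This part of the document covers the Swift API, but the same resize-then-allocate contract can be sketched with the TensorFlow Lite Python interpreter (the model path is hypothetical):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical
# Resize input tensor 0, then reallocate before reading tensor data
# or invoking the interpreter.
interpreter.resize_tensor_input(0, [1, 256, 256, 3])
interpreter.allocate_tensors()
```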
-The maximum number of CPU threads that the interpreter should run on. The default is nil,
-indicating that the Interpreter will decide the number of threads to use.
-
-Indicates whether an optimized set of floating point CPU kernels, provided by XNNPACK, is
-enabled.
-
-Experiment: Enabling this flag will enable use of a new, highly optimized set of CPU kernels
-provided via the XNNPACK delegate. Currently, this is restricted to a subset of floating point
-operations. Eventually, we plan to enable this by default, as it can provide significant
-performance benefits for many classes of floating point models. See
-https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/README.md
-for more details.
-
-Important: Things to keep in mind when enabling this flag:
-
-- Startup time and resize time may increase.
-- Baseline memory consumption may increase.
-- Compatibility with other delegates (e.g., GPU) has not been fully validated.
-- Quantized models will not see any benefit.
-
-Warning: This is an experimental interface that is subject to change.
-
-Parameters that determine the mapping of quantized values to real values. Quantized values can
-be mapped to float values using the following conversion:
-realValue = scale * (quantizedValue - zeroPoint).
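The conversion is easy to sanity-check numerically; a short sketch of the formula above, with arbitrarily chosen values:

```python
import numpy as np

def dequantize(quantized, scale, zero_point):
    # realValue = scale * (quantizedValue - zeroPoint)
    return scale * (quantized.astype(np.float32) - zero_point)

q = np.array([10, 12, 8], dtype=np.int8)
print(dequantize(q, scale=0.5, zero_point=10))  # [ 0.  1. -1.]
```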
-A type alias for Interpreter.Options to support backwards compatibility with the deprecated …
-
-A string describing the semantic versioning information for the runtime. Is an empty string if …
-
-ThreadWaitType
-
-`none`: The thread does not wait for the work to complete. Useful when the output of the work is used …
-`passive`: The thread waits for the work to complete with minimal latency, which may require additional …
-`aggressive`: The thread waits for the work while trying to prevent the GPU from going into sleep mode.
-
-MetalDelegate.Options
-
-`allowsPrecisionLoss`: Indicates whether the GPU delegate allows precision loss, such as allowing Float16 …
-`waitType`: A type indicating how the current thread should wait for work on the GPU to complete. …
-`isQuantizationEnabled`: Indicates whether the GPU delegate allows execution of an 8-bit quantized model. …
-
-CoreMLDelegate.Options
-
-`coreMLVersion`: Target Core ML version for the model conversion. When it's not set, the Core ML version will …
-`minNodesPerPartition`: The minimum number of nodes per partition to be delegated by the Core ML delegate. …
-
-EnabledDevices: A type indicating which devices the Core ML delegate should be enabled for.