# TensorFlow Lite Core ML delegate
The TensorFlow Lite Core ML delegate enables running TensorFlow Lite models on the Core ML framework, which results in faster model inference on iOS devices.
Note: This delegate is in experimental (beta) phase. It is available from TensorFlow Lite 2.4.0 and the latest nightly releases.
Note: Core ML delegate supports Core ML version 2 and later.
## Supported iOS versions and devices

*   iOS 12 and later. (For older iOS versions, the Core ML delegate automatically falls back to CPU.)
*   By default, the Core ML delegate is only enabled on devices with an A12 SoC or later (iPhone XS and later), so the Neural Engine can be used for faster inference. If you want to use the Core ML delegate on other devices as well, see the best practices below.
## Supported models
The Core ML delegate currently supports float (FP32 and FP16) models.
The Core ML delegate is already included in the nightly release of the TensorFlow Lite
CocoaPods. To use the Core ML delegate, change your TensorFlow Lite pod to include the
`CoreML` subspec in your `Podfile`.

Note: If you want to use the C API instead of the Objective-C API, you can include
the `TensorFlowLiteC/CoreML` pod to do so.

```ruby
target 'YourProjectName' do
  pod 'TensorFlowLiteSwift/CoreML', '~> 2.4.0'  # Or TensorFlowLiteObjC/CoreML
end
```

OR

```ruby
# Particularly useful when you also want to include the `Metal` subspec.
target 'YourProjectName' do
  pod 'TensorFlowLiteSwift', '~> 2.4.0', :subspecs => ['CoreML']
end
```
Note: The Core ML delegate can also use the C API for Objective-C code. Prior to the TensorFlow Lite 2.4.0 release, this was the only option.
<div> <devsite-selector> <section> <h3>Swift</h3> <p><pre class="prettyprint lang-swift">
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter

// Core ML delegate will only be created for devices with Neural Engine
if coreMLDelegate != nil {
  interpreter = try Interpreter(modelPath: modelPath,
                                delegates: [coreMLDelegate!])
} else {
  interpreter = try Interpreter(modelPath: modelPath)
}
</pre></p> </section> <section> <h3>Objective-C</h3> <p><pre class="prettyprint lang-objc">
// Import module when using CocoaPods with module support
@import TFLTensorFlowLite;
// Or import following headers manually
#import "tensorflow/lite/objc/apis/TFLCoreMLDelegate.h"
#import "tensorflow/lite/objc/apis/TFLTensorFlowLite.h"
// Initialize Core ML delegate
TFLCoreMLDelegate* coreMLDelegate = [[TFLCoreMLDelegate alloc] init];
// Initialize interpreter with model path and Core ML delegate
TFLInterpreterOptions* options = [[TFLInterpreterOptions alloc] init];
NSError* error = nil;
TFLInterpreter* interpreter = [[TFLInterpreter alloc]
initWithModelPath:modelPath
options:options
delegates:@[ coreMLDelegate ]
error:&error];
if (error != nil) { /* Error handling... */ }
if (![interpreter allocateTensorsWithError:&error]) { /* Error handling... */ }
if (error != nil) { /* Error handling... */ }
// Run inference ...
</pre></p> </section> <section> <h3>C (Until 2.3.0)</h3> <p><pre class="prettyprint lang-c"> #include "tensorflow/lite/delegates/coreml/coreml_delegate.h"
// Initialize interpreter with model
TfLiteModel* model = TfLiteModelCreateFromFile(model_path);
// Initialize interpreter with Core ML delegate
TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
TfLiteDelegate* delegate = TfLiteCoreMlDelegateCreate(NULL); // default config
TfLiteInterpreterOptionsAddDelegate(options, delegate);
TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
// The options can be deleted once the interpreter has been created.
TfLiteInterpreterOptionsDelete(options);
TfLiteInterpreterAllocateTensors(interpreter);
// Run inference ...
/* ... */
// Dispose resources when it is no longer used.
// Add following code to the section where you dispose of the delegate
// (e.g. `dealloc` of class).
TfLiteInterpreterDelete(interpreter);
TfLiteCoreMlDelegateDelete(delegate);
TfLiteModelDelete(model);
</pre></p>
</section>
By default, the Core ML delegate will only be created if the device has a Neural
Engine, and will return `null` if the delegate is not created. If you want to
run the Core ML delegate on other environments (for example, the simulator), pass `.all`
as an option while creating the delegate in Swift. In C++ (and Objective-C), you can
pass `TfLiteCoreMlDelegateAllDevices`. The following example shows how to do this:

<div> <devsite-selector> <section> <h3>Swift</h3> <p><pre class="prettyprint lang-swift">
var options = CoreMLDelegate.Options()
options.enabledDevices = .all
let coreMLDelegate = CoreMLDelegate(options: options)!
let interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [coreMLDelegate])
</pre></p> </section> <section> <h3>Objective-C</h3> <p><pre class="prettyprint lang-objc">
// Initialize Core ML delegate with the 'all devices' option
TFLCoreMLDelegateOptions* coreMLOptions = [[TFLCoreMLDelegateOptions alloc] init];
coreMLOptions.enabledDevices = TFLCoreMLDelegateEnabledDevicesAll;
TFLCoreMLDelegate* coreMLDelegate = [[TFLCoreMLDelegate alloc]
                                      initWithOptions:coreMLOptions];
// Initialize interpreter with delegate
</pre></p> </section> <section> <h3>C</h3> <p><pre class="prettyprint lang-c">
TfLiteCoreMlDelegateOptions options;
options.enabled_devices = TfLiteCoreMlDelegateAllDevices;
TfLiteDelegate* delegate = TfLiteCoreMlDelegateCreate(&options);
// Initialize interpreter with delegate
</pre></p> </section> </devsite-selector>
</div>

When the Core ML delegate is not created, you can alternatively fall back to the Metal delegate to still get performance benefits. The following example shows how to do this:
<div> <devsite-selector> <section> <h3>Swift</h3> <p><pre class="prettyprint lang-swift">
var delegate = CoreMLDelegate()
if delegate == nil {
  delegate = MetalDelegate()  // Add Metal delegate options if necessary.
}

let interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [delegate!])
</pre></p> </section> <section> <h3>Objective-C</h3> <p><pre class="prettyprint lang-objc">
TFLDelegate* delegate = [[TFLCoreMLDelegate alloc] init];
if (!delegate) {
  // Add Metal delegate options if necessary
  delegate = [[TFLMetalDelegate alloc] init];
}
// Initialize interpreter with delegate
</pre></p> </section> <section> <h3>C</h3> <p><pre class="prettyprint lang-c">
TfLiteCoreMlDelegateOptions options = {};
TfLiteDelegate* delegate = TfLiteCoreMlDelegateCreate(&options);
if (delegate == NULL) {
  // Add Metal delegate options if necessary
  delegate = TFLGpuDelegateCreate(NULL);
}
// Initialize interpreter with delegate
</pre></p> </section> </devsite-selector>
</div>

The delegate creation logic reads the device's machine id (e.g. iPhone11,1) to determine its Neural Engine availability. See the code for more detail. Alternatively, you can implement your own denylist of devices using other libraries such as DeviceKit.
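If you roll your own check, the core of it is parsing the machine identifier string. Below is a minimal C sketch; `has_neural_engine` is a hypothetical helper, and the version thresholds (iPhone11,x / iPad8,x and later) are illustrative assumptions, not an exhaustive device table.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// Sketch: decide Neural Engine availability from a machine identifier such
// as "iPhone11,2" (iPhone XS). Treating "iPhone11,x" (A12) and "iPad8,x"
// (A12X) or later as capable is an illustrative assumption.
static bool has_neural_engine(const char* machine_id) {
  // Split the identifier into its alphabetic family and major version,
  // e.g. "iPhone11,2" -> family "iPhone", major version 11.
  size_t family_len = 0;
  while (machine_id[family_len] != '\0' &&
         !isdigit((unsigned char)machine_id[family_len])) {
    family_len++;
  }
  if (machine_id[family_len] == '\0') return false;  // No version digits.
  long major = strtol(machine_id + family_len, NULL, 10);

  if (family_len == 6 && strncmp(machine_id, "iPhone", 6) == 0) {
    return major >= 11;  // iPhone11,x (A12) and later.
  }
  if (family_len == 4 && strncmp(machine_id, "iPad", 4) == 0) {
    return major >= 8;   // iPad8,x (A12X) and later.
  }
  return false;  // Unknown family: assume no Neural Engine.
}
```

In a real app, the machine identifier would come from `sysctlbyname("hw.machine", ...)` or a library such as DeviceKit.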
Although iOS 13 supports Core ML 3, the model might work better when it is
converted with the Core ML 2 model specification. The target conversion version is
set to the latest version by default, but you can change this by setting
`coreMLVersion` (in Swift; `coreml_version` in the C API) in the delegate
options to an older version.
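As a sketch with the C API, assuming the delegate header from the earlier C example (`coreml_version` is the corresponding field of `TfLiteCoreMlDelegateOptions`):

```c
#include "tensorflow/lite/delegates/coreml/coreml_delegate.h"

// Sketch: create a Core ML delegate that targets the Core ML 2 model
// specification instead of the default (latest) version.
TfLiteDelegate* CreateCoreMl2Delegate(void) {
  TfLiteCoreMlDelegateOptions options = {0};
  options.coreml_version = 2;  // Convert the model with the Core ML 2 spec.
  return TfLiteCoreMlDelegateCreate(&options);
}
```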
## Supported ops

The following ops are supported by the Core ML delegate, with these restrictions:

*   Add and Mul: only certain shapes are broadcastable. In the Core ML tensor layout, the broadcastable tensor shapes are `[B, C, H, W]`, `[B, C, 1, 1]`, `[B, 1, H, W]`, and `[B, 1, 1, 1]`.
*   MirrorPad: only 4D input with `REFLECT` mode is supported. The padding should be constant, and is only allowed for the H and W dimensions.

For issues, please create a GitHub issue with all the necessary details to reproduce.