# MediaPipe SelfieSegmentation (TFJS)
MediaPipe SelfieSegmentation-TFJS uses the TF.js runtime to execute the model, the preprocessing, and the postprocessing steps.
Two variants of the model are offered.
To use MediaPipe SelfieSegmentation, you need to first select a runtime (TensorFlow.js or MediaPipe). This guide is for the TensorFlow.js runtime. The guide for the MediaPipe runtime can be found here.
Via script tags:

```html
<!-- Require the peer dependencies of body-segmentation. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>

<!-- You must explicitly require a TF.js backend if you're not using the TF.js union bundle. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-webgl"></script>

<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/body-segmentation"></script>
```
Via npm:

```sh
yarn add @tensorflow-models/body-segmentation
yarn add @tensorflow/tfjs-core @tensorflow/tfjs-converter
yarn add @tensorflow/tfjs-backend-webgl
```
If you are using the Body Segmentation API via npm, you need to import the libraries first.

```js
import * as bodySegmentation from '@tensorflow-models/body-segmentation';
import * as tf from '@tensorflow/tfjs-core';
// Register WebGL backend.
import '@tensorflow/tfjs-backend-webgl';
```
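Importing the backend package registers it, but you can also select it explicitly and wait for initialization before creating a segmenter. The sketch below uses the standard `tf.setBackend` and `tf.ready` calls from tfjs-core; the `'cpu'` fallback is an assumption for environments without WebGL.

```javascript
import * as tf from '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgl';

// Explicitly pick the WebGL backend and wait until it is initialized.
// setBackend resolves to false if the backend could not be set, in which
// case you could fall back to another registered backend such as 'cpu'.
const ok = await tf.setBackend('webgl');
await tf.ready();
console.log('Using backend:', tf.getBackend(), '(setBackend succeeded:', ok, ')');
```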
Pass in bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation from the
bodySegmentation.SupportedModel enum list along with a segmenterConfig to the
createSegmenter method to load and initialize the model.
`segmenterConfig` is an object that defines MediaPipeSelfieSegmentation specific configurations for `MediaPipeSelfieSegmentationTfjsModelConfig`:

*   *runtime*: Must be set to `'tfjs'`.
*   *modelType*: specify which variant to load from `MediaPipeSelfieSegmentationModelType` (i.e., 'general', 'landscape'). If unset, the default is 'general'.
*   *modelUrl*: An optional string that specifies a custom URL of the segmentation model. This is useful for areas/countries that don't have access to the model hosted on tf.hub. It also accepts `io.IOHandler`, which can be used with tfjs-react-native to load the model from the app bundle directory using bundleResourceIO.
```js
const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;
const segmenterConfig = {
  runtime: 'tfjs',
};
const segmenter = await bodySegmentation.createSegmenter(model, segmenterConfig);
```
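A config that uses the other documented options might look like the following sketch. The `'landscape'` model type comes from the list above; the `modelUrl` value here is a hypothetical placeholder for a self-hosted mirror of the model files.

```javascript
import * as bodySegmentation from '@tensorflow-models/body-segmentation';

const model = bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation;

// Sketch: load the 'landscape' variant from a self-hosted model URL.
// The URL below is a placeholder; point it at wherever you mirror the
// model.json and weight files.
const segmenterConfig = {
  runtime: 'tfjs',
  modelType: 'landscape',
  modelUrl: 'https://example.com/models/selfie_segmentation/model.json',
};
const segmenter = await bodySegmentation.createSegmenter(model, segmenterConfig);
```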
Now you can use the segmenter to segment people. The segmentPeople method
accepts both image and video in many formats, including:
HTMLVideoElement, HTMLImageElement, HTMLCanvasElement, ImageData, Tensor3D. If you want more
options, you can pass in a second segmentationConfig parameter.
`segmentationConfig` is an object that defines MediaPipe SelfieSegmentation specific configurations for `MediaPipeSelfieSegmentationTfjsSegmentationConfig`:

*   *flipHorizontal*: Optional. Defaults to false. Set it to true when the image data comes from a camera feed that should be mirrored.
The following code snippet demonstrates how to run the model inference:

```js
const segmentationConfig = {flipHorizontal: false};
const people = await segmenter.segmentPeople(image, segmentationConfig);
```
The returned `people` array contains a single element only; all the people segmented in the image are found in that single segmentation element. The only label returned by the model's `maskValueToLabel` function is 'person'.
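One common way to consume the result is to render it. The sketch below assumes the `toBinaryMask` and `drawMask` helpers exported by the Body Segmentation API, plus an `<img>` and `<canvas>` element in the page; element ids and the color choices are illustrative.

```javascript
import * as bodySegmentation from '@tensorflow-models/body-segmentation';

// Sketch: overlay the segmentation result on a canvas.
// Assumes `segmenter` was created as shown above and the page contains
// an <img id="image"> and a <canvas id="canvas"> element.
const image = document.getElementById('image');
const canvas = document.getElementById('canvas');

const people = await segmenter.segmentPeople(image);

// Convert the segmentation to a binary mask: the person (foreground)
// stays transparent, the background is filled with opaque black.
const foreground = {r: 0, g: 0, b: 0, a: 0};
const background = {r: 0, g: 0, b: 0, a: 255};
const mask = await bodySegmentation.toBinaryMask(people, foreground, background);

// Composite the mask over the original image with 70% opacity and a
// slight blur on the mask edges.
await bodySegmentation.drawMask(canvas, image, mask, 0.7, 3);
```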
Please refer to the Body Segmentation API README for the structure of the returned `people` array.