# MobileNet V1
This model was released on 2017-04-17 and added to Hugging Face Transformers on 2022-11-21.
MobileNet V1 is a family of efficient convolutional neural networks optimized for on-device or embedded vision tasks. It achieves this efficiency by using depth-wise separable convolutions instead of standard convolutions. The architecture allows easy trade-offs between latency and accuracy through two hyperparameters: a width multiplier (alpha) and an image resolution multiplier.

You can find all the original MobileNet checkpoints under the Google organization.
> [!TIP]
> Click on the MobileNet V1 models in the right sidebar for more examples of how to apply MobileNet to different vision tasks.
The example below demonstrates how to classify an image with [`Pipeline`] or the [`AutoModel`] class.
```py
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="google/mobilenet_v1_1.0_224",
    device=0,
)
pipeline("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```
```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = AutoModelForImageClassification.from_pretrained(
    "google/mobilenet_v1_1.0_224",
    device_map="auto",
)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt").to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax(dim=-1).item()
predicted_class_label = model.config.id2label[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```
Checkpoint names follow the pattern `mobilenet_v1_{depth_multiplier}_{resolution}`, like `mobilenet_v1_1.0_224`. `1.0` is the depth multiplier and `224` is the image resolution.
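Both knobs are also exposed on [`MobileNetV1Config`] if you want to build a randomly initialized variant yourself. A minimal sketch using the config's `depth_multiplier` and `image_size` arguments (the `0.25`/`192` values are illustrative):

```py
from transformers import MobileNetV1Config, MobileNetV1Model

# a narrower, lower-resolution variant: width multiplier (alpha) of 0.25 at 192x192
# note this creates a randomly initialized model, not a pretrained checkpoint
config = MobileNetV1Config(depth_multiplier=0.25, image_size=192)
model = MobileNetV1Model(config)
```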
While the model is trained on images of a specific size, the architecture works with images of different sizes (minimum 32x32). The [`MobileNetV1ImageProcessor`] handles the necessary preprocessing.
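For example, you can override the processor's resize and crop settings at load time. The sketch below assumes the processor's `size` and `crop_size` options; the 128x128 target is chosen only for illustration:

```py
from transformers import AutoImageProcessor

# illustrative 128x128 target; the architecture accepts any resolution >= 32x32
image_processor = AutoImageProcessor.from_pretrained(
    "google/mobilenet_v1_1.0_224",
    size={"shortest_edge": 128},
    crop_size={"height": 128, "width": 128},
)
```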
MobileNet is pretrained on ImageNet-1k, a dataset with 1000 classes. However, the model actually predicts 1001 classes. The additional class is an extra "background" class (index 0).
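You can verify this offset directly from the checkpoint's config:

```py
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")
print(model.config.num_labels)   # 1001: the 1000 ImageNet-1k classes plus the extra class
print(model.config.id2label[0])  # the extra class at index 0
```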
The original TensorFlow checkpoints determine the padding amount at inference because it depends on the input image size. To use the native PyTorch padding behavior, set `tf_padding=False` in [`MobileNetV1Config`].

```py
from transformers import MobileNetV1Config

# disable TensorFlow-style dynamic padding in favor of native PyTorch padding
config = MobileNetV1Config.from_pretrained("google/mobilenet_v1_1.0_224", tf_padding=False)
```
The Transformers implementation does not support the following features.

- `output_stride` values other than 32. For smaller output strides, the original implementation uses dilated convolutions to prevent the spatial resolution from being reduced further.
- Extracting the output of a specific layer only. `output_hidden_states=True` returns all intermediate hidden states, so you have to select the layer you need yourself (see the sketch below).
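As a workaround for the last point, request all hidden states and index the one you need. A minimal sketch with [`MobileNetV1Model`]:

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1Model

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1Model.from_pretrained("google/mobilenet_v1_1.0_224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    # all intermediate hidden states are returned at once; pick the one you need
    outputs = model(**inputs, output_hidden_states=True)

print(len(outputs.hidden_states))
print(outputs.hidden_states[-1].shape)  # the last intermediate feature map
```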
## MobileNetV1Config

[[autodoc]] MobileNetV1Config

## MobileNetV1ImageProcessor

[[autodoc]] MobileNetV1ImageProcessor
    - preprocess

## MobileNetV1ImageProcessorFast

[[autodoc]] MobileNetV1ImageProcessorFast
    - preprocess

## MobileNetV1Model

[[autodoc]] MobileNetV1Model
    - forward

## MobileNetV1ForImageClassification

[[autodoc]] MobileNetV1ForImageClassification
    - forward