<!--Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

This model was released on 2020-04-06 and added to Hugging Face Transformers on 2020-11-16.


# MobileBERT

MobileBERT is a lightweight and efficient variant of BERT, specifically designed for resource-limited devices such as mobile phones. It retains BERT's architecture but significantly reduces model size and inference latency while maintaining strong performance on NLP tasks. MobileBERT achieves this through a bottleneck structure and carefully balanced self-attention and feedforward networks. The model is trained by knowledge transfer from a large BERT model with an inverted bottleneck structure.
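
The bottleneck design is visible directly in the model configuration. Below is a minimal sketch for inspecting it; the attribute names come from Transformers' [`MobileBertConfig`], and the commented values are the defaults for this checkpoint.

```python
from transformers import MobileBertConfig

config = MobileBertConfig.from_pretrained("google/mobilebert-uncased")
print(config.hidden_size)               # 512, width of hidden states between blocks
print(config.intra_bottleneck_size)     # 128, reduced width inside each bottleneck block
print(config.num_feedforward_networks)  # 4, stacked FFNs that balance self-attention
print(config.embedding_size)            # 128, compact embedding table
```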

You can find the original MobileBERT checkpoint under the [Google](https://huggingface.co/google) organization.

> [!TIP]
> Click on the MobileBERT models in the right sidebar for more examples of how to apply MobileBERT to different language tasks.

The examples below demonstrate how to predict the `[MASK]` token with [`Pipeline`] and [`AutoModel`].

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
from transformers import pipeline

# device=0 places the pipeline on the first GPU; omit it to run on CPU
pipeline = pipeline(
    task="fill-mask",
    model="google/mobilebert-uncased",
    device=0
)
pipeline("The capital of France is [MASK].")
```
</hfoption>
<hfoption id="AutoModel">

```python
import torch

from transformers import AutoModelForMaskedLM, AutoTokenizer


tokenizer = AutoTokenizer.from_pretrained(
    "google/mobilebert-uncased",
)
model = AutoModelForMaskedLM.from_pretrained(
    "google/mobilebert-uncased",
    device_map="auto",
)
inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
    predictions = outputs.logits

# Find the position of [MASK] and take the highest-scoring token there
masked_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
predicted_token_id = predictions[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"The predicted token is: {predicted_token}")
```

</hfoption>
</hfoptions>

## Notes

- Inputs should be padded on the right because BERT uses absolute position embeddings; see the sketch below.
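
  A minimal sketch of right-padding a batch (`padding_side` is set explicitly here for illustration; `"right"` is already the default for this tokenizer):

  ```python
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
  tokenizer.padding_side = "right"  # explicit; "right" is already the default

  batch = tokenizer(
      ["The capital of France is [MASK].", "Paris is the capital of [MASK]."],
      padding=True,
      return_tensors="pt",
  )
  # Pad tokens land at the end of the shorter sequence, so absolute position
  # embeddings still align with the real tokens.
  print(batch["attention_mask"])
  ```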

## MobileBertConfig

[[autodoc]] MobileBertConfig

## MobileBertTokenizer

[[autodoc]] MobileBertTokenizer

## MobileBertTokenizerFast

[[autodoc]] MobileBertTokenizerFast

## MobileBert specific outputs

[[autodoc]] models.mobilebert.modeling_mobilebert.MobileBertForPreTrainingOutput

## MobileBertModel

[[autodoc]] MobileBertModel
    - forward

## MobileBertForPreTraining

[[autodoc]] MobileBertForPreTraining
    - forward

## MobileBertForMaskedLM

[[autodoc]] MobileBertForMaskedLM
    - forward

## MobileBertForNextSentencePrediction

[[autodoc]] MobileBertForNextSentencePrediction
    - forward

## MobileBertForSequenceClassification

[[autodoc]] MobileBertForSequenceClassification
    - forward

## MobileBertForMultipleChoice

[[autodoc]] MobileBertForMultipleChoice
    - forward

## MobileBertForTokenClassification

[[autodoc]] MobileBertForTokenClassification
    - forward

## MobileBertForQuestionAnswering

[[autodoc]] MobileBertForQuestionAnswering
    - forward