# MobileNet

This folder contains the building code for the MobilenetV2 and MobilenetV3 networks. The architectural definitions are located in mobilenet_v2.py and mobilenet_v3.py, respectively.

For MobilenetV1, please refer to this page.

We have also introduced a family of Mobilenets customized for the Edge TPU accelerator found in Google Pixel 4 devices. The architectural definition for MobilenetEdgeTPU is located in mobilenet_v3.py.
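
As a rough sketch of how these definitions are typically instantiated (this assumes a TF 1.x environment with the repository root on PYTHONPATH; the `training_scope`/`mobilenet` entry points and the V3 `large` constructor follow the layout of the files named above and should be checked against the current code):

```python
import tensorflow as tf  # TF 1.x, as used by slim
from nets.mobilenet import mobilenet_v2
from nets.mobilenet import mobilenet_v3

# NHWC float input; 224x224 matches the checkpoints listed below.
images = tf.placeholder(tf.float32, (None, 224, 224, 3))

# training_scope() configures batch norm, weight decay, etc.;
# is_training=False selects inference behavior.
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope(is_training=False)):
    logits_v2, endpoints_v2 = mobilenet_v2.mobilenet(images)

# The V3 module exposes analogous constructors; depth_multiplier is
# the "dm" column in the checkpoint tables below.
with tf.contrib.slim.arg_scope(mobilenet_v3.training_scope(is_training=False)):
    logits_v3, endpoints_v3 = mobilenet_v3.large(images, depth_multiplier=1.0)
```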

## Performance

### Mobilenet V3 latency

This is the timing of MobilenetV2 vs MobilenetV3 using TF-Lite on the large core of a Pixel 1 phone.

### MACs

MACs, also sometimes known as MAdds, are the number of multiply-accumulates needed to compute an inference on a single image, and are a common metric for measuring model efficiency. Full-size MobilenetV3 at image size 224 uses ~215 million MAdds (MMAdds) while achieving 75.1% accuracy; MobilenetV2 uses ~300 MMAdds and achieves 72% accuracy. By comparison, ResNet-50 uses approximately 3500 MMAdds while achieving 76% accuracy.
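
For intuition about what these numbers count: the multiply-adds of a standard 2-D convolution are output_height x output_width x kernel_height x kernel_width x input_channels x output_channels, and depthwise convolutions drop the output-channel factor. A small sketch of that bookkeeping (the layer shapes are the usual Mobilenet stem, used purely as an illustration):

```python
def conv_madds(out_h, out_w, k_h, k_w, c_in, c_out):
    """Multiply-adds for a standard 2-D convolution (bias ignored)."""
    return out_h * out_w * k_h * k_w * c_in * c_out

def depthwise_madds(out_h, out_w, k_h, k_w, c):
    """A depthwise conv applies one k_h x k_w filter per channel."""
    return out_h * out_w * k_h * k_w * c

# Illustrative example: the first layer of most Mobilenets is a 3x3
# stride-2 conv from 3 to 32 channels on a 224x224 input -> 112x112 output.
print(conv_madds(112, 112, 3, 3, 3, 32) / 1e6)   # ~10.8 MMAdds
print(depthwise_madds(112, 112, 3, 3, 32) / 1e6)  # ~3.6 MMAdds
```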

Below is a graph comparing Mobilenets with a few selected networks. The size of each blob represents the number of parameters. Note that there are no published parameter counts for ShuffleNet; we estimate it to be comparable to MobilenetV2.

### Mobilenet EdgeTPU latency

The figure below shows the Pixel 4 Edge TPU latency of int8-quantized Mobilenet EdgeTPU compared with MobilenetV2 and the minimalistic variants of MobilenetV3 (see below).

## Pretrained models

### Mobilenet V3 Imagenet Checkpoints

All Mobilenet V3 checkpoints were trained at image resolution 224x224. All phone latencies are in milliseconds, measured on the large core. In addition to the large and small models, this page also contains so-called minimalistic models. These have the same per-layer dimensions as MobilenetV3, but they don't utilize any of the advanced blocks (squeeze-and-excite units, hard-swish, and 5x5 convolutions). While these models are less efficient on CPU, we find that they are much more performant on GPU/DSP.

| Imagenet Checkpoint | MACs (M) | Params (M) | Top1 | Pixel 1 (ms) | Pixel 2 (ms) | Pixel 3 (ms) |
|---|---|---|---|---|---|---|
| Large dm=1 (float) | 217 | 5.4 | 75.2 | 51.2 | 61 | 44 |
| Large dm=1 (8-bit) | 217 | 5.4 | 73.9 | 44 | 42.5 | 32 |
| Large dm=0.75 (float) | 155 | 4.0 | 73.3 | 39.8 | 48 | 34 |
| Small dm=1 (float) | 66 | 2.9 | 67.5 | 15.8 | 19.4 | 14.4 |
| Small dm=1 (8-bit) | 66 | 2.9 | 64.9 | 15.5 | 15 | 10.7 |
| Small dm=0.75 (float) | 44 | 2.4 | 65.4 | 12.8 | 15.9 | 11.6 |

Minimalistic checkpoints:

| Imagenet Checkpoint | MACs (M) | Params (M) | Top1 | Pixel 1 (ms) | Pixel 2 (ms) | Pixel 3 (ms) |
|---|---|---|---|---|---|---|
| Large minimalistic (float) | 209 | 3.9 | 72.3 | 44.1 | 51 | 35 |
| Large minimalistic (8-bit) | 209 | 3.9 | 71.3 | 37 | 35 | 27 |
| Small minimalistic (float) | 65 | 2.0 | 61.9 | 12.2 | 15.1 | 11 |

Edge TPU checkpoints:

| Imagenet Checkpoint | MACs (M) | Params (M) | Top1 | Pixel 4 Edge TPU (ms) | Pixel 4 CPU (ms) |
|---|---|---|---|---|---|
| MobilenetEdgeTPU dm=0.75 (8-bit) | 624 | 2.9 | 73.5 | 3.1 | 13.8 |
| MobilenetEdgeTPU dm=1 (8-bit) | 990 | 4.0 | 75.6 | 3.6 | 20.6 |

Note: the 8-bit quantized versions of the MobilenetEdgeTPU models were obtained using TensorFlow Lite's post-training quantization tool.
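
A sketch of that post-training quantization flow using the current `tf.lite.TFLiteConverter` API (the SavedModel path and calibration data here are placeholders; the original checkpoints may have been converted with an earlier version of the tool):

```python
import numpy as np
import tensorflow as tf

# Placeholder path to an exported float MobilenetEdgeTPU SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/mobilenet_edgetpu/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Full-integer quantization calibrates activation ranges on a small
# representative dataset; random data stands in for real images here.
def representative_dataset():
    for _ in range(100):
        yield [np.random.uniform(-1, 1, (1, 224, 224, 3)).astype(np.float32)]

converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

with open("/tmp/mobilenet_edgetpu_int8.tflite", "wb") as f:
    f.write(tflite_model)
```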

### Mobilenet V2 Imagenet Checkpoints

| Classification Checkpoint | Quantized | MACs (M) | Parameters (M) | Top 1 Accuracy | Top 5 Accuracy | Mobile CPU (ms), Pixel 1 |
|---|---|---|---|---|---|---|
| float_v2_1.4_224 | uint8 | 582 | 6.06 | 75.0 | 92.5 | 138.0 |
| float_v2_1.3_224 | uint8 | 509 | 5.34 | 74.4 | 92.1 | 123.0 |
| float_v2_1.0_224 | uint8 | 300 | 3.47 | 71.8 | 91.0 | 73.8 |
| float_v2_1.0_192 | uint8 | 221 | 3.47 | 70.7 | 90.1 | 55.1 |
| float_v2_1.0_160 | uint8 | 154 | 3.47 | 68.8 | 89.0 | 40.2 |
| float_v2_1.0_128 | uint8 | 99 | 3.47 | 65.3 | 86.9 | 27.6 |
| float_v2_1.0_96 | uint8 | 56 | 3.47 | 60.3 | 83.2 | 17.6 |
| float_v2_0.75_224 | uint8 | 209 | 2.61 | 69.8 | 89.6 | 55.8 |
| float_v2_0.75_192 | uint8 | 153 | 2.61 | 68.7 | 88.9 | 41.6 |
| float_v2_0.75_160 | uint8 | 107 | 2.61 | 66.4 | 87.3 | 30.4 |
| float_v2_0.75_128 | uint8 | 69 | 2.61 | 63.2 | 85.3 | 21.9 |
| float_v2_0.75_96 | uint8 | 39 | 2.61 | 58.8 | 81.6 | 14.2 |
| float_v2_0.5_224 | uint8 | 97 | 1.95 | 65.4 | 86.4 | 28.7 |
| float_v2_0.5_192 | uint8 | 71 | 1.95 | 63.9 | 85.4 | 21.1 |
| float_v2_0.5_160 | uint8 | 50 | 1.95 | 61.0 | 83.2 | 14.9 |
| float_v2_0.5_128 | uint8 | 32 | 1.95 | 57.7 | 80.8 | 9.9 |
| float_v2_0.5_96 | uint8 | 18 | 1.95 | 51.2 | 75.8 | 6.4 |
| float_v2_0.35_224 | uint8 | 59 | 1.66 | 60.3 | 82.9 | 19.7 |
| float_v2_0.35_192 | uint8 | 43 | 1.66 | 58.2 | 81.2 | 14.6 |
| float_v2_0.35_160 | uint8 | 30 | 1.66 | 55.7 | 79.1 | 10.5 |
| float_v2_0.35_128 | uint8 | 20 | 1.66 | 50.8 | 75.0 | 6.9 |
| float_v2_0.35_96 | uint8 | 11 | 1.66 | 45.5 | 70.4 | 4.5 |

## Training

### V3

The following configuration achieves 74.6% top-1 accuracy using an 8-GPU setup, and 75.2% using a 2x2 TPU setup.

| Setting | Value | Notes |
|---|---|---|
| Final Top 1 Accuracy | 74.6 | |
| learning_rate | 0.16 | Total learning rate (per-clone learning rate is 0.02) |
| rmsprop_momentum | 0.9 | |
| rmsprop_decay | 0.9 | |
| rmsprop_epsilon | 0.002 | |
| learning_rate_decay_factor | 0.99 | |
| optimizer | RMSProp | |
| warmup_epochs | 5 | Slim uses per-clone epochs, so the flag value is 0.6 |
| num_epochs_per_decay | 3 | Slim uses per-clone epochs, so the flag value is 0.375 |
| batch_size (per chip) | 192 | |
| moving_average_decay | 0.9999 | |
| weight_decay | 1e-5 | |
| init_stddev | 0.008 | |
| dropout_keep_prob | 0.8 | |
| bn_moving_average_decay | 0.997 | |
| bn_epsilon | 0.001 | |
| label_smoothing | 0.1 | |
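
The per-clone notes in the table are simple division by the clone count; a short sketch of the conversion for the 8-GPU setup:

```python
NUM_CLONES = 8  # one clone per GPU in the 8-GPU setup

# Slim averages across clones, so the per-clone rate is
# the total rate divided by the clone count.
per_clone_learning_rate = 0.16 / NUM_CLONES       # 0.02, as noted in the table

# Slim also counts epochs per clone, so wall-clock epochs are divided too.
warmup_epochs_flag = 5 / NUM_CLONES               # 0.625, listed above as ~0.6
num_epochs_per_decay_flag = 3 / NUM_CLONES        # 0.375
```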

### V2

The numbers above can be reproduced using slim's train_image_classifier. Below is the set of parameters that achieves 72.0% top-1 for full-size MobilenetV2 after about 700K steps when trained on 8 GPUs. If trained on a single GPU, full convergence takes about 5.5M steps. Also note that learning_rate and num_epochs_per_decay both need to be adjusted depending on how many GPUs are being used, due to slim's internal averaging across clones.

```bash
--model_name="mobilenet_v2"
--learning_rate=0.045 * NUM_GPUS   # slim internally averages clones so we compensate
--preprocessing_name="inception_v2"
--label_smoothing=0.1
--moving_average_decay=0.9999
--batch_size=96
--num_clones=NUM_GPUS   # any number between 1 and 8, depending on your hardware setup
--learning_rate_decay_factor=0.98
--num_epochs_per_decay=2.5 / NUM_GPUS   # train_image_classifier does per-clone epochs
```
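
To make the NUM_GPUS-dependent flags concrete, here is a hypothetical helper that expands them for a given GPU count (the function is an illustration, not part of the repository; the keys mirror the flags above):

```python
def mobilenet_v2_train_flags(num_gpus):
    """Expand the NUM_GPUS-dependent training flags listed above."""
    return {
        "model_name": "mobilenet_v2",
        "preprocessing_name": "inception_v2",
        "learning_rate": 0.045 * num_gpus,       # compensates slim's clone averaging
        "num_clones": num_gpus,
        "num_epochs_per_decay": 2.5 / num_gpus,  # slim counts epochs per clone
        "learning_rate_decay_factor": 0.98,
        "label_smoothing": 0.1,
        "moving_average_decay": 0.9999,
        "batch_size": 96,
    }

# For the 8-GPU run: learning_rate=0.36, num_epochs_per_decay=0.3125.
print(mobilenet_v2_train_flags(8))
```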

## Example

See this IPython notebook, or open and run the network directly in Colaboratory.
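
If the notebook is unavailable, a minimal inference sketch in the same TF 1.x style (the checkpoint path is a placeholder; the exponential moving average of the weights is restored because the checkpoints were trained with moving_average_decay, per the training tables above):

```python
import numpy as np
import tensorflow as tf  # TF 1.x, as used by slim
from nets.mobilenet import mobilenet_v2

images = tf.placeholder(tf.float32, (None, 224, 224, 3))
with tf.contrib.slim.arg_scope(mobilenet_v2.training_scope(is_training=False)):
    logits, endpoints = mobilenet_v2.mobilenet(images)

# The published checkpoints store EMA shadow variables; restore those.
ema = tf.train.ExponentialMovingAverage(0.999)
saver = tf.train.Saver(ema.variables_to_restore())

with tf.Session() as sess:
    saver.restore(sess, "/tmp/mobilenet_v2_1.0_224.ckpt")  # placeholder path
    # Inputs are expected in [-1, 1]; a zero image stands in for real data.
    probs = sess.run(endpoints["Predictions"],
                     {images: np.zeros((1, 224, 224, 3), np.float32)})
```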