README

tools/tensorflow-quantization/examples/mobilenet/README.md

This example presents an end-to-end QAT workflow (TF2 to ONNX) for the MobileNet models in tf.keras.applications.

Contents

  • Requirements
  • Workflow
  • Results

Requirements

Install the base requirements and prepare the data; please refer to the examples' README.

Workflow

Step 1: Model Quantization and Fine-tuning

The workflow is similar to ResNet's; only the model and its input pre-processing (mobilenet) differ.

Please run the following to quantize, fine-tune, and save the final graph in SavedModel format (checkpoints are also saved).

```sh
python run_qat_workflow.py
```
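For reference, the core of this step can be sketched with NVIDIA's tensorflow-quantization toolkit. The model choice, learning rate, and save path below are assumptions for illustration; run_qat_workflow.py remains the authoritative implementation (it also handles the data pipeline and checkpointing):

```python
import tensorflow as tf
from tensorflow_quantization import quantize_model  # NVIDIA TF2 QAT toolkit

# Load the pre-trained FP32 model (ImageNet weights assumed).
model = tf.keras.applications.MobileNet(weights="imagenet")

# Insert Q/DQ (quantize/dequantize) nodes into the graph.
q_model = quantize_model(model)

# Fine-tune briefly at a low learning rate, then save as SavedModel.
q_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# q_model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
q_model.save("saved_model_mobilenet_qat")
```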

Step 2: Conversion to ONNX

Step 1 already does the conversion from SavedModel to ONNX automatically. For manual steps, please see step 3 in EfficientNet's README.
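For reference, a manual conversion can be sketched with tf2onnx; the SavedModel directory and output file names here are assumptions matching the sketch above:

```sh
# Convert the QAT SavedModel to ONNX; opset 13 supports the QDQ ops.
python -m tf2onnx.convert \
    --saved-model saved_model_mobilenet_qat \
    --output mobilenet_qat.onnx \
    --opset 13
```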

Step 3: TensorRT Deployment

Please refer to the examples' README.
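As a quick deployment check, the ONNX model can be built into an INT8 engine with trtexec (file names are assumptions; trtexec also reports latency, which is how the numbers below are typically measured):

```sh
# Build an INT8 TensorRT engine from the QDQ ONNX model and benchmark it.
trtexec --onnx=mobilenet_qat.onnx --int8 \
        --saveEngine=mobilenet_qat.engine
```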

Results

Results were obtained on an NVIDIA A100 GPU with TensorRT 8.4.10.1.

MobileNet-v1

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 70.60           | 1.99                  | 70.60            | 0.32                   |
| PTQ      | -               | -                     | 69.31            | 0.16                   |
| QAT      | 70.51 (ep2)     | 50.49                 | 70.43            | 0.16                   |

Note: no residual connections exist in MobileNet-v1.

MobileNet-v2

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 71.77           | 3.71                  | 71.77            | 0.55                   |
| PTQ      | -               | -                     | 70.87            | 0.30                   |
| QAT      | 71.68 (ep1)     | 74.27                 | 71.62            | 0.30                   |

Note: residual connections exist in MobileNet-v2.
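From the TRT latency columns above, the INT8 (PTQ/QAT) speedup over the FP32 TensorRT baseline works out to about 2.0x for v1 and 1.8x for v2; a quick check with the numbers copied from the tables:

```python
# TRT latencies (ms, bs=1) taken from the tables above.
latency = {
    "MobileNet-v1": {"baseline": 0.32, "int8": 0.16},
    "MobileNet-v2": {"baseline": 0.55, "int8": 0.30},
}

def speedup(name):
    m = latency[name]
    return m["baseline"] / m["int8"]

for name in latency:
    print(f"{name}: {speedup(name):.2f}x")  # v1 -> 2.00x, v2 -> 1.83x
```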

Notes

  • QAT fine-tuning hyper-parameters:
    • Optimizer: piecewise_sgd, lr_schedule=[(1.0, 1), (0.1, 2), (0.01, 7)] (default)
    • Hyper-parameters: bs=64, ep=10, lr=0.001
  • PTQ calibration: bs=64.
  • MobileNet-v3 might not show good acceleration in TensorRT because its activation pattern (Conv->BN->((Add->Clip->Mul), ())->Mul) does not map to a fused kernel in TRT.
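The Add->Clip->Mul branch in that last note is the hard-swish activation used by MobileNet-v3, h_swish(x) = x * ReLU6(x + 3) / 6, multiplied back onto the input. A minimal pure-Python sketch of the elementwise computation:

```python
def hard_swish(x):
    # hard_swish(x) = x * clip(x + 3, 0, 6) / 6
    # The inner min/max is the Add -> Clip step; the outer product with x
    # is the final Mul from the pattern in the note above.
    return x * min(max(x + 3.0, 0.0), 6.0) / 6.0

print(hard_swish(3.0))  # -> 3.0 (for x >= 3, clip saturates at 6 and h_swish(x) = x)
```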