
README

tools/tensorflow-quantization/examples/resnet/README.md


This example presents an end-to-end quantization-aware training (QAT) workflow (TF2 to ONNX) for the ResNet models in tf.keras.applications.

Contents

  • Requirements
  • Workflow
  • Results

Requirements

Install the base requirements and prepare the data. Please refer to the examples' README.

Workflow

Step 1: Model Quantization and Fine-tuning

Run the following command to quantize, fine-tune, and save the final graph in SavedModel format (checkpoints are also saved):

```sh
python run_qat_workflow.py
```
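In outline, the script quantizes the Keras model, fine-tunes it briefly, and exports the result. A pseudocode sketch (the steps and names here are illustrative, not the script's actual structure):

```
# Pseudocode sketch of the QAT workflow (names are illustrative):
model   = load ResNet from tf.keras.applications with pretrained weights
q_model = quantize_model(model)     # insert QDQ nodes via the toolkit
fine-tune q_model for a few epochs at a small learning rate
save q_model in SavedModel format (plus checkpoints)
convert the SavedModel to ONNX
```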

Step 2: Conversion to ONNX

Step 1 already converts the SavedModel to ONNX automatically. For the manual conversion steps, please see step 3 in EfficientNet's README.
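As a rough sketch of the manual path, the SavedModel can be converted with the tf2onnx command-line tool (the input and output paths below are placeholders, and the opset choice is an assumption):

```sh
# Convert the quantized SavedModel to ONNX (paths are illustrative).
python -m tf2onnx.convert \
    --saved-model <path/to/saved_model> \
    --output resnet_qat.onnx \
    --opset 13
```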

Step 3: TensorRT Deployment

Please refer to the examples' README.
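As a minimal sketch (file names are placeholders), an INT8 TensorRT engine can be built from the exported ONNX model and timed with trtexec:

```sh
# Build an INT8 engine from the QAT ONNX model and report latency.
trtexec --onnx=resnet_qat.onnx --int8 --saveEngine=resnet_qat.engine
```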

Results

Results were obtained on an NVIDIA A100 GPU with TensorRT 8.4 EA.

ResNet50-v1

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 75.05           | 7.95                  | 75.05            | 1.96                   |
| PTQ      | -               | -                     | 74.96            | 0.46                   |
| QAT      | 75.11 (ep5)     | -                     | 75.12            | 0.45                   |

ResNet50-v2

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 75.36           | 6.16                  | 75.37            | 2.35                   |
| PTQ      | -               | -                     | 75.48            | 0.57                   |
| QAT      | 75.59 (ep5)     | -                     | 75.65            | 0.57                   |

ResNet101-v1

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 76.47           | 15.92                 | 76.48            | 3.84                   |
| PTQ      | -               | -                     | 76.32            | 0.84                   |
| QAT      | 76.33 (ep30)    | -                     | 76.26            | 0.84                   |

ResNet101-v2

| Model    | TF accuracy (%) | TF latency (ms, bs=1) | TRT accuracy (%) | TRT latency (ms, bs=1) |
|----------|-----------------|-----------------------|------------------|------------------------|
| Baseline | 76.89           | 14.13                 | 76.88            | 4.55                   |
| PTQ      | -               | -                     | 76.94            | 1.05                   |
| QAT      | 77.20           | -                     | 77.15            | 1.05                   |

QAT fine-tuning for ResNet101-v2 used bs=32 (bs=64 ran out of memory).

Notes

  • QAT fine-tuning hyper-parameters:
    • Optimizer: piecewise_sgd, lr_schedule=[(1.0,1),(0.1,2),(0.01,7)] (default)
    • Hyper-parameters: bs=64, ep=10, lr=0.001.
    • QDQ nodes were added in the residual connections.
  • PTQ calibration: bs=64.
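The piecewise_sgd schedule above can be read as (multiplier, start_epoch) pairs applied to the base learning rate. A minimal sketch of that interpretation (the function name and tuple semantics are assumptions for illustration, not the toolkit's actual API):

```python
def piecewise_lr(base_lr, epoch, schedule=((1.0, 1), (0.1, 2), (0.01, 7))):
    """Scale base_lr by the multiplier of the latest phase whose
    start epoch is <= the current (1-indexed) epoch.

    Note: the (multiplier, start_epoch) reading of the schedule is an
    assumption based on the default lr_schedule listed above.
    """
    multiplier = schedule[0][0]
    for mult, start_epoch in schedule:
        if epoch >= start_epoch:
            multiplier = mult
    return base_lr * multiplier

# With the defaults above and lr=0.001:
#   epoch 1     -> 0.001
#   epochs 2-6  -> 0.0001
#   epochs 7-10 -> 0.00001
```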