<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Overview

Quantization lowers the memory requirements of loading and using a model by storing the weights in a lower precision while trying to preserve as much accuracy as possible. Weights are typically stored in a full-precision (fp32) floating point representation, but half-precision data types (fp16 or bf16) are increasingly popular given the large size of today's models. Some quantization methods reduce the precision even further, to integer representations like int8 or int4.
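As a rough back-of-the-envelope check (a sketch of the arithmetic only, not tied to any particular library), the memory needed just to hold the weights scales linearly with the bits per parameter:

```python
# Rough weight-memory estimate for a model with a given parameter count.
# Illustrative only: real usage also includes activations, the KV cache,
# quantization metadata (scales/zero-points), and framework overhead.

def weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Bytes for the weights alone = params * bits / 8, converted to GiB."""
    return num_params * bits_per_param / 8 / 1024**3

params = 7e9  # a hypothetical 7B-parameter model
for name, bits in [("fp32", 32), ("fp16/bf16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name:>9}: ~{weight_memory_gib(params, bits):.1f} GiB")
```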

Transformers supports many quantization methods, each with its own pros and cons, so you can pick the best one for your specific use case. Some methods require calibration for greater accuracy and extreme compression (1-2 bits), while others work out of the box with on-the-fly quantization.
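For example, an on-the-fly method like bitsandbytes quantizes the weights as the checkpoint is loaded, with no calibration dataset. A minimal sketch (the model id is a placeholder; substitute any causal LM you have access to):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize to 4-bit on the fly while loading; no calibration data needed.
quantization_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",  # placeholder model id
    quantization_config=quantization_config,
    device_map="auto",  # dispatch layers across available devices
)
```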

Use the table below to pick a quantization method based on your hardware and the number of bits you want to quantize to; a short loading example follows the table.

| Quantization Method | On-the-fly quantization | CPU | CUDA GPU | ROCm GPU | Metal (Apple Silicon) | Intel GPU | torch.compile() | Bits | PEFT Fine Tuning | Serializable with 🤗Transformers | 🤗Transformers Support | Link to library |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AQLM | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 🟢 | 1/2 | 🟢 | 🟢 | 🟢 | https://github.com/Vahe1994/AQLM |
| AutoRound | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 🔴 | 2/3/4/8 | 🔴 | 🟢 | 🟢 | https://github.com/intel/auto-round |
| AWQ | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | ? | 4 | 🟢 | 🟢 | 🟢 | https://github.com/casper-hansen/AutoAWQ |
| bitsandbytes | 🟢 | 🟢 | 🟢 | 🟡 | 🟡 | 🟢 | 🟢 | 4/8 | 🟢 | 🟢 | 🟢 | https://github.com/bitsandbytes-foundation/bitsandbytes |
| compressed-tensors | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 1 - 8 | 🟢 | 🟢 | 🟢 | https://github.com/neuralmagic/compressed-tensors |
| EETQ | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
| Four Over Six | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 4 | 🔴 | 🟢 | 🟢 | https://github.com/mit-han-lab/fouroversix |
| FP-Quant | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 4 | 🔴 | 🟢 | 🟢 | https://github.com/IST-DASLab/FP-Quant |
| GGUF / GGML (llama.cpp) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🟢 | 🔴 | 1 - 8 | 🔴 | See notes | See notes | https://github.com/ggerganov/llama.cpp |
| GPTQModel | 🔴 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/ModelCloud/GPTQModel |
| HIGGS | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 2/4 | 🔴 | 🟢 | 🟢 | https://github.com/HanGuo97/flute |
| HQQ | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🟢 | 🟢 | 1 - 8 | 🟢 | 🔴 | 🟢 | https://github.com/mobiusml/hqq/ |
| Metal | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 🔴 | 🔴 | 2/4/8 | 🔴 | 🟢 | 🟢 | Hub Kernels |
| optimum-quanto | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🟢 | 🟢 | 2/4/8 | 🔴 | 🔴 | 🟢 | https://github.com/huggingface/optimum-quanto |
| SINQ | 🟢 | 🟢 | 🟢 | 🟡 | 🟡 | 🟡 | 🟡 | 2/3/4/6/8 | 🔴 | 🟢 | 🟢 | https://github.com/huawei-csl/SINQ |
| FBGEMM_FP8 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | https://github.com/pytorch/FBGEMM |
| torchao | 🟢 | 🟢 | 🟢 | 🔴 | 🟡 | 🟡 | 🟢 | 4/8 | 🟢 | 🔴 | 🟢 | https://github.com/pytorch/ao |
| VPTQ | 🔴 | 🔴 | 🟢 | 🟡 | 🔴 | 🔴 | 🟢 | 1 - 8 | 🔴 | 🟢 | 🟢 | https://github.com/microsoft/VPTQ |
| FINEGRAINED_FP8 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🟢 | 🔴 | 8 | 🔴 | 🟢 | 🟢 | Built-in |
| SpQR | 🔴 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 3 | 🔴 | 🟢 | 🟢 | https://github.com/Vahe1994/SpQR/ |
| Quark | 🔴 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | ? | 2/4/6/8/9/16 | 🔴 | 🔴 | 🟢 | https://quark.docs.amd.com/latest/ |
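To illustrate reading the table: if you are on Apple Silicon and want on-the-fly quantization, the Metal column points to optimum-quanto, among others. A minimal sketch using Transformers' `QuantoConfig` (the model id is a placeholder):

```python
from transformers import AutoModelForCausalLM, QuantoConfig

# optimum-quanto runs on CPU/Metal and quantizes on the fly at load time.
quantization_config = QuantoConfig(weights="int8")

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder model id
    quantization_config=quantization_config,
)
```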

## Resources

If you are new to quantization, we recommend checking out the beginner-friendly quantization courses created in collaboration with DeepLearning.AI.

### User-Friendly Quantization Tools

If you are looking for a user-friendly quantization experience, you can use the following community spaces and notebooks: