# SINQ
Sinkhorn-Normalized Quantization (SINQ) is a fast, plug-and-play, model-agnostic quantization technique delivering state-of-the-art performance for Large Language Models without sacrificing accuracy.
| Feature | SINQ | HQQ | A-SINQ | AWQ |
|---|---|---|---|---|
| Calibration | Calibration-free | Calibration-free | Calibrated | Calibrated |
| Quantization Type | Symmetric & Asymmetric | Asymmetric only | Symmetric & Asymmetric | Symmetric & Asymmetric |
| NF4 Support | Yes | No | Yes | No |
| Quantization Speed | ~2× faster than HQQ | Slower | ~4× faster than AWQ | Slower |
| Model Quality | Higher | Lower | Higher | Lower |
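To get a feel for the savings, here is a back-of-the-envelope estimate of the weight memory for a 1.7B-parameter model at bf16 versus 4-bit. It deliberately ignores quantization metadata (per-group scales) and layers kept in full precision, so real footprints will be somewhat larger:

```python
# Rough weight-memory estimate for a 1.7B-parameter model.
# Ignores quantization metadata and unquantized layers (e.g. lm_head),
# so actual footprints will be somewhat larger.
params = 1.7e9

bf16_gb = params * 2 / 1024**3    # 16 bits = 2 bytes per weight
int4_gb = params * 0.5 / 1024**3  # 4 bits = 0.5 bytes per weight

print(f"bf16: {bf16_gb:.2f} GiB, 4-bit: {int4_gb:.2f} GiB")
```

The 4-bit weights take a quarter of the bf16 footprint, which is what makes 4-bit quantization attractive for fitting larger models on a single GPU.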
Want to know more? See the [SINQ paper](http://arxiv.org/abs/2509.22944).
First, install the package. It can be done in two ways: from PyPI,

```bash
pip install sinq
```

or from source, following the instructions in the official SINQ GitHub repository.
Quantizing any ๐ค Hugging Face model with SINQ is simple and takes only a few lines of code.
First, create a [`SinqConfig`] and specify the following parameters:
| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `nbits` | Bit-width for weight quantization | `int` | 2, 3, 4, 5, 6, 8 | 4 |
| `tiling_mode` | Weight matrix tiling strategy | `str` | `"1D"`, `"2D"` | `"1D"` |
| `group_size` | Number of weights per quantization group | `int` | 64, 128 | 64 |
| `method` | Quantization method | `str` | `"sinq"`, `"asinq"` | `"sinq"` |
| `modules_to_not_convert` | Layers that are **not** quantized | `List[str]` | `["lm_head", ...]` | `["lm_head"]` |
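To see how `group_size` trades quality against storage, note that each group carries its own quantization metadata, which is amortized over the group. The sketch below assumes, hypothetically, 32 bits of metadata per group (e.g. a 16-bit scale plus a 16-bit shift); the exact format SINQ stores may differ:

```python
# Hypothetical effective bits-per-weight for group-wise quantization:
# nbits for the weight itself, plus per-group metadata amortized over
# the group. meta_bits=32 is an assumption, not SINQ's actual format.
def effective_bits(nbits: int, group_size: int, meta_bits: int = 32) -> float:
    return nbits + meta_bits / group_size

print(effective_bits(4, 64))   # 4.5
print(effective_bits(4, 128))  # 4.25
```

Larger groups lower the storage overhead but force more weights to share the same scale, which is why 64 and 128 are the supported middle-ground options.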
Then specify the model you want to quantize and pass the `SinqConfig` as the quantization configuration:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, SinqConfig

model_name = "Qwen/Qwen3-1.7B"

cfg = SinqConfig(
    nbits=4,
    group_size=64,
    tiling_mode="1D",
    method="sinq",
    modules_to_not_convert=["lm_head"],
)

tok = AutoTokenizer.from_pretrained(model_name)
qmodel = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=cfg,
    dtype=torch.bfloat16,
)
```
That's it! Your model is now quantized with SINQ and ready for inference or saving.
Check the official SINQ GitHub repository to stay updated!
If you want to reuse a quantized model later, save it to disk or push it to the Hugging Face Hub and reload it without needing the base FP weights. If you installed SINQ from source, call the `patch_hf_pretrained_io` function before reloading a quantized model:
```python
from sinq.hf_io import patch_hf_pretrained_io

# Save the SINQ-quantized model
qmodel.save_pretrained("/path/to/save/qwen3-1.7B-sinq-4bit")
qmodel.push_to_hub("HF_Hub_username/qwen3-1.7B-sinq-4bit")
tok.push_to_hub("HF_Hub_username/qwen3-1.7B-sinq-4bit")

# Patch the from_pretrained I/O before reloading
patch_hf_pretrained_io()

# Reload the SINQ-quantized model
hf_hub_model = "HF_Hub_username/qwen3-1.7B-sinq-4bit"
tok = AutoTokenizer.from_pretrained(hf_hub_model)
qmodel = AutoModelForCausalLM.from_pretrained(hf_hub_model)
```
Otherwise, if you installed SINQ through pip, you can simply use the built-in Hugging Face functions:
```python
# --- Save to a folder (sharded safetensors) ---
# 'qmodel' must already be SINQ-quantized

# Save locally
qmodel.save_pretrained("/path/to/save/qwen3-1.7B-sinq-4bit")

# Push to the Hub
qmodel.push_to_hub("HF_Hub_username/qwen3-1.7B-sinq-4bit")
tok.push_to_hub("HF_Hub_username/qwen3-1.7B-sinq-4bit")

# --- Reload later ---
save_dir = "/path/to/save/qwen3-1.7B-sinq-4bit"
hf_hub_model = "HF_Hub_username/qwen3-1.7B-sinq-4bit"

# From a local directory
tok = AutoTokenizer.from_pretrained(save_dir)
qmodel = AutoModelForCausalLM.from_pretrained(save_dir)

# From the Hugging Face Hub
tok = AutoTokenizer.from_pretrained(hf_hub_model)
qmodel = AutoModelForCausalLM.from_pretrained(hf_hub_model)
```
Your model is now loaded and ready for inference!
Note: if the model has been quantized to 4-bit and the `gemlite` library is installed, the faster GemLite kernels are used for inference.
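As a quick sanity check (a minimal sketch, not part of the SINQ API), you can verify whether `gemlite` is importable in your environment before running inference:

```python
import importlib.util

# True if the gemlite package is installed and importable
has_gemlite = importlib.util.find_spec("gemlite") is not None
print("gemlite available:", has_gemlite)
```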
## Evaluation with the lm-eval framework

Below is a minimal example showing how to evaluate a SINQ-quantized model on a benchmark dataset:
```python
from lm_eval import evaluator
from lm_eval.models.huggingface import HFLM

device = "cuda:0"

# Wrap the already quantized model and tokenizer with HFLM
lm = HFLM(pretrained=qmodel, tokenizer=tok, device=device)

# Evaluate (many tasks are available in lm-eval, such as MMLU and HellaSwag)
results = evaluator.simple_evaluate(
    model=lm,
    tasks=["wikitext"],  # small and fast benchmark
    device=device,
)
```
If you find SINQ useful in your research or applications, please cite the paper:
```bibtex
@misc{muller2025sinq,
  title={SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights},
  author={Lorenz K. Muller and Philippe Bich and Jiawei Zhuang and Ahmet Celik and Luca Benfenati and Lukas Cavigelli},
  year={2025},
  eprint={2509.22944},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={http://arxiv.org/abs/2509.22944}
}
```
Currently, the A-SINQ method is not supported in Hugging Face Transformers. Please refer to the official SINQ repository to quantize a model with this strategy. At the moment, SINQ quantization and SINQ-quantized models do not support multi-GPU execution, so if your system has multiple GPUs, please specify which one should be used.
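One common way to pin a single GPU (a generic CUDA recipe, not a SINQ-specific API) is to restrict device visibility before any model is loaded:

```python
import os

# Make only GPU 0 visible to this process; must be set before
# CUDA is initialized (i.e. before importing/loading the model)
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

With this set, `"cuda:0"` inside the process maps to the single visible device, so the quantization and inference snippets above run entirely on one GPU.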