docs/source/en/model_doc/bitnet.md
This model was released on 2025-04-16 and added to Hugging Face Transformers on 2025-04-28.
BitNet b1.58 2B4T is a native 1-bit large language model at the 2-billion-parameter scale. Trained on a corpus of 4 trillion tokens, it demonstrates that native 1-bit LLMs can achieve performance comparable to leading open-weight, full-precision models of similar size, while offering substantial advantages in computational efficiency (memory, energy, latency).
➡️ Technical Report: BitNet b1.58 2B4T Technical Report
➡️ Official Inference Code: microsoft/BitNet (bitnet.cpp)
Several versions of the model weights are available on Hugging Face:

- `microsoft/bitnet-b1.58-2B-4T`: Contains the packed 1.58-bit weights optimized for efficient inference. Use this for deployment.
- `microsoft/bitnet-b1.58-2B-4T-bf16`: Contains the master weights in BF16 format. Use this only for training or fine-tuning purposes (a minimal loading sketch follows this list).
- `microsoft/bitnet-b1.58-2B-4T-gguf`: Contains the model weights in GGUF format, compatible with the bitnet.cpp library for CPU inference.
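For the fine-tuning use case, the sketch below only covers loading the BF16 master weights; the training recipe itself (data pipeline, optimizer, any quantization-aware details) is not specified in this page and is left to the user.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the BF16 master weights for training/fine-tuning,
# not the packed 1.58-bit inference checkpoint.
model_id = "microsoft/bitnet-b1.58-2B-4T-bf16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # keep the master weights in BF16
    device_map="auto",
)
model.train()
```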
The architecture uses:

- BitLinear layers (BitNet framework).
- subln normalization.
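To make the BitLinear idea listed above concrete, here is a conceptual sketch following the quantization recipe described in the BitNet b1.58 papers (ternary absmean weight quantization, 8-bit per-token absmax activation quantization, straight-through estimator). It is illustrative only: the actual `BitNetModel` implementation in transformers works with pre-packed weights and specialized code paths, and the subln normalization applied before quantization is omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_quant(x: torch.Tensor) -> torch.Tensor:
    """Per-token absmax quantization of activations to 8 bits, then dequantized back."""
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    return (x * scale).round().clamp(-128, 127) / scale

def weight_quant(w: torch.Tensor) -> torch.Tensor:
    """Absmean quantization of weights to ternary {-1, 0, +1}, then dequantized back."""
    scale = 1.0 / w.abs().mean().clamp(min=1e-5)
    return (w * scale).round().clamp(-1, 1) / scale

class BitLinear(nn.Linear):
    """Drop-in replacement for nn.Linear that quantizes weights and activations on the fly."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight
        # Straight-through estimator: quantized values in the forward pass,
        # full-precision gradients in the backward pass.
        x_q = x + (activation_quant(x) - x).detach()
        w_q = w + (weight_quant(w) - w).detach()
        return F.linear(x_q, w_q, self.bias)
```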
**VERY IMPORTANT NOTE ON EFFICIENCY**

Please do NOT expect performance efficiency gains (in terms of speed, latency, or energy consumption) when using this model with the standard `transformers` library.

The current execution paths within `transformers` do not contain the specialized, highly optimized computational kernels required to leverage the advantages of the BitNet architecture. Running the model via `transformers` will likely result in inference speeds and energy usage comparable to, or potentially worse than, standard full-precision models within this framework, on both CPU and GPU.

While you might observe reduced memory usage due to the quantized weights, the primary computational efficiency benefits are not accessible through this standard `transformers` usage path.

To achieve the efficiency benefits demonstrated in the technical report, you MUST use the dedicated C++ implementation: bitnet.cpp.
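If you want to see the memory side of this trade-off on your own machine, one quick, illustrative check uses the standard `get_memory_footprint` utility on the loaded model:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/bitnet-b1.58-2B-4T", device_map="auto")

# Size of the loaded parameters and buffers in memory, in bytes.
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```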
```bash
pip install transformers accelerate
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/bitnet-b1.58-2B-4T"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
)

# Apply the chat template
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How are you?"},
]
chat_input = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate response
chat_outputs = model.generate(chat_input, max_new_tokens=50)
response = tokenizer.decode(chat_outputs[0][chat_input.shape[-1]:], skip_special_tokens=True)  # Decode only the response part
print("\nAssistant Response:", response)
```
[[autodoc]] BitNetConfig
[[autodoc]] BitNetModel - forward
[[autodoc]] BitNetForCausalLM - forward