# ExllamaV2 GPTQ Inference Framework

The customized ExllamaV2 kernels are integrated into FastChat to provide faster GPTQ inference.

Note: ExllamaV2 does not yet support the embedding REST API.

## Install ExllamaV2

Set up the environment (refer to the ExllamaV2 repository for more details):

```bash
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -e .
```
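
To confirm the package built and installed correctly, a quick import check is enough (this assumes `exllamav2` exposes a `__version__` attribute; even without it, the bare import will fail if the extension did not build):

```bash
# Sanity check: the import fails if the CUDA extension did not build correctly.
python3 -c "import exllamav2; print(exllamav2.__version__)"
```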

Chat with the CLI:

```bash
python3 -m fastchat.serve.cli \
    --model-path models/vicuna-7B-1.1-GPTQ-4bit-128g \
    --enable-exllama
```

Start model worker:

```bash
# Download quantized model from huggingface
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g models/vicuna-7B-1.1-GPTQ-4bit-128g

# Load model with default configuration (max sequence length 4096, no GPU split setting).
python3 -m fastchat.serve.model_worker \
    --model-path models/vicuna-7B-1.1-GPTQ-4bit-128g \
    --enable-exllama

# Load model with max sequence length 2048, allocating 18 GB to CUDA:0 and 24 GB to CUDA:1.
python3 -m fastchat.serve.model_worker \
    --model-path models/vicuna-7B-1.1-GPTQ-4bit-128g \
    --enable-exllama \
    --exllama-max-seq-len 2048 \
    --exllama-gpu-split 18,24
```

`--exllama-cache-8bit` can be used to enable 8-bit caching with ExLlama and save some VRAM.
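
For example, the options above can be combined with 8-bit caching in a single worker invocation (a sketch based on the flags documented on this page):

```bash
python3 -m fastchat.serve.model_worker \
    --model-path models/vicuna-7B-1.1-GPTQ-4bit-128g \
    --enable-exllama \
    --exllama-max-seq-len 2048 \
    --exllama-gpu-split 18,24 \
    --exllama-cache-8bit
```

Once the worker is running, it can be served and tested through the usual FastChat stack. The host, port, and model name below are assumptions for a default setup (the model name typically defaults to the last component of `--model-path`):

```bash
# Start the controller and the OpenAI-compatible API server in separate terminals
# (the controller should be running before the model worker registers with it).
python3 -m fastchat.serve.controller
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000

# Send a test chat completion request to the ExLlama-backed worker.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "vicuna-7B-1.1-GPTQ-4bit-128g",
    "messages": [{"role": "user", "content": "Hello, who are you?"}]
  }'
```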

## Performance

Reference: https://github.com/turboderp/exllamav2#performance

| Model     | Mode         | Size | grpsz | act | V1: 3090Ti | V1: 4090 | V2: 3090Ti | V2: 4090 |
|-----------|--------------|------|-------|-----|------------|----------|------------|----------|
| Llama     | GPTQ         | 7B   | 128   | no  | 143 t/s    | 173 t/s  | 175 t/s    | 195 t/s  |
| Llama     | GPTQ         | 13B  | 128   | no  | 84 t/s     | 102 t/s  | 105 t/s    | 110 t/s  |
| Llama     | GPTQ         | 33B  | 128   | yes | 37 t/s     | 45 t/s   | 45 t/s     | 48 t/s   |
| OpenLlama | GPTQ         | 3B   | 128   | yes | 194 t/s    | 226 t/s  | 295 t/s    | 321 t/s  |
| CodeLlama | EXL2 4.0 bpw | 34B  | -     | -   | -          | -        | 42 t/s     | 48 t/s   |
| Llama2    | EXL2 3.0 bpw | 7B   | -     | -   | -          | -        | 195 t/s    | 224 t/s  |
| Llama2    | EXL2 4.0 bpw | 7B   | -     | -   | -          | -        | 164 t/s    | 197 t/s  |
| Llama2    | EXL2 5.0 bpw | 7B   | -     | -   | -          | -        | 144 t/s    | 160 t/s  |
| Llama2    | EXL2 2.5 bpw | 70B  | -     | -   | -          | -        | 30 t/s     | 35 t/s   |
| TinyLlama | EXL2 3.0 bpw | 1.1B | -     | -   | -          | -        | 536 t/s    | 635 t/s  |
| TinyLlama | EXL2 4.0 bpw | 1.1B | -     | -   | -          | -        | 509 t/s    | 590 t/s  |