candle-examples/examples/llava/readme.md
LLaVA (Large Language-and-Vision Assistant) is an end-to-end trained large multimodal model. This example is ported from candle-llava.
The code is based on https://github.com/haotian-liu/LLaVA, so the llava-hf version of the config may behave slightly differently.
This example has currently been tested with liuhaotian/llava-v1.6-vicuna-7b and
llava-hf/llava-v1.6-vicuna-7b-hf. Memory usage has not yet been optimized.
The llava-hf models include a tokenizer.json file, so they can be used directly with
the --hf command line flag.
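As a rough illustration of what the --hf path relies on, the sketch below fetches the tokenizer.json shipped in the llava-hf repo with the hf-hub crate and loads it with the tokenizers crate. This is a minimal standalone sketch, not the example's actual loading code; the use of anyhow for error handling is an assumption.

```rust
use hf_hub::api::sync::Api;
use tokenizers::Tokenizer;

fn main() -> anyhow::Result<()> {
    // The llava-hf repos ship tokenizer.json, so it can be pulled straight from the Hub.
    let api = Api::new()?;
    let repo = api.model("llava-hf/llava-v1.6-vicuna-7b-hf".to_string());
    let tokenizer_path = repo.get("tokenizer.json")?; // downloads and caches the file
    let tokenizer = Tokenizer::from_file(tokenizer_path).map_err(anyhow::Error::msg)?;
    println!("vocab size: {}", tokenizer.get_vocab_size(true));
    Ok(())
}
```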
For the original llava models, you can generate the tokenizer.json file with the following commands.

```bash
conda create -n llava python=3.10
conda activate llava
pip install transformers protobuf
python -c "from transformers import AutoTokenizer; tokenizer = AutoTokenizer.from_pretrained('liuhaotian/llava-v1.6-vicuna-7b'); tokenizer.save_pretrained('tokenizer')"
```

The tokenizer.json file should then be at tokenizer/tokenizer.json, which is the default path used by this example.
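To check that the exported tokenizer works, a small sketch like the one below (assuming the default tokenizer/tokenizer.json path and the tokenizers crate) loads it and encodes the same prompt used in the run commands further down.

```rust
use tokenizers::Tokenizer;

fn main() -> anyhow::Result<()> {
    // Load the tokenizer exported by the Python one-liner above.
    let tokenizer =
        Tokenizer::from_file("tokenizer/tokenizer.json").map_err(anyhow::Error::msg)?;
    // `true` adds special tokens to the encoded prompt.
    let encoding = tokenizer
        .encode("is this a cat?", true)
        .map_err(anyhow::Error::msg)?;
    println!("token ids: {:?}", encoding.get_ids());
    Ok(())
}
```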
Then run the example:

```bash
# Default args, using llava-hf/llava-v1.6-vicuna-7b-hf. --image-file is required.
cargo run --example llava --features cuda -- --image-file "llava_logo.png" --prompt "is this a cat?" --hf

# Using liuhaotian/llava-v1.6-vicuna-7b; requires the tokenizer setup described above.
cargo run --example llava --features cuda -- --model-path liuhaotian/llava-v1.6-vicuna-7b --image-file "llava_logo.png" --prompt "is this a cat?"
```
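Both invocations require --image-file. The sketch below is only an illustration of how such an image could be decoded into a candle tensor with the image crate; it is not the example's actual preprocessing, which additionally resizes and normalizes the image for the vision encoder, and the `load_image_tensor` helper name is hypothetical.

```rust
use candle_core::{DType, Device, Tensor};

// Hypothetical helper: decode an RGB image file into a CHW f32 tensor in [0, 1].
fn load_image_tensor(path: &str, device: &Device) -> anyhow::Result<Tensor> {
    let img = image::open(path)?.to_rgb8(); // decode to 8-bit RGB
    let (width, height) = img.dimensions();
    let data = img.into_raw(); // HWC layout, length = height * width * 3
    let tensor = Tensor::from_vec(data, (height as usize, width as usize, 3), device)?
        .permute((2, 0, 1))? // HWC -> CHW, as vision models usually expect
        .to_dtype(DType::F32)?;
    let tensor = (tensor / 255.0)?; // scale pixel values to [0, 1]
    Ok(tensor)
}

fn main() -> anyhow::Result<()> {
    let tensor = load_image_tensor("llava_logo.png", &Device::Cpu)?;
    println!("image tensor shape: {:?}", tensor.dims());
    Ok(())
}
```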