finetune/readme.md
We offer official scripts for easy fine-tuning of the pretrained MiniCPM-V 4.0, MiniCPM-o 2.6, MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5, and MiniCPM-V 2.0 on downstream tasks. Our fine-tuning scripts use the transformers Trainer and DeepSpeed by default.
To prepare your fine-tuning data, you should formulate each sample as a dictionary consisting of an id, an image path (or a dictionary of placeholders to image paths for multiple images), and a list of conversations. Then, save the data samples in JSON files.
For vision-language tasks, you must provide placeholders such as `<image>` or `<image_XX>` to define where the image embeddings are inserted within the conversation. If no placeholder is provided, the image is placed at the front of the conversation by default.
If your input consists of a single image, you can use the single placeholder `<image>` to indicate where the image should be inserted in the conversation.
<details>
<summary><b>Single image example (vl_finetune_data.json) with 1 sample.</b></summary>

```json
[
  {
    "id": "0",
    "image": "path/to/image_0.jpg",
    "conversations": [
      {
        "role": "user",
        "content": "<image>\nHow many desserts are on the white plate?"
      },
      {
        "role": "assistant",
        "content": "There are three desserts on the white plate."
      },
      {
        "role": "user",
        "content": "What type of desserts are they?"
      },
      {
        "role": "assistant",
        "content": "The desserts are cakes with bananas and pecans on top. They share similarities with donuts, but the presence of bananas and pecans differentiates them."
      },
      {
        "role": "user",
        "content": "What is the setting of the image?"
      },
      {
        "role": "assistant",
        "content": "The image is set on a table top with a plate containing the three desserts."
      }
    ]
  }
]
```

</details>
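If it helps, here is a minimal sketch for building and saving such samples programmatically; the file name matches the example above, and the sample content is illustrative:

```python
import json

# Build one training sample in the expected format: an id, an image path,
# and a list of role/content conversation turns with an <image> placeholder.
sample = {
    "id": "0",
    "image": "path/to/image_0.jpg",
    "conversations": [
        {"role": "user", "content": "<image>\nHow many desserts are on the white plate?"},
        {"role": "assistant", "content": "There are three desserts on the white plate."},
    ],
}

# Save a list of samples as the training JSON file.
with open("vl_finetune_data.json", "w", encoding="utf-8") as f:
    json.dump([sample], f, ensure_ascii=False, indent=2)
```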
For inputs containing multiple images, use a dictionary in which each key is a unique placeholder (e.g., `<image_00>`, `<image_01>`) and the corresponding image path is its value. These placeholders can then be used within the conversation to insert images at specific positions.
Additionally, to optimize resource management, especially when dealing with large batches of images during training or inference, consider reducing `max_slice_nums`. For example, in version 2.6, a single image is represented by 64 tokens. With `slice=9`, an image at the maximum resolution of 1344x1344 consumes nearly 64*(9+1) tokens. To minimize the number of tokens used per image, you can set `slice=1`, so that a single image is represented by just 64 tokens.
If the total token count exceeds `max_length`, truncation is applied. For multi-image supervised fine-tuning (SFT), setting `MODEL_MAX_LENGTH=4096` in your script is recommended for better performance.
<details>
<summary><b>Multiple images example (vl_finetune_data.json) with 1 sample.</b></summary>

```json
[
  {
    "id": "0",
    "image": {
      "<image_00>": "path/to/image_0.jpg",
      "<image_01>": "path/to/image_1.jpg",
      "<image_02>": "path/to/image_2.jpg",
      "<image_03>": "path/to/image_3.jpg"
    },
    "conversations": [
      {
        "role": "user",
        "content": "How to create such text-only videos using CapCut?\n<image_00>\n<image_01>\n<image_02>\n<image_03>\n"
      },
      {
        "role": "assistant",
        "content": "To create a text-only video as shown in the images, follow these steps in CapCut..."
      }
    ]
  }
]
```

</details>
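As a back-of-the-envelope check of the token budget discussed above, the following sketch applies the version 2.6 figures quoted earlier (exact counts can vary with image aspect ratio):

```python
def approx_image_tokens(max_slice_nums: int, tokens_per_slice: int = 64) -> int:
    """Rough visual token cost of one image in MiniCPM-V 2.6."""
    # With max_slice_nums=1 the image is not sliced, so only the single
    # 64-token representation is used; otherwise each slice plus one
    # global view costs tokens_per_slice tokens each.
    if max_slice_nums <= 1:
        return tokens_per_slice
    return tokens_per_slice * (max_slice_nums + 1)

# slice=9: ~640 tokens per image, so the 4-image sample above already uses
# ~2560 tokens -- one reason MODEL_MAX_LENGTH=4096 is recommended for
# multi-image SFT.
print(approx_image_tokens(9))  # 640
print(approx_image_tokens(1))  # 64
```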
Full-parameter fine-tuning updates all parameters of the LLM throughout the training process. Please specify the correct MODEL path, DATA path, and LLM_TYPE in the shell scripts.
```shell
MODEL="MiniCPM-o-2_6" # or "openbmb/MiniCPM-V-2_6", "openbmb/MiniCPM-Llama3-V-2_5", "openbmb/MiniCPM-V-2"
DATA="path/to/training_data.json"
EVAL_DATA="path/to/test_data.json"
LLM_TYPE="qwen" # llama for MiniCPM-V-4, minicpm for MiniCPM-V-2, llama3 for MiniCPM-Llama3-V-2_5, qwen for MiniCPM-o-2_6/MiniCPM-V-2_6
```
To launch your training, run the following script:
```shell
sh finetune_ds.sh
```
LoRA allows lightweight model tuning, updating only a small subset of parameters. We provide a LoRA implementation based on peft. To launch your training, run the following script:
```shell
sh finetune_lora.sh
```
After training, you can load the model by passing the path to the adapter. We advise you to use an absolute path for your pretrained model, because LoRA saves only the adapter, and the absolute path stored in the adapter configuration JSON file is used to locate the pretrained model to load.
```python
from peft import PeftModel
from transformers import AutoModel

model_type = "openbmb/MiniCPM-o-2_6"  # or "openbmb/MiniCPM-V-2_6", "openbmb/MiniCPM-Llama3-V-2_5", "openbmb/MiniCPM-V-2"
path_to_adapter = "path_to_your_fine_tuned_checkpoint"

model = AutoModel.from_pretrained(
    model_type,
    trust_remote_code=True
)

lora_model = PeftModel.from_pretrained(
    model,
    path_to_adapter,
    device_map="auto",
    trust_remote_code=True
).eval().cuda()
```
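Optionally, if you want a standalone checkpoint that no longer depends on the adapter files, peft can merge the LoRA weights into the base model. A minimal sketch (the output path is a placeholder):

```python
# Merge the LoRA weights into the base model and drop the peft wrappers;
# the merged model can then be saved and reloaded without the adapter.
merged_model = lora_model.merge_and_unload()
merged_model.save_pretrained("path/to/merged_model", safe_serialization=True)
```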
The following table presents the memory usage of the model when fine-tuning on NVIDIA A100 (80GiB) GPUs with different numbers of GPUs. The fine-tuning was performed with DeepSpeed ZeRO-3 optimization and gradient checkpointing, offloading both optimizer and parameter memory to the CPU, with the maximum length set to 2048 and the batch size set to 1. You can adjust the DeepSpeed ZeRO stage (see the configurations below) to reduce memory cost.
| Fine-tuning Method | GPUs: 2 | GPUs: 4 | GPUs: 8 |
|---|---|---|---|
| LoRA Fine-tuning | 14.4 GiB | 13.6 GiB | 13.1 GiB |
| Full Parameters Fine-tuning | 16.0 GiB | 15.8 GiB | 15.63 GiB |
<details>
<summary>Q: How do we resolve Out of Memory (OOM) issues during training?</summary>

A: When you face Out of Memory (OOM) issues while training large models, the following strategies may help resolve or mitigate the problem:
- Reduce `model_max_length`: Decreasing the maximum sequence length the model processes can significantly reduce the memory required for each operation, for example from 2048 down to 1200 or another value suitable for your dataset: `--model_max_length 1200`
- Reduce `batch_size`: Processing less data in each batch decreases memory consumption: `--batch_size 1`
- Reduce `max_slice_nums`: When handling large images, lowering the number of slices processed per image reduces memory requirements: `--max_slice_nums 9`
- Freeze the vision encoder: If you do not need to train the vision module, freezing its parameters saves memory: `--tune_vision false`
- Adjust the DeepSpeed ZeRO stage: Offload optimizer state (ZeRO-2), or both optimizer state and parameters (ZeRO-3), to CPU memory, as in the configurations below.
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
}
}
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
}
}
You can visit the Hugging Face DeepSpeed documentation to learn more about how to use DeepSpeed.
</details>

<details>
<summary>Q: Encounter an error while using AutoPeftModelForCausalLM to load a checkpoint that has undergone LoRA fine-tuning</summary>

A: The error, as described in issue 168, occurs because the model lacks the get_input_embeddings and set_input_embeddings methods. Follow these steps to resolve this issue:
1. Reload the fine-tuned model: Make sure you correctly load the checkpoint that has been fine-tuned with LoRA. Use the following code example as a guide:
```python
from peft import AutoPeftModel

path_to_adapter = "path_to_your_fine_tuned_checkpoint"

model = AutoPeftModel.from_pretrained(
    # path to the output directory
    path_to_adapter,
    device_map="auto",
    trust_remote_code=True
).eval().cuda()
```
2. Update the model_minicpmv.py file: Make sure your model_minicpmv.py file is the latest version, and copy the updated file into your project.
</details>

<details>
<summary>Q: How can I use Flash Attention 2 to accelerate the model?</summary>

A: If your environment supports flash_attn2, you can add the argument _attn_implementation="flash_attention_2" when loading a model with the AutoModel.from_pretrained method. For example:
```python
model = AutoModel.from_pretrained('model_name', _attn_implementation="flash_attention_2")
```
</details>

<details>
<summary>Q: Do images need to be resized before fine-tuning?</summary>

A: Our model supports up to 1344x1344 lossless encoding. If you are currently resizing your images to 512, you might want to try using the original image sizes instead. Our system automatically applies a high-definition image encoding scheme by default.
</details>

<details>
<summary>Q: What should we do if we encounter out-of-memory (OOM) errors?</summary>

A: If you experience OOM issues, consider reducing the batch size (bs). To maintain an equivalent total batch size, you can adjust the gradient_accumulation_steps setting. This approach allows you to manage memory usage effectively while still processing the desired amount of data per training step.
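As a quick sanity check of that trade-off, a sketch with illustrative numbers:

```python
# The effective (total) batch size per optimizer step is:
#   per_device_batch_size * gradient_accumulation_steps * num_gpus
per_device_batch_size = 1          # reduced to fit in memory
gradient_accumulation_steps = 8    # raised to compensate
num_gpus = 4
effective_batch = per_device_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch)  # 32 -- same as batch size 8 with no accumulation on 4 GPUs
```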
</details>

<details>
<summary>Q: How do we choose an appropriate model_max_length for our data?</summary>

A: We recommend sampling the token lengths of your training data (a rough sketch is given below). Note that the input_ids length includes the image portion. Once you determine the maximum length, you can specify it in the startup command using --model_max_length xxx.
Additionally, if you prefer not to train the vision encoder, you can add --tune_vision false to your command.
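If you need a quick estimate, the following sketch tokenizes only the conversation text; it is a hypothetical example and undercounts, since the real input_ids also include the image tokens discussed above:

```python
import json

from transformers import AutoTokenizer

# Tokenizer for the model being fine-tuned; trust_remote_code is required
# for the custom MiniCPM tokenizer code.
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True)

with open("path/to/training_data.json", encoding="utf-8") as f:
    data = json.load(f)

# Text-only token length per sample (a lower bound: image slices add more tokens).
lengths = [
    len(tokenizer("".join(turn["content"] for turn in sample["conversations"])).input_ids)
    for sample in data
]
print("max:", max(lengths), "median:", sorted(lengths)[len(lengths) // 2])
```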
</details>

<details>
<summary>Q: How should training hyperparameters be adjusted when using LoRA?</summary>

A: You can refer to the LoRA documentation for guidance on adjusting your training hyperparameters when using LoRA. This documentation provides detailed information on configuring the various parameters specific to the LoRA adaptation technique.
</details>

To tailor the training process to your specific requirements, you can adjust various hyperparameters. For comprehensive documentation of the available hyperparameters and their functionality, refer to the official Transformers documentation and the LoRA documentation. Experimentation and tuning of these parameters are essential for achieving optimal model performance on your specific task and dataset.