# Fine-tuning for image classification using LoRA and 🤗 PEFT

examples/image_classification/README.md

## Vision Transformer model from transformers

We provide a notebook (image_classification_peft_lora.ipynb) where we learn how to use LoRA from 🤗 PEFT to fine-tune an image classification model by ONLY using 0.7% of the original trainable parameters of the model.

LoRA adds low-rank "update matrices" to certain blocks in the underlying model (in this case the attention blocks) and ONLY trains those matrices during fine-tuning. During inference, these update matrices are merged with the original model parameters. For more details, check out the original LoRA paper.

## PoolFormer model from timm

The notebook image_classification_timm_peft_lora.ipynb showcases fine-tuning an image classification model from the timm library. Again, LoRA is used to reduce the number of trainable parameters to a fraction of the total.