
Fine-tuning

Fine-tuning involves taking a pre-trained large language model (LLM) and further training it on a smaller, task-specific dataset, adapting the model to perform better on a particular task or domain. However, fine-tuning can be resource-intensive and is not always the most efficient approach: prompt engineering, retrieval-augmented generation (RAG), or smaller specialized models can sometimes achieve comparable or even better results with lower computational overhead and smaller data requirements.
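The core idea can be sketched in a few lines of PyTorch. This is a toy stand-in for an LLM, not a real one: a "pre-trained" backbone is frozen and only a small task-specific head is trained on new data, which is one common, parameter-efficient form of fine-tuning. All model names and shapes here are illustrative assumptions, not part of any real checkpoint.

```python
# Minimal fine-tuning sketch: freeze a "pre-trained" backbone and train
# only a new task head on a small task-specific dataset.
# The tiny model below is a stand-in for a real pre-trained LLM.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend this backbone was pre-trained on a large corpus.
backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
head = nn.Linear(16, 2)  # new head for a 2-class downstream task

# Freeze the backbone so only the head's weights are updated.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Small task-specific dataset (random tensors, purely illustrative).
x = torch.randn(32, 8)
y = torch.randint(0, 2, (32,))

loss0 = loss_fn(model(x), y).item()  # loss before fine-tuning

for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

In practice the same pattern appears at scale: full fine-tuning updates all weights, while parameter-efficient methods (adapters, LoRA, head-only training as above) update only a small subset, trading some capacity for much lower compute and memory cost.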

Visit the following resources to learn more: