Finetuning Transformer Models
Master the art of LLM finetuning with LoRA, QLoRA, and Hugging Face. Learn how to prepare, train and optimize models for specific tasks efficiently.
About this course
Finetuning is an essential skill in the world of Large Language Models (LLMs), allowing you to customize pre-trained transformer models for specific tasks. This course will guide you through the practical process of finetuning with techniques such as LoRA, quantization, and QLoRA, built on Hugging Face libraries and the Mistral series of open-weight LLMs. You’ll learn about the life cycle of a finetuning project, GPU usage in deep learning, and parameter-efficient finetuning (PEFT).
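To give a flavor of what parameter-efficient finetuning looks like in practice, here is a minimal, illustrative sketch (not the course's own code) that attaches LoRA adapters to a Mistral model using the Hugging Face transformers and peft libraries. The model name and hyperparameters are assumptions chosen for illustration.

```python
# Minimal LoRA finetuning setup with Hugging Face transformers + peft.
# Checkpoint name and hyperparameters are illustrative, not prescribed by the course.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # example open-weight Mistral checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA injects small trainable low-rank adapter matrices into the frozen base model.
lora_config = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,                         # scaling factor for the adapters
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the low-rank adapter matrices are trained while the base weights stay frozen, the memory footprint of finetuning drops dramatically compared with a full finetune.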
Skills you’ll gain
Learn how to prepare training data for finetuning
Understand how to perform full finetunes using Hugging Face
Get hands-on experience with PEFT and 4-bit quantization techniques (sketched below)
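For the 4-bit quantization skill above, the following is a hedged sketch of a QLoRA-style setup: the base model is loaded in 4-bit precision via transformers' BitsAndBytesConfig before LoRA adapters are attached on top. The checkpoint name and settings are illustrative assumptions, not course-specified values, and a CUDA GPU plus the bitsandbytes package are assumed.

```python
# Loading a model in 4-bit for QLoRA-style finetuning (illustrative sketch).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for stability
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # example checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
# LoRA adapters (as in the earlier sketch) are then attached to this frozen 4-bit base.
```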