Fine-Tuning LLMs with LoRA and QLoRA

Project Overview

This project explores fine-tuning of large language models (LLMs) using LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation). Instead of updating all model weights, these techniques freeze the base model and train small low-rank adapter matrices; QLoRA additionally quantizes the frozen base weights to 4-bit precision. This sharply reduces the memory and compute needed for fine-tuning, making it practical in resource-constrained environments. The notebook provides an in-depth analysis and implementation of both approaches.
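
As a concrete illustration, the sketch below attaches LoRA adapters to a causal language model with the Hugging Face peft library. It is a minimal example, not the notebook's exact code: the base model name, target modules, and hyperparameters are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model (illustrative choice; the notebook may use a different one).
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA approximates the weight update dW with a low-rank product B @ A (rank r),
# so only the small A and B matrices are trained while the base weights stay frozen.
lora_config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; model-specific
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters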

Features

  • LoRA and QLoRA Techniques: Fine-tunes LLMs with parameter-efficient methods that cut the trainable-parameter count and memory footprint (see the QLoRA sketch after this list).
  • Detailed Analysis: Includes comprehensive evaluations of performance improvements and resource utilization.
  • Extensible Framework: The code can be adapted to other LLMs and fine-tuning scenarios.
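
QLoRA extends the LoRA setup above by loading the frozen base model in 4-bit precision. The sketch below shows the usual pattern with transformers' BitsAndBytesConfig; as before, the model name and hyperparameters are assumptions for illustration, not the notebook's exact configuration.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the frozen base weights to 4-bit NF4; compute still runs in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "gpt2",                      # illustrative; QLoRA is usually applied to much larger models
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training, then attach LoRA adapters on top.
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_config)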

Requirements

To run this notebook, you will need:

  • Python 3.7 or higher
  • PyTorch 1.8 or newer
  • Transformers library by Hugging Face

Installation

Clone the repository to your local machine:

git clone https://github.com/Apoorva-Udupa/Fine-Tuning_LLMs_with_LoRA_and_QLoRA.git
cd Fine-Tuning_LLMs_with_LoRA_and_QLoRA
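
The repository does not pin its dependencies, so install them manually. The packages beyond PyTorch and Transformers are an assumption based on the techniques the notebook uses (peft for LoRA adapters, bitsandbytes for 4-bit quantization):

pip install torch transformers peft bitsandbytes accelerate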
