LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch. Jupyter Notebook; updated Jul 24, 2023.
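The core of LoRA, common to the PyTorch implementations listed here, is to freeze the pretrained weight W and learn only a low-rank update (alpha / r) * B @ A. A minimal NumPy sketch of the forward pass (not any specific repository's code; all names here are illustrative):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass of a LoRA-adapted linear layer.

    W (d_out x d_in) is the frozen pretrained weight; the trainable update
    (alpha / r) * B @ A has rank at most r, with A (r x d_in) and B (d_out x r).
    """
    return x @ (W + (alpha / r) * (B @ A)).T

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 4, 2, 16
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01  # small random init, as in the paper
B = np.zeros((d_out, r))                   # zero init: the update starts at 0
x = rng.standard_normal((3, d_in))

# With B initialized to zero, the adapted layer matches the frozen layer.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

Because B starts at zero, training begins exactly at the pretrained model; only A and B (r * (d_in + d_out) parameters) receive gradients.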
[SIGIR'24] The official implementation of MOELoRA.
Repository for Chat LLaMA: training a LoRA for LLaMA (1 or 2) models on Hugging Face with 8-bit or 4-bit quantization. Research only.
A simple, neat implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPT). [Research purposes]
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
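GaLore's key idea is to keep optimizer state in a low-rank subspace of the gradient rather than adding adapter weights: the gradient matrix is projected with its top-r singular directions, optimizer statistics live in the compact space, and updates are projected back. A minimal NumPy sketch of that projection step (an illustration of the idea under these assumptions, not the official implementation):

```python
import numpy as np

def galore_project(grad, r):
    """Project a gradient matrix onto its top-r left singular subspace.

    Returns the projector P (m x r) and the compact gradient P.T @ grad
    (r x n). Optimizer state (e.g., Adam moments) is stored for the
    compact gradient, and P maps updates back to full size.
    """
    U, _, _ = np.linalg.svd(grad, full_matrices=False)
    P = U[:, :r]
    return P, P.T @ grad

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))
P, G_low = galore_project(G, r=4)
update_full = P @ G_low   # best rank-4 approximation of G (Eckart-Young)
```

The memory saving comes from the optimizer state shrinking from m x n (64 x 32 here) to r x n (4 x 32) plus the m x r projector, which is refreshed only periodically.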
An AI-driven tool for generating unique video game characters using Stable Diffusion, DreamBooth, and LoRA adaptations; produces customizable, high-quality character designs tailored for game developers and artists.
Music-gen model fine-tuned to generate music in the style of the Violet Evergarden Original Soundtrack.
A Low-Rank Adaptation of a pretrained Stable Diffusion model that generates background scenery. Trained with PyTorch, and deployed with AWS EC2 and Ngrok.
Efficient fine-tuning of a large language model (LLM) for sentiment analysis on the IMDB dataset.
Stability AI's SD-Turbo model fine-tuned using LoRA on Magic: The Gathering artwork.
Low Rank Approximation (Adaptation) Methods in Neural Networks
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).