Fine-tuning LLMs with LoRA
Updated Jun 25, 2024 - Jupyter Notebook
Fine-tuned FLAN-T5 using full instruction fine-tuning, LoRA-based PEFT, and RLHF with PPO
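The LoRA technique used in several of these projects can be sketched minimally: instead of updating a full weight matrix W, a low-rank update B·A (rank r much smaller than the layer dimensions) is trained while W stays frozen. The pure-Python illustration below is a hypothetical sketch of the idea, not code from any listed repository; the class and helper names are ours.

```python
import random

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

class LoRALinear:
    """Frozen linear layer W plus a trainable low-rank adapter B @ A."""

    def __init__(self, W, r, alpha):
        d_out, d_in = len(W), len(W[0])
        self.W = W              # frozen pretrained weight (d_out x d_in)
        self.scale = alpha / r  # standard LoRA scaling factor
        # A starts random, B starts at zero, so the adapter is a no-op
        # at initialization and the model's original behavior is preserved.
        self.A = [[random.gauss(0.0, 0.02) for _ in range(d_in)] for _ in range(r)]
        self.B = [[0.0] * r for _ in range(d_out)]

    def forward(self, x):
        base = matvec(self.W, x)                   # frozen path: W x
        delta = matvec(self.B, matvec(self.A, x))  # adapter path: B (A x)
        return [b + self.scale * d for b, d in zip(base, delta)]
```

Only A and B (r * (d_in + d_out) values) would be trained, which is why LoRA cuts the number of trainable parameters so sharply compared with full fine-tuning.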
Research project: Evaluate unsupervised text deidentification methods from "Unsupervised Text Deidentification" by Morris et al., using the WikiBio dataset and fine-tuned RoBERTa models. The goal is to compare our results with the paper’s findings.
Fine-tuning LLaMA 2 for toxicity classification on a balanced Kaggle dataset, with a focus on overcoming class imbalance, optimizing computational efficiency through PEFT and QLoRA, and achieving high accuracy in detecting toxic content across multiple classes.
QLoRA instruction fine-tuning of Llama 2 7B Chat on Alteon CLI commands
Fine-tuned OpenAI gpt-3.5-turbo to mimic sarcasm. Developed a custom knowledge base for a RAG system.
A C++ framework for efficient training and fine-tuning of LLMs
Nuvola Chatbot is a Streamlit-based web app utilizing Google Cloud's Nuvola chatbot powered by LLaMA2 models. It provides interactive assistance on Google Cloud Platform services. Customize responses using temperature, top-p, and max length settings. Easy setup with Streamlit and Replicate.
The Large Language Model for Hydrogen Storage project uses advanced natural language processing to improve research efficiency.
Inspired by the paper "Searching for Best Practices in Retrieval-Augmented Generation" by Wang et al., this repository is dedicated to searching for the best RAG strategy.
LLM-based medical and mental health assistant
Finetuning-LLM
Model Recipe for El-Emperador
Explanation of Programming Errors using Open-source LLMs
This repo contains influential papers that apply fine-tuning techniques to adapt LLMs to domain-specific tasks.
A basic illustration of how to fine-tune an LLM on your own dataset.
A JSONL generator to create training data for GPT-3.5 and newer models
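For context, OpenAI's fine-tuning API for gpt-3.5-turbo and newer expects chat-format JSONL: one JSON object per line, each containing a `messages` array of system/user/assistant turns. A minimal sketch of such a generator, assuming simple (user, assistant) pairs as input (the `to_jsonl` helper is hypothetical, not the repository's actual interface):

```python
import json

def to_jsonl(examples, system_prompt):
    """Render (user, assistant) pairs as chat-format JSONL lines."""
    lines = []
    for user_msg, assistant_msg in examples:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg},
                {"role": "assistant", "content": assistant_msg},
            ]
        }
        # ensure_ascii=False keeps non-ASCII text readable in the file
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

# Example: two training pairs sharing one system prompt.
data = [("What's the weather?", "Oh, just perfect. Said no one ever."),
        ("Is Monday good?", "Monday. Everyone's favorite day.")]
jsonl_text = to_jsonl(data, "Respond sarcastically.")
```

Each resulting line can be validated independently with `json.loads` before uploading the file for a fine-tuning job.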
This repository showcases Python scripts demonstrating interactions with various models using the LangChain library. From fine-tuning to custom runnables, explore examples with Gemini, Hugging Face, and Mistral AI models.