My lab work for the "Generative AI with Large Language Models" course offered by DeepLearning.AI and Amazon Web Services on Coursera.
Music-gen model fine-tuned to generate music in the style of the Violet Evergarden Original Soundtrack.
This repository contains a collection of generative AI models and applications designed for various tasks such as text generation, image synthesis, and style transfer. The models leverage cutting-edge architectures like GPT, GANs, and VAEs, enabling users to explore different generative tasks.
Stability AI SD-Turbo model fine-tuned using LoRA on Magic: The Gathering artwork.
OhanashiGPT is an application that generates personalized children's stories based on parameters like age and preferences. It narrates these stories using an AI-generated voice that mimics a parent, trained on their audio samples. The app also creates illustrations to accompany each story, providing a unique and engaging experience for children.
Low Rank Approximation (Adaptation) Methods in Neural Networks
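For context, the linear-algebra idea underlying these methods is replacing or updating a weight matrix with a low-rank factorization. A minimal, purely illustrative sketch using a truncated SVD (the 64x64 matrix and rank 8 are arbitrary assumptions, not taken from any of the repositories listed here):

```python
# Rank-r approximation of a matrix W via truncated SVD: keep only the top-r
# singular values/vectors, giving the best rank-r approximation in Frobenius norm.
import numpy as np

def low_rank_approx(W, r):
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]

W = np.random.randn(64, 64)       # stand-in for a weight matrix
W_r = low_rank_approx(W, r=8)
print(np.linalg.norm(W - W_r) / np.linalg.norm(W))  # relative approximation error
```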
Tools and methods for fine-tuning the Gemma 2 model on custom datasets.
Explanation of Programming Errors using Open-source LLMs
A Low-Rank Adaptation of a pretrained Stable Diffusion model that generates background scenery. Trained with PyTorch, and deployed with AWS EC2 and Ngrok.
Long-term project building a custom AI architecture. Incorporates cutting-edge machine learning techniques such as FlashAttention, Grouped-Query Attention, ZeRO-Infinity, BitNet, etc.
Advanced AI-driven tool for generating unique video game characters using Stable Diffusion, DreamBooth, and LoRA adaptations. Enhances creativity with customizable, high-quality character designs tailored specifically for game developers and artists.
Efficiently fine-tuned large language model (LLM) for sentiment analysis on the IMDB dataset.
A simple, neat implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPTs). [Research purposes]
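To illustrate what such LoRA implementations typically do, here is a minimal, hypothetical PyTorch sketch (not code from the repository above): the pretrained weight stays frozen and only two small matrices A and B are trained, with B zero-initialized so training starts from the unmodified model.

```python
# Core LoRA idea: the adapted layer computes base(x) + scaling * x @ A^T @ B^T,
# where A (r x in) and B (out x r) are the only trainable parameters.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                      # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))    # zero init => no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```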
A curated list of Parameter-Efficient Fine-Tuning papers, each with a TL;DR.
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
Fine-tuning Mistral-7B with PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation) on the Puffin dataset (multi-turn conversations between GPT-4 and real humans).
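A minimal sketch of that kind of setup with the Hugging Face PEFT library might look as follows; the model ID, rank, and target modules are illustrative assumptions, not the repository's actual configuration.

```python
# Wrap a causal LM with a LoRA adapter using Hugging Face PEFT.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed model ID

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update matrices (assumed)
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # only the LoRA matrices are trainable
```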
Easy wrapper for inserting LoRA layers in CLIP.