Build LLM-enabled FastAPI applications without build configuration.
Updated Oct 13, 2024 - Python
Explore practical fine-tuning of LLMs with Hands-on LoRA. Dive into examples that showcase efficient model adaptation across diverse tasks.
A Retrieval-Augmented Generation (RAG) system that runs entirely locally, combining document retrieval with language-model generation to produce accurate, contextually relevant responses. Built with @langchain-ai
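The retrieve-then-generate flow behind a local RAG pipeline can be sketched in a few lines. This is a toy illustration with hypothetical helper names: a real system would rank chunks by embedding similarity (e.g. via LangChain) and pass the prompt to a local LLM, rather than use word overlap and `print`.

```python
# Toy sketch of local RAG: retrieve the most relevant document,
# then build a prompt that grounds the model's answer in it.
# Word-overlap scoring stands in for real embedding similarity.

def _words(text: str) -> set[str]:
    """Lowercase and strip basic punctuation before splitting."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    q = _words(query)
    return max(documents, key=lambda d: len(q & _words(d)))

def build_prompt(query: str, context: str) -> str:
    """Instruct the model to answer only from the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Gemma is a family of lightweight open models from Google.",
    "FastAPI is a modern Python web framework.",
]
context = retrieve("What is Gemma?", docs)
print(build_prompt("What is Gemma?", context))
```

In a full pipeline the printed prompt would be sent to the local model; keeping retrieval and generation as separate steps is what lets the whole loop run offline.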
The 8th National Workers' Vocational Skills Competition, Artificial Intelligence Trainer event
Tools and methods for fine-tuning the Gemma 2 model on custom datasets
A Vision Language Model implemented in PyTorch
Welcome to the Exploding Population Myths 1995 repository! This project leverages the Google Gemma model to analyze and debunk common population myths from a 1995 book, providing valuable insight into historical population trends.
A LangChain application that uses the open-source gemma2:2b LLM.
Converts natural-language questions into SQL queries.
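Natural-language-to-SQL systems typically wrap the question in a few-shot prompt that shows the model the schema and some example translations. A minimal sketch, assuming a hypothetical `employees` schema and example pairs; a real project would send the prompt to an LLM such as Gemma and execute the returned SQL.

```python
# Sketch of a few-shot NL-to-SQL prompt. Schema and examples are
# illustrative placeholders, not from any particular repository.

SCHEMA = "CREATE TABLE employees (id INT, name TEXT, salary INT);"

EXAMPLES = [
    ("List all employee names", "SELECT name FROM employees;"),
    ("How many employees are there?", "SELECT COUNT(*) FROM employees;"),
]

def build_sql_prompt(question: str) -> str:
    """Assemble schema + worked examples + the new question."""
    shots = "\n".join(f"Q: {q}\nSQL: {s}" for q, s in EXAMPLES)
    return (
        f"Translate the question into SQL for this schema.\n{SCHEMA}\n"
        f"{shots}\nQ: {question}\nSQL:"
    )

print(build_sql_prompt("Who earns more than 50000?"))
```

Ending the prompt at `SQL:` nudges the model to complete with the query itself rather than with prose.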
Fine-tune the Gemma 2B language model on a climate-related question-answer dataset to improve its domain-specific knowledge using LoRA (Low-Rank Adaptation).
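The core idea of LoRA is that instead of updating a full weight matrix W, training learns a low-rank correction B @ A (rank r much smaller than the matrix dimensions) and applies W' = W + (alpha / r) * B @ A. A toy numeric sketch with 2x2 matrices, purely to show the arithmetic; real fine-tuning uses a library such as peft, not hand-rolled matmuls.

```python
# LoRA in miniature: a frozen weight matrix plus a scaled rank-1 update.
# All numbers are made up for illustration.

def matmul(a, b):
    """Plain-Python matrix multiply for small nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (2x2)
B = [[1.0], [0.0]]             # trainable, shape (2, r) with r = 1
A = [[0.0, 2.0]]               # trainable, shape (r, 2)
alpha, r = 2.0, 1

delta = matmul(B, A)           # low-rank update B @ A, shape (2, 2)
W_adapted = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(2)]
             for i in range(2)]
print(W_adapted)               # [[1.0, 4.0], [0.0, 1.0]]
```

Only B and A (2r numbers per row/column pair) are trained, which is why LoRA needs a small fraction of the memory of full fine-tuning.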
This project demonstrates the steps required to fine-tune the Gemma model for tasks like code generation. We use QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
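The memory saving in QLoRA comes from storing the frozen base weights in a few bits with a shared scale and dequantizing on the fly. A minimal sketch of that idea using symmetric int4 with one per-tensor scale; real QLoRA uses 4-bit NormalFloat with blockwise scales via bitsandbytes, so this is an approximation of the concept only.

```python
# Toy symmetric 4-bit quantization: map floats to integers in [-7, 7]
# with one shared scale, then reconstruct approximate floats.

def quantize_int4(weights: list[float]) -> tuple[list[int], float]:
    """Quantize to int4 range with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from int codes."""
    return [v * scale for v in q]

w = [0.42, -1.4, 0.6, 0.0]
q, scale = quantize_int4(w)
approx = dequantize(q, scale)
print(q, approx)
```

Each weight now occupies 4 bits instead of 16 or 32, and the small LoRA adapters stay in full precision on top of the quantized base, which is the combination that lets large models fit on a single GPU.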