Making large AI models cheaper, faster and more accessible
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
🦦 Otter, a multi-modal model based on OpenFlamingo (open-sourced version of DeepMind's Flamingo), trained on MIMIC-IT and showcasing improved instruction-following and in-context learning ability.
Code and models for NExT-GPT: Any-to-Any Multimodal Large Language Model
[CVPR2024 Highlight][VideoChatGPT] ChatGPT with video understanding, plus support for many more LMs such as MiniGPT-4, StableLM, and MOSS.
SuperCLUE: A Comprehensive Benchmark for Chinese General-Purpose Foundation Models
Chronos: Pretrained Models for Probabilistic Time Series Forecasting
EVA Series: Visual Representation Fantasies from BAAI
DeepSeek-VL: Towards Real-World Vision-Language Understanding
From images to inference with no manual labeling (use foundation models to auto-label data and train supervised models); see the sketch after this list.
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
Awesome things about LLM-powered agents. Papers / Repos / Blogs / ...
Emu Series: Generative Multimodal Models from BAAI
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
Lag-Llama: Towards Foundation Models for Probabilistic Time Series Forecasting
Janus-Series: Unified Multimodal Understanding and Generation Models
Overview of Japanese LLMs
A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities
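
The "images to inference with no labeling" entry above captures a common pattern: distill a foundation model's zero-shot predictions into a small supervised model. Below is a minimal sketch of that workflow, assuming a CLIP checkpoint loaded via Hugging Face `transformers` as the zero-shot labeler and a scikit-learn classifier as the student; the image folder and label set are illustrative placeholders, not any listed repository's actual API.

```python
# Minimal sketch: use a foundation model (CLIP) to auto-label images,
# then train a small supervised classifier on those pseudo-labels.
# Assumes `transformers`, `torch`, `Pillow`, and `scikit-learn` are installed;
# the image paths and label set below are hypothetical.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.linear_model import LogisticRegression

labels = ["cat", "dog"]                                 # hypothetical target classes
image_paths = sorted(Path("unlabeled").glob("*.jpg"))   # hypothetical image folder

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

features, pseudo_labels = [], []
with torch.no_grad():
    for path in image_paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(text=labels, images=image,
                           return_tensors="pt", padding=True)
        outputs = model(**inputs)
        # Zero-shot pseudo-label: class with the highest image-text similarity.
        pseudo_labels.append(outputs.logits_per_image.argmax(dim=-1).item())
        # Reuse the image embedding as the student model's input features.
        features.append(model.get_image_features(
            pixel_values=inputs["pixel_values"]).squeeze(0).numpy())

# Train a lightweight "student" on the foundation model's pseudo-labels.
student = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
```

The student can then run inference without the foundation model in the loop, which is the cost and latency benefit these auto-labeling repositories aim for.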