A unified API gateway for multiple LLM providers with OpenAI-compatible endpoints
Are you juggling multiple LLM provider APIs in your applications? ConduitLLM solves this problem by providing:
- Single Integration Point: Write your code once, switch LLM providers anytime
- Vendor Independence: Avoid lock-in to any single LLM provider
- Simplified API Management: Centralized key management and usage tracking
- Cost Optimization: Route requests to the most cost-effective or performant models
ConduitLLM is a unified, modular, and extensible platform designed to simplify interaction with multiple Large Language Models (LLMs). It provides a single, consistent OpenAI-compatible REST API endpoint, acting as a gateway or "conduit" to various LLM backends such as OpenAI, Anthropic, Azure OpenAI, Google Gemini, Cohere, and others.
Built with .NET and designed for containerization (Docker), ConduitLLM streamlines the development, deployment, and management of LLM-powered applications by abstracting provider-specific complexities.
- OpenAI-Compatible REST API: Exposes a standard `/v1/chat/completions` endpoint for seamless integration with existing tools and SDKs
- Multi-Provider Support: Interact with various LLM providers through a single interface
- Model Routing & Mapping: Define custom model aliases (e.g., `my-gpt4`) and map them to specific provider models (e.g., `openai/gpt-4`)
- Virtual API Key Management: Create and manage Conduit-specific API keys (`condt_...`) with built-in spend tracking
- Streaming Support: Real-time token streaming for responsive applications
- Web-Based User Interface: Administrative dashboard for configuration and monitoring
- Centralized Configuration: Flexible configuration via database, environment variables, or JSON files
- Extensible Architecture: Easily add support for new LLM providers
ConduitLLM follows a modular architecture with distinct components handling specific responsibilities:
```mermaid
flowchart LR
    Client["WebUI / Client App"]
    Http["ConduitLLM.Http (API Gateway)"]
    Core["ConduitLLM.Core (Orchestration)"]
    Providers["ConduitLLM.Providers (Provider Logic)"]
    Config["ConduitLLM.Configuration (Settings)"]
    LLM["LLM Backends (OpenAI, Anthropic, etc.)"]

    Client --> Http
    Http --> Core
    Core --> Providers
    Providers --> LLM
    Http --> Config
    Core --> Config
    Providers --> Config
```
- ConduitLLM.Http: OpenAI-compatible REST API gateway handling authentication and request forwarding
- ConduitLLM.WebUI: Blazor-based admin interface for configuration and monitoring
- ConduitLLM.Core: Central orchestration logic, interfaces, and routing strategies
- ConduitLLM.Providers: Provider-specific implementations for different LLM services
- ConduitLLM.Configuration: Configuration management across various sources
To run ConduitLLM locally, you'll need:
- .NET 9.0 SDK
- (Optional) Docker Desktop for containerized deployment
Then, to get up and running:

- Clone the repository:

  ```bash
  git clone https://github.com/knnlabs/Conduit.git
  cd Conduit/ConduitLLM.WebUI
  ```
- Configure LLM providers by adding your provider API keys via any of the following (a hypothetical configuration sketch follows these steps):
  - Environment variables (see `docs/Environment-Variables.md`)
  - `appsettings.json`
  - The WebUI after startup
- Start the services:

  ```bash
  ./start.sh
  ```
- Access ConduitLLM:
  - Local API: `http://localhost:5000`
  - Local WebUI: `http://localhost:5001`
  - Local API docs: `http://localhost:5000/swagger` (Development Mode)

Note: When running locally via `start.sh`, these are the default ports. When deployed using Docker or other methods, access is typically via an HTTPS reverse proxy. Configure the `CONDUIT_API_BASE_URL` environment variable with the public-facing URL (e.g., `https://conduit.yourdomain.com`) for correct link generation.
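As promised above, here is a sketch of what a model alias mapping expresses: it pairs an alias clients send (such as `my-gpt4`) with a concrete provider model (such as `openai/gpt-4`). The fragment below is purely hypothetical; the key names (`ModelMappings`, `alias`, `providerModel`) are illustrative assumptions, not ConduitLLM's actual schema. Consult the Configuration Guide in `docs/` for the real format.

```json
// Hypothetical illustration only; these key names are assumptions, not the real schema.
{
  "ModelMappings": [
    {
      "alias": "my-gpt4",             // name clients send in the "model" field
      "providerModel": "openai/gpt-4" // provider/model the request is routed to
    }
  ]
}
```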
Alternatively, pull the prebuilt Docker image:

```bash
docker pull ghcr.io/knnlabs/conduit:latest
```
Or use with Docker Compose:
```bash
docker compose up -d
```
Note: The default Docker configuration assumes ConduitLLM runs behind a reverse proxy that handles HTTPS termination. The container exposes HTTP ports only.
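For orientation, a compose service might look like the sketch below. This is an illustrative assumption (the image name and the `CONDUIT_API_BASE_URL` variable come from this README; the port mappings and service layout are guesses), not the repository's actual `docker-compose.yml`:

```yaml
# Illustrative sketch only; see the repository's docker-compose.yml for the real configuration.
services:
  conduit:
    image: ghcr.io/knnlabs/conduit:latest  # image name from this README
    ports:
      - "5000:5000"  # API (assumed container port)
      - "5001:5001"  # WebUI (assumed container port)
    environment:
      # Public-facing URL used for link generation behind a reverse proxy
      CONDUIT_API_BASE_URL: "https://conduit.yourdomain.com"
```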
Once the gateway is running, use it exactly as you would the OpenAI API:

```bash
# Example: Chat completion request
curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer condt_yourvirtualkey" \
  -d '{
    "model": "my-gpt4",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'
```
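Because the gateway is OpenAI-compatible, the response follows the standard OpenAI chat completion schema. An abridged illustration (field values are placeholders):

```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "my-gpt4",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 10, "completion_tokens": 9, "total_tokens": 19}
}
```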
```python
# Python example using the official OpenAI SDK
from openai import OpenAI

client = OpenAI(
    api_key="condt_yourvirtualkey",
    # Use http://localhost:5000/v1 for local testing,
    # or your configured CONDUIT_API_BASE_URL for deployed instances
    base_url="http://localhost:5000/v1",
)

response = client.chat.completions.create(
    model="my-gpt4",
    messages=[{"role": "user", "content": "Hello, world!"}],
)

print(response.choices[0].message.content)
```
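Streaming (mentioned in the features above) works through the same interface. A minimal sketch, assuming the same local endpoint and virtual key as in the example above:

```python
# Streaming sketch: print tokens as they arrive.
from openai import OpenAI

client = OpenAI(
    api_key="condt_yourvirtualkey",
    base_url="http://localhost:5000/v1",
)

stream = client.chat.completions.create(
    model="my-gpt4",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g., the final one) carry no content delta.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```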
See the `docs/` directory for detailed documentation:
- API Reference
- Architecture Overview
- Budget Management
- Cache Configuration
- Configuration Guide
- Dashboard Features
- Environment Variables
- Getting Started
- LLM Routing
- Multimodal Vision Support
- Provider Integration
- Virtual Keys
- WebUI Guide
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the terms specified in the `LICENSE` file.