ConduitLLM

A unified API gateway for multiple LLM providers with OpenAI-compatible endpoints

Why ConduitLLM?

Are you juggling multiple LLM provider APIs in your applications? ConduitLLM solves this problem by providing:

  • Single Integration Point: Write your code once, switch LLM providers anytime
  • Vendor Independence: Avoid lock-in to any single LLM provider
  • Simplified API Management: Centralized key management and usage tracking
  • Cost Optimization: Route requests to the most cost-effective or performant models

Overview

ConduitLLM is a unified, modular, and extensible platform designed to simplify interaction with multiple Large Language Models (LLMs). It provides a single, consistent OpenAI-compatible REST API endpoint, acting as a gateway or "conduit" to various LLM backends such as OpenAI, Anthropic, Azure OpenAI, Google Gemini, Cohere, and others.

Built with .NET and designed for containerization (Docker), ConduitLLM streamlines the development, deployment, and management of LLM-powered applications by abstracting provider-specific complexities.

Key Features

  • OpenAI-Compatible REST API: Exposes a standard /v1/chat/completions endpoint for seamless integration with existing tools and SDKs
  • Multi-Provider Support: Interact with various LLM providers through a single interface
  • Model Routing & Mapping: Define custom model aliases (e.g., my-gpt4) and map them to specific provider models (e.g., openai/gpt-4)
  • Virtual API Key Management: Create and manage Conduit-specific API keys (condt_...) with built-in spend tracking
  • Streaming Support: Real-time token streaming for responsive applications (see the example after this list)
  • Web-Based User Interface: Administrative dashboard for configuration and monitoring
  • Centralized Configuration: Flexible configuration via database, environment variables, or JSON files
  • Extensible Architecture: Easily add support for new LLM providers
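
Streaming in particular requires no special client support. The following is a minimal sketch using the official openai Python package, assuming a virtual key (condt_...) and a my-gpt4 alias have already been configured in Conduit:

# Streaming example: tokens are printed as they arrive
from openai import OpenAI

client = OpenAI(
    api_key="condt_yourvirtualkey",
    base_url="http://localhost:5000/v1"
)

stream = client.chat.completions.create(
    model="my-gpt4",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    stream=True
)

for chunk in stream:
    # Each chunk carries an OpenAI-style delta; guard against empty choices
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)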

Architecture

ConduitLLM follows a modular architecture with distinct components handling specific responsibilities:

flowchart LR
    Client["WebUI / Client App"]
    Http["ConduitLLM.Http (API Gateway)"]
    Core["ConduitLLM.Core (Orchestration)"]
    Providers["ConduitLLM.Providers (Provider Logic)"]
    Config["ConduitLLM.Configuration (Settings)"]
    LLM["LLM Backends (OpenAI, Anthropic, etc.)"]

    Client --> Http
    Http --> Core
    Core --> Providers
    Providers --> LLM

    Http --> Config
    Core --> Config
    Providers --> Config

Components

  • ConduitLLM.Http: OpenAI-compatible REST API gateway handling authentication and request forwarding
  • ConduitLLM.WebUI: Blazor-based admin interface for configuration and monitoring
  • ConduitLLM.Core: Central orchestration logic, interfaces, and routing strategies
  • ConduitLLM.Providers: Provider-specific implementations for different LLM services
  • ConduitLLM.Configuration: Configuration management across various sources

Quick Start

Prerequisites

  • .NET 9.0 SDK
  • (Optional) Docker Desktop for containerized deployment

Installation

  1. Clone the repository

    git clone https://github.com/knnlabs/Conduit.git
    cd Conduit/ConduitLLM.WebUI
  2. Configure LLM Providers

    • Add your provider API keys via:
      • Environment variables (see docs/Environment-Variables.md)
      • appsettings.json
      • The WebUI after startup
  3. Start the Services

    ./start.sh
  4. Access ConduitLLM

    • Local API: http://localhost:5000
    • Local WebUI: http://localhost:5001
    • Local API Docs: http://localhost:5000/swagger (Development Mode)

    Note: When running locally via start.sh, these are the default ports. When deployed with Docker or other methods, access is typically via an HTTPS reverse proxy. Set the CONDUIT_API_BASE_URL environment variable to the public-facing URL (e.g., https://conduit.yourdomain.com) so that generated links are correct.
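
Once the services are up, a quick smoke test is to list the models the gateway exposes. A minimal sketch, assuming Conduit implements the standard OpenAI-compatible /v1/models listing endpoint:

from openai import OpenAI

client = OpenAI(
    api_key="condt_yourvirtualkey",
    base_url="http://localhost:5000/v1"
)

# Prints the model aliases available to this virtual key
for model in client.models.list():
    print(model.id)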

Docker Installation

docker pull ghcr.io/knnlabs/conduit:latest

Or use with Docker Compose:

docker compose up -d

Note: The default Docker configuration assumes ConduitLLM runs behind a reverse proxy that handles HTTPS termination. The container exposes HTTP ports only.

Usage

Using the API

# Example: Chat completion request
curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer condt_yourvirtualkey" \
  -d '{
    "model": "my-gpt4",
    "messages": [{"role": "user", "content": "Hello, world!"}]
  }'

Using with OpenAI SDKs

# Python example
from openai import OpenAI

client = OpenAI(
    api_key="condt_yourvirtualkey",
    # Use http://localhost:5000/v1 for local testing,
    # or your configured CONDUIT_API_BASE_URL for deployed instances
    base_url="http://localhost:5000/v1" 
)

response = client.chat.completions.create(
    model="my-gpt4",
    messages=[{"role": "user", "content": "Hello, world!"}]
)

print(response.choices[0].message.content)
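
Because the model name is just a Conduit alias, switching providers is a one-line change on the client side. For example, with a hypothetical second alias my-claude mapped to an Anthropic model in Conduit's configuration, the same client could call:

# Hypothetical alias: assumes "my-claude" is mapped in Conduit's configuration
response = client.chat.completions.create(
    model="my-claude",
    messages=[{"role": "user", "content": "Hello, world!"}]
)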

Documentation

See the docs/ directory for detailed documentation.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the terms specified in the LICENSE file.
