Pallas Inference Server
Popular repositories

- vllm (Public, forked from vllm-project/vllm)
  A high-throughput and memory-efficient inference and serving engine for LLMs (Python). A minimal usage sketch follows the list.
- triton-inference-server (Public, forked from triton-inference-server/server)
  The Triton Inference Server provides an optimized cloud and edge inferencing solution (Python). A client-side sketch follows the list.
- unilm-yoco (Public, forked from microsoft/unilm)
  Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (Python).
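The vllm fork tracks the upstream engine's offline Python API. A minimal sketch, assuming the upstream vllm package is installed; the model identifier below is a small placeholder, not one pinned by this fork:

```python
from vllm import LLM, SamplingParams

# Load a model into the engine (model name is illustrative).
llm = LLM(model="facebook/opt-125m")

# Sampling settings for generation.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# A batch of prompts is scheduled with continuous batching under the hood.
outputs = llm.generate(["Hello, my name is", "The capital of France is"], params)

for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```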
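The triton-inference-server fork tracks the upstream server; a typical client-side request over HTTP can be sketched with the tritonclient package. The model name and tensor names here (my_model, INPUT0, OUTPUT0) are hypothetical and must match the deployed model's config.pbtxt:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server exposing the HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request input; name, shape, and dtype must match the model's config.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

# Run inference and read back the named output tensor.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```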