Update the mixtral to use the better FusedMoE layer (#1081)
merrymercy authored Aug 13, 2024
1 parent 312e849 commit ad3e4f1
Showing 4 changed files with 57 additions and 258 deletions.
docs/en/model_support.md: 2 changes (1 addition, 1 deletion)
@@ -5,7 +5,7 @@ To support a new model in SGLang, you only need to add a single file under [SGLa
Another valuable resource is the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models). vLLM has extensive coverage of models, and SGLang has reused vLLM for most parts of the model implementations. This similarity makes it easy to port many models from vLLM to SGLang.

To port a model from vLLM to SGLang, you can compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical. Specifically,
- Replace vllm's `Attention` with `RadixAttention`.
- Replace vllm's `Attention` with `RadixAttention`. Note that you need to pass `layer_id` all the way to `RadixAttention`.
- Replace vllm's `LogitsProcessor` with SGLang's `LogitsProcessor`.
- Remove `Sample`.
- Change `forward()` functions, and add `input_metadata`.
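
As a rough illustration of the porting steps listed above, here is a minimal, hypothetical attention module in the SGLang style. `ExampleAttention`, the plain `nn.Linear` projections, and the `RadixAttention` argument order are assumptions based on the SGLang code around the time of this commit, not part of the commit itself; see `llama2.py` for the authoritative pattern.

```python
# Hypothetical sketch of the attention replacement described above.
# The RadixAttention signature is an approximation of the SGLang API
# at the time of this commit and may differ in later versions.
import torch
from torch import nn

from sglang.srt.layers.radix_attention import RadixAttention


class ExampleAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, layer_id: int):
        super().__init__()
        self.head_dim = hidden_size // num_heads
        # Real models use vLLM's QKVParallelLinear / RowParallelLinear here;
        # plain nn.Linear keeps the sketch self-contained.
        self.qkv_proj = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        self.o_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        # vLLM's `Attention` is replaced by `RadixAttention`; `layer_id`
        # must be threaded down from the model constructor to this point.
        # Assumed argument order: (num_heads, head_dim, scaling, num_kv_heads, layer_id)
        self.attn = RadixAttention(
            num_heads,
            self.head_dim,
            self.head_dim**-0.5,
            num_heads,
            layer_id,
        )

    def forward(self, hidden_states: torch.Tensor, input_metadata) -> torch.Tensor:
        # `input_metadata` (SGLang's InputMetadata) replaces vLLM's kv_cache /
        # attention-metadata arguments in the forward() signature.
        qkv = self.qkv_proj(hidden_states)
        q, k, v = qkv.chunk(3, dim=-1)
        attn_output = self.attn(q, k, v, input_metadata)
        return self.o_proj(attn_output)
```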