
Commit

update todo
ElizaWszola committed Aug 29, 2024
1 parent d8feb8d commit 3676621
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions vllm/model_executor/models/mixtral.py
@@ -450,8 +450,8 @@ def __init__(
         lora_config: Optional[LoRAConfig] = None,
     ) -> None:
         super().__init__()
-        # TODO keep the fused mixtral_quant codepath around as long as we don't
-        # support all quant_types
+        # TODO keep the unfused mixtral_quant-like codepath around as long as
+        # we don't support all quant_types
         self.is_compressed = isinstance(quant_config, CompressedTensorsConfig)
         self.use_fused_moe = (
             self.is_compressed
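The lines touched by the diff gate the fused MoE codepath on the type of the quantization config: `is_compressed` is true only when the config is a `CompressedTensorsConfig`, and the (truncated) `use_fused_moe` condition starts from that flag. A minimal sketch of this type-based gating pattern, where the stand-in class names and the simplified condition are illustrative assumptions rather than vLLM's actual code:

```python
# Illustrative sketch of type-based feature gating; class names and the
# simplified condition are assumptions, not vLLM's actual implementation.
class QuantizationConfig:
    """Base class for quantization configurations."""

class CompressedTensorsConfig(QuantizationConfig):
    """Hypothetical stand-in for vLLM's compressed-tensors config."""

class GPTQConfig(QuantizationConfig):
    """Hypothetical stand-in for any other quantization scheme."""

def select_moe_path(quant_config: QuantizationConfig) -> str:
    # The fused MoE kernel is taken only for compressed-tensors configs;
    # other quant_types fall back to the unfused mixtral_quant-like
    # codepath mentioned in the TODO.
    is_compressed = isinstance(quant_config, CompressedTensorsConfig)
    use_fused_moe = is_compressed  # the real condition ANDs in more checks
    return "fused" if use_fused_moe else "unfused"

print(select_moe_path(CompressedTensorsConfig()))  # fused
print(select_moe_path(GPTQConfig()))               # unfused
```

The `isinstance` check means any config subclassing `CompressedTensorsConfig` also takes the fused path, while every other `QuantizationConfig` subtype gets the unfused fallback.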
