Fix guided decoding crashes (#811)
This PR mostly ports vllm-project#11389 to the design introduced by #358 and makes the custom caching code a little more robust.

Currently there are two problems with guided decoding:
- `mask[list(allowed_tokens)] = 0` crashes because `allowed_tokens` can contain tensors. Pretty easy fix.
- The value type of `self._fsm_state` was changed from `int` to a union of `int` and `outlines.state.CFGState`, which can cause `self._cached_get_mask_tensor(state_id, scores.size(-1), scores.device)` to crash, since `outlines.state.CFGState` is not hashable. This PR changes the caching mechanism so that if a function argument is not hashable, its id is used as the cache key instead. This may cause some cache misses, but that's better than crashing, as the code does right now (see the sketch below).

Neither problem exists upstream, as both stem from code introduced in #358.

I've also added guided decoding tests to the CI suite.
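A minimal sketch of the second failure mode; `UnhashableState` below is an illustrative stand-in for `outlines.state.CFGState`, not the real class:

    # Illustrative repro: a class that defines __eq__ without __hash__
    # gets __hash__ = None, so its instances are unhashable, and a tuple
    # containing one cannot be used as a dict key.
    class UnhashableState:
        def __eq__(self, other):
            return self is other

    cache = {}
    key = (UnhashableState(), 128256)
    cache[key] = "mask"  # TypeError: unhashable type: 'UnhashableState'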
kzawora-intel authored Feb 12, 2025
1 parent e8f66d5 commit 4d91f3b
Showing 3 changed files with 32 additions and 11 deletions.
5 changes: 5 additions & 0 deletions .jenkins/test_config.yaml
@@ -91,3 +91,8 @@ stages:
       - name: test_gptq
         flavor: g2
         command: VLLM_SKIP_WARMUP=true pytest -v tests/quantization/test_gptq.py::test_gptq
+  - name: tests_guided_decode
+    steps:
+      - name: test_lazy_outlines
+        flavor: g2
+        command: pip install -e tests/vllm_test_utils && pytest -v tests/entrypoints/llm/test_lazy_outlines.py
7 changes: 0 additions & 7 deletions .pre-commit-config.yaml
@@ -47,13 +47,6 @@ repos:
         types: [python]
         additional_dependencies: &mypy_deps [mypy==1.11.1, types-setuptools, types-PyYAML, types-requests]
         stages: [pre-commit] # Don't run in CI
-      - id: mypy-3.9 # TODO: Use https://github.com/pre-commit/mirrors-mypy when mypy setup is less awkward
-        name: Run mypy for Python 3.9
-        entry: tools/mypy.sh 1 "3.9"
-        language: python
-        types: [python]
-        additional_dependencies: *mypy_deps
-        stages: [manual] # Only run in CI
       - id: mypy-3.10 # TODO: Use https://github.com/pre-commit/mirrors-mypy when mypy setup is less awkward
         name: Run mypy for Python 3.10
         entry: tools/mypy.sh 1 "3.10"
31 changes: 27 additions & 4 deletions vllm/model_executor/guided_decoding/outlines_logits_processors.py
@@ -17,6 +17,7 @@
 import json
 import math
 from collections import defaultdict
+from collections.abc import Hashable, Iterable
 from functools import lru_cache
 from typing import Any, Callable, DefaultDict, Dict, List, Union
@@ -36,12 +37,28 @@
 def _cached(fn):
     cache: Dict[Any, Any] = {}
 
+    def hash_args(obj):
+        match obj:
+            case Iterable():
+                # NOTE(kzawora): be careful not to hash a genexpr directly
+                # (e.g. hash(hash_args(item) for item in obj)):
+                # hashing different generator expressions can yield the
+                # same hash (and vice versa),
+                # see https://stackoverflow.com/q/38174211;
+                # this is why we hash the tuple, not the genexpr, here
+                return hash(tuple(hash_args(item) for item in obj))
+            case Hashable():
+                return hash(obj)
+            case _:
+                return hash(id(obj))
+
     def cached_fn(*args):
-        if args in cache:
-            result = cache[args]
+        cache_key = hash_args(args)
+        if cache_key in cache:
+            result = cache[cache_key]
         else:
             result = fn(*args)
-            cache[args] = result
+            cache[cache_key] = result
         return result
 
     return cached_fn
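The id fallback trades cache-hit accuracy for safety. A hedged sketch of the resulting behavior, with `hash_args` copied to module scope for demonstration and a hypothetical `FakeState` class (requires Python >= 3.10 for match/case):

    from collections.abc import Hashable, Iterable

    def hash_args(obj):  # copy of the helper above, lifted to module scope
        match obj:
            case Iterable():
                return hash(tuple(hash_args(item) for item in obj))
            case Hashable():
                return hash(obj)
            case _:
                return hash(id(obj))

    class FakeState:  # hypothetical: __eq__ without __hash__ => unhashable
        def __eq__(self, other):
            return True  # all instances compare equal

    s = FakeState()
    a, b = FakeState(), FakeState()
    # Same object -> same id -> stable key -> cache hit:
    assert hash_args((s, 42)) == hash_args((s, 42))
    # Equal but distinct objects -> different ids -> cache miss,
    # the trade-off the commit message mentions:
    assert hash_args((a, 42)) != hash_args((b, 42))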
@@ -60,7 +77,13 @@ def __init__(self, guide: Guide):
     @lru_cache(maxsize=128)
     def _create_mask_tensor(allowed_tokens, vocab_size, device):
         mask = torch.full((vocab_size, ), -math.inf, device=device)
-        mask[list(allowed_tokens)] = 0
+        # The tokenizer may support more token ids than the model can
+        # generate, e.g. Llama 3.2 Vision models have an `<|image|>` token
+        # with id 128256, but scores.shape == torch.Size([128256])
+        allowed_tokens = torch.tensor(allowed_tokens, device=device)
+        allowed_tokens = allowed_tokens.masked_select(
+            allowed_tokens < vocab_size)
+        mask.index_fill_(0, allowed_tokens, 0)
         return mask
 
     def _get_mask_tensor(self, state_id, vocab_size, device):
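A quick sketch of what the new mask construction does, with made-up values (vocab size and token ids are illustrative, not from the PR):

    import math
    import torch

    vocab_size = 8
    allowed_tokens = [1, 3, 8]  # 8 is out of range, like <|image|> above
    mask = torch.full((vocab_size, ), -math.inf)
    allowed = torch.tensor(allowed_tokens)
    allowed = allowed.masked_select(allowed < vocab_size)  # drops the 8
    mask.index_fill_(0, allowed, 0)
    # mask is now [-inf, 0., -inf, 0., -inf, -inf, -inf, -inf];
    # the old `mask[list(allowed_tokens)] = 0` would have raised an
    # out-of-bounds IndexError for token id 8.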
