[Doc] [1/N] Initial guide for merged multi-modal processor (vllm-project#11925)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
DarkLight1337 authored Jan 10, 2025
1 parent 241ad7b commit 12664dd
Showing 19 changed files with 403 additions and 168 deletions.
docs/requirements-docs.txt (1 addition, 0 deletions)
@@ -3,6 +3,7 @@ sphinx-book-theme==1.0.1
 sphinx-copybutton==0.5.2
 myst-parser==3.0.1
 sphinx-argparse==0.4.0
+sphinx-design==0.6.1
 sphinx-togglebutton==0.3.2
 msgspec
 cloudpickle
docs/source/api/multimodal/index.md (1 addition, 1 deletion)
@@ -7,7 +7,7 @@ vLLM provides experimental support for multi-modal models through the {mod}`vllm
 Multi-modal inputs can be passed alongside text and token prompts to [supported models](#supported-mm-models)
 via the `multi_modal_data` field in {class}`vllm.inputs.PromptType`.
 
-Looking to add your own multi-modal model? Please follow the instructions listed [here](#enabling-multimodal-inputs).
+Looking to add your own multi-modal model? Please follow the instructions listed [here](#supports-multimodal).
 
 ## Module Contents
 
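For context on the field this page documents, here is a minimal sketch of passing an image alongside a text prompt through `multi_modal_data`. The model name, image path, and prompt template are illustrative; the exact prompt format depends on the model.

```python
# Minimal sketch (not part of this commit): passing an image alongside a
# text prompt via the `multi_modal_data` field of a prompt dict.
from vllm import LLM
from PIL import Image

llm = LLM(model="llava-hf/llava-1.5-7b-hf")  # illustrative multi-modal model
image = Image.open("example.jpg")            # illustrative image path

outputs = llm.generate({
    "prompt": "USER: <image>\nWhat is in this image? ASSISTANT:",
    "multi_modal_data": {"image": image},
})
print(outputs[0].outputs[0].text)
```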
docs/source/api/multimodal/inputs.md (1 addition, 1 deletion)
@@ -3,7 +3,7 @@
 ## User-facing inputs
 
 ```{eval-rst}
-.. autodata:: vllm.multimodal.MultiModalDataDict
+.. autodata:: vllm.multimodal.inputs.MultiModalDataDict
 ```
 
 ## Internal data structures
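As a quick reference for the renamed target, `MultiModalDataDict` maps a modality name to the data for that modality. A rough illustration of its shape, assuming PIL images and placeholder file names:

```python
# Rough illustration of MultiModalDataDict shapes; keys name the modality,
# values hold one item or a list of items. File names are placeholders.
from PIL import Image

image = Image.open("example.jpg")

mm_single = {"image": image}           # one image for the prompt
mm_multi = {"image": [image, image]}   # several images, if the model allows it
```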
docs/source/conf.py (1 addition, 0 deletions)
@@ -43,6 +43,7 @@
     "sphinx.ext.autosummary",
     "myst_parser",
     "sphinxarg.ext",
+    "sphinx_design",
     "sphinx_togglebutton",
 ]
 myst_enable_extensions = [
docs/source/contributing/model/index.md (1 addition, 1 deletion)
@@ -2,7 +2,7 @@
 
 # Adding a New Model
 
-This section provides more information on how to integrate a [HuggingFace Transformers](https://github.com/huggingface/transformers) model into vLLM.
+This section provides more information on how to integrate a [PyTorch](https://pytorch.org/) model into vLLM.
 
 ```{toctree}
 :caption: Contents
[Diffs for the remaining 14 changed files not shown]