Add information about ollama - document it as an available provider and provide clearer troubleshooting help. (#1235)

* Add information about ollama.

Thanks to @krassowski for pointing out the issue.  See #840 for additional
suggestions on how to improve the UX for unlisted models; for now this only
addresses clarifying the docs.

* add explicit pip command for installation

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
fperez and pre-commit-ci[bot] authored Feb 10, 2025
1 parent fbc4895 commit 85f0cc7
Showing 1 changed file with 17 additions and 1 deletion.
docs/source/users/index.md

@@ -439,7 +439,22 @@ models.

### Ollama usage

To get started, follow the instructions on the [Ollama website](https://ollama.com/) to set up `ollama` and download the models locally. To select a model, enter the model name in the settings panel, for example `deepseek-coder-v2`. You can see all locally available models with `ollama list`.
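If you prefer to query the available models programmatically rather than with `ollama list`, a minimal Python sketch can read them from the Ollama server's REST API (`/api/tags` is the endpoint that backs `ollama list`; the helper name here is ours):

```python
import json
from urllib.error import URLError
from urllib.request import urlopen


def list_ollama_models(base_url="http://localhost:11434"):
    """Return the names of locally available Ollama models.

    Queries the Ollama server's /api/tags endpoint; returns an
    empty list if the server is not reachable.
    """
    try:
        with urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
    except (URLError, OSError):
        return []
    return [model["name"] for model in data.get("models", [])]


print(list_ollama_models())
```

Any model name printed here should be accepted in the Jupyter AI settings panel.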

For the Ollama models to be available to Jupyter AI, your Ollama server _must_ be running. You can check whether it is by running `ollama serve` in a terminal; if a server is already running, you will see an error like:

```
$ ollama serve
Error: listen tcp 127.0.0.1:11434: bind: address already in use
```
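Another way to check, without trying to start a second server, is to probe Ollama's default port (11434). This is a small sketch under the assumption that the server is listening on the default address; the function name is ours:

```python
import socket


def ollama_port_open(host="127.0.0.1", port=11434):
    """Return True if something is listening on Ollama's default port.

    A successful TCP connection suggests the Ollama server is running;
    a refused or timed-out connection suggests it is not.
    """
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False
```

Note that this only confirms that *something* is listening on the port, not that it is Ollama specifically.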

On some platforms (e.g., macOS or Windows), there may also be a graphical application that lets you start and stop the Ollama server from a menu.

:::{tip}
If you don't see Ollama listed as a model provider in the Jupyter AI configuration box even though your Ollama server is running, you may be missing the [`langchain-ollama` Python package](https://pypi.org/project/langchain-ollama/), which Jupyter AI needs in order to interface with Ollama, as noted in the [model providers](#model-providers) section above.

You can install it with `pip install langchain-ollama` (as of February 2025, it is not available on conda-forge).
:::
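To verify whether the package is installed in the environment that runs JupyterLab, you can check for it from Python (the package installs under the import name `langchain_ollama`; the helper name is ours):

```python
import importlib.util


def has_langchain_ollama():
    """Return True if the langchain-ollama package is importable.

    The PyPI distribution is named langchain-ollama, but the module
    it installs is imported as langchain_ollama.
    """
    return importlib.util.find_spec("langchain_ollama") is not None


if not has_langchain_ollama():
    print("Missing; try: pip install langchain-ollama")
```

Make sure you run this check (and the `pip install`) in the same environment where JupyterLab itself is installed.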

### vLLM usage

@@ -710,6 +725,7 @@ We currently support the following language model providers:
- `cohere`
- `huggingface_hub`
- `nvidia-chat`
- `ollama`
- `openai`
- `openai-chat`
- `sagemaker-endpoint`
