unsupported value: 'stream' does not support true with this model - gpt-o1-mini via litellm #3455

bodzebod opened this issue Dec 19, 2024 · 4 comments
Labels: area:configuration, kind:bug, priority:medium

Comments

@bodzebod


Relevant environment info

- OS: Windows 11
- Continue version: 0.9.246 (pre-release)
- IDE version: vscode 1.96.1
- Model: o1-mini via litellm gateway
- config.json:
  
    {
      "model": "o1-mini",
      "title": "o1-mini",
      "apiKey": "XXXX",
      "provider": "openai",
      "apiBase": "https://XXXX",
      "completionOptions": {
        "stream": false
      }
    },
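For context, this entry sits inside the top-level "models" array of Continue's config.json. A minimal sketch of the surrounding structure (the XXXX values are placeholders carried over from above):

    {
      "models": [
        {
          "model": "o1-mini",
          "title": "o1-mini",
          "apiKey": "XXXX",
          "provider": "openai",
          "apiBase": "https://XXXX",
          "completionOptions": {
            "stream": false
          }
        }
      ]
    }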

Description

Hi, my company gives me access to the o1-mini model through a litellm gateway, which provides an OpenAI-compatible API.
When I try to set it up in the continue.dev config, I always get the following error:

400 litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {'error': {'message': "Unsupported value: 'stream' does not support true with this model. Only the default (false) value is supported.", 'type': 'invalid_request_error', 'param': 'stream', 'code': 'unsupported_value'}} Received Model Group=o1-mini Available Model Group Fallbacks=None

I have set "stream": false in "completionOptions", as you can see in the config above. It looks like this parameter is hardcoded to true in the openai provider. Could you please check? Thanks.
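For reference, the error suggests the gateway only accepts the default (non-streaming) behaviour for this model, i.e. a chat-completions request body along these lines (a minimal sketch assuming the standard OpenAI-compatible payload shape; model name and prompt taken from above):

    {
      "model": "o1-mini",
      "stream": false,
      "messages": [
        { "role": "user", "content": "tell me a joke" }
      ]
    }

The same payload with "stream": true is what triggers the 400 above, which is why it looks like the setting from "completionOptions" is not being forwarded.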

To reproduce

  1. Configure a litellm gateway that routes to o1-mini
  2. Configure config.json for the o1-mini model and set "stream": false in "completionOptions"
  3. Send a question

Log output

Error handling webview message: {
  "msg": {
    "messageId": "f3df299a-92a8-4fe7-8569-9074034095e8",
    "messageType": "llm/streamChat",
    "data": {
      "messages": [
        {
          "role": "user",
          "content": [
            {
              "type": "text",
              "text": "tell me a joke"
            }
          ]
        },
        {
          "role": "assistant",
          "content": ""
        }
      ],
      "title": "o1-mini",
      "completionOptions": {}
    }
  }
}

Error: 400 litellm.BadRequestError: AzureException BadRequestError - Error code: 400 - {'error': {'message': "Unsupported value: 'stream' does not support true with this model. Only the default (false) value is supported.", 'type': 'invalid_request_error', 'param': 'stream', 'code': 'unsupported_value'}}
Received Model Group=o1-mini
Available Model Group Fallbacks=None
@sestinj sestinj self-assigned this Dec 19, 2024
@dosubot dosubot bot added the area:configuration and kind:bug labels Dec 19, 2024
@tdomasat-nr

Same issue here; I had to switch to v0.9.236 (pre-release) to avoid this error. After some digging, it looks like the breaking change was introduced in v0.9.237 (pre-release).

@sestinj sestinj added the priority:medium label and removed the "needs-triage" label Jan 3, 2025
@sestinj
Contributor

sestinj commented Jan 3, 2025

@bodzebod @tdomasat-nr is this still a problem? OpenAI has released streaming support for the o1-series models now

@hourianto

@sestinj o1-mini and o1-preview do support streaming, but o1 does not. OpenAI are weird like that.

@bodzebod
Author

bodzebod commented Jan 6, 2025

Hi @sestinj, we access the o1-mini model through a litellm proxy at my company.
I've just retried, and it works like a charm when I set "stream" to false, thanks! I guess the litellm proxy is interfering with the streaming feature of o1-mini at some point, but it's not essential at the moment.

FYI, here is what I have set in the config:

    {
      "model": "o1-mini",
      "title": "o1-mini",
      "apiKey": "XXXXXXX",
      "provider": "openai",
      "apiBase": "https://XXXXXXX/",
      "contextLength": 128000,
      "completionOptions": {
        "stream": false
      }
    },
