Why is BaseChatOpenAI streaming the "get_final_completion" as a chunk? #29640

Answered by ccurme
marcammann asked this question in Q&A

Thanks for raising this. I believe the bug was resolved in #29649.

There are a few options for how we stream structured output with OpenAI:

  1. Stream chunks with json string content, with a final chunk containing the parsed Pydantic object. Obtain this parsed object using get_final_completion. This is what is implemented now and what is demonstrated in OpenAI's docs. The downside, as you found, is that it erroneously doubled tool calls when we simultaneously streamed tool calls + structured output (that particular bug is now fixed).

  2. Stream chunks with json string content, with a final chunk containing the parsed Pydantic object. Obtain this parsed object during the stream from the content…
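A minimal sketch of the shape of option 1, using plain `json` in place of the real OpenAI client (the chunk contents below are illustrative, not actual API output): string content accumulates during the stream, and the parsed object is only obtained from the final completion.

```python
import json

# Simulated stream of json string chunks, standing in for what
# BaseChatOpenAI yields when streaming structured output.
chunks = ['{"name": ', '"Ada", ', '"age": 36}']

# During the stream, only string content is accumulated.
buffer = ""
for chunk in chunks:
    buffer += chunk

# The parsed object is obtained once, at the end, from the complete
# string (the real implementation gets it via get_final_completion,
# which would return a parsed Pydantic object rather than a dict).
parsed = json.loads(buffer)
print(parsed["name"], parsed["age"])  # → Ada 36
```

Option 2 would instead parse the accumulated buffer incrementally as chunks arrive, rather than waiting for a final completion.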

Answer selected by marcammann