Never Ending Generation Loop with Ollama and Searxng #1271
Comments
Welcome @IMJONEZZ What LLM model are you using? If you're running with Docker Compose, can you try the NextJS app on localhost:3000 and let us know if the issue persists?
Hey Elisha, thanks for the reply.
This issue persists across an array of models, and happens more frequently
than the stopping criteria does.
I've tested with:
- qwen2.5:3b,7b,14b,32b,70b
- llama3.3:70b
- deepseek-r1:7b,8b,32b,70b,671b
The quality of the report before the stopping criteria obviously goes up with
the size of the model, but whether the stopping criteria triggers does not. I
find myself rerunning the 70B and 671B parameter models just as many times as
the smaller ones, and I wonder if the problem is in my setup, since I'm not
seeing many others mention this issue.
As for trying the NextJS app: if I submit a query there, nothing happens, it
just hangs. It only works for me on port 8000.
I'm getting the same issue. Weirdly, it just repeats the References and Appendix sections for me. I've only tried port 8000 so far.
Strange - please send over the logs and see if you can pinpoint the function or prompt responsible for that stage of the report. Also, what type of report are you running? Try both the regular report and multi_agents. For the NextJS app, check in the Network tab which GPTR API URL the frontend is querying - what domain is it using for the WebSocket connection?
Sorry for the late response. Okay, I've tried it a bunch of times. It actually finishes a few times, but most of the time the logs look like this. I'm running everything default: Summary - Short and fast (~2 min). It seems like right when the report ends, it starts another report, but for some reason just repeats the Conclusion and References. This log is from a clean start, running one prompt on the port 8000 site, and it keeps outputting the Conclusion and References until I refresh the page.
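Since the loop repeats the Conclusion and References sections verbatim, one crude way to confirm the pattern from the logs is to check whether the same markdown heading is emitted more than once. This is just an illustrative sketch for debugging, not gpt-researcher's actual code; all names are made up:

```python
import re

# Hypothetical repetition check: returns the first markdown heading
# that appears more than once in the accumulated report text.
def first_repeated_heading(report_text: str):
    seen = set()
    for heading in re.findall(r"^#{1,6} .+$", report_text, flags=re.M):
        if heading in seen:
            return heading
        seen.add(heading)
    return None
```

Running this over the streamed output as it accumulates would flag the moment the second "Conclusion" starts, which could help pinpoint which prompt or function kicks off the extra pass.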
Describe the bug
Everything seems to work: the .env is picked up, docker-compose builds successfully, I can visit the app correctly, and generation is really high-quality until we hit the end. At that point it just keeps generating "End of Document. This document serves as a comprehensive guide for blah blah blah" endlessly. Any help would be amazing.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expected the websocket to send a close() signal after it got the <|endoftext|> tag or something similar.
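The kind of guard I'd expect could look something like this minimal sketch: accumulate streamed tokens, stop on a stop marker, then close the websocket. This is an assumption about how such a handler might work, not gpt-researcher's actual implementation; the function names, marker list, and websocket API shape are all hypothetical:

```python
# Hypothetical sketch of stop-token handling in a streaming handler.
# Marker strings and function names are assumptions, not the project's API.
STOP_MARKERS = ("<|endoftext|>", "<|im_end|>")

def should_stop(buffer: str) -> bool:
    """Return True once the accumulated stream contains a stop marker."""
    return any(marker in buffer for marker in STOP_MARKERS)

async def stream_report(websocket, token_stream):
    buffer = ""
    async for token in token_stream:
        buffer += token
        if should_stop(buffer):
            break
        await websocket.send_text(token)
    # Closing here would signal the frontend that the report is done
    # instead of letting generation restart.
    await websocket.close()
```

If Ollama strips the stop token before it reaches the handler, a check like this would never fire, which might explain why the loop continues regardless of model size.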
Screenshots
Desktop (please complete the following information):
Additional context
.env
final docker logs output:
gpt-researcher-1 | INFO: [timestamp] Writing report for 'query'...