Timeout problem #29

Open
saurabh0vipin opened this issue Jan 24, 2019 · 11 comments

Comments

@saurabh0vipin

Getting this error:

requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=9000): Read timed out. (read timeout=30)

when trying to execute:

import corenlp

text = "Chris wrote a simple sentence that he parsed with Stanford CoreNLP."
with corenlp.CoreNLPClient(annotators="tokenize ssplit pos lemma ner depparse".split()) as client:
    ann = client.annotate(text)

sentence = ann.sentence[0]
assert corenlp.to_text(sentence) == text
print(sentence.text)

@AgoloKushagraGoyal

I faced this as well. Please suggest a fix!

@qipeng

qipeng commented Jan 28, 2019

This could happen if you didn't start the CoreNLP server before calling it from Python. For help with starting the CoreNLP server, see: https://stanfordnlp.github.io/CoreNLP/corenlp-server.html
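
As a quick sanity check (a sketch, not part of this library), you can verify from Python that something is actually listening on the expected host and port; this assumes a reasonably recent server, which serves a /ready endpoint, and the requests package:

import requests

try:
    # Any response at all means a server process is listening on localhost:9000.
    r = requests.get("http://localhost:9000/ready", timeout=5)
    print("server reachable:", r.status_code, r.text.strip())
except requests.exceptions.ConnectionError:
    print("nothing is listening on localhost:9000, start the CoreNLP server first")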

@saurabh0vipin
Author

I started the server before calling it, on the same port number. I tried both with a large timeout (e.g. 1500000) and without a manual timeout, leaving the default value.

@arunchaganty
Contributor

arunchaganty commented Jan 29, 2019 via email

@greasystrangler
Contributor

greasystrangler commented Jan 31, 2019

Getting this error: requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=9000): Read timed out. (read timeout=30) [...]

Set the timeout parameter in the CoreNLPClient constructor, e.g.:

corenlp.CoreNLPClient(timeout=30000, annotators=...)

The timeout you are experiencing is on the client HTTP request, not on the server.

Note that, confusingly, the timeout is in 2 ms units: the value you provide is multiplied by 2 and then divided by 1000 before being passed to the Python requests module:

timeout=(self.timeout*2)/1000

So in the above example, a timeout of 30000 means 60 seconds.
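
For example (a sketch, reusing the annotator list from the original snippet above; it assumes the corenlp package is installed and can find a CoreNLP distribution as usual):

import corenlp

# With the 2 ms units described above, timeout=30000 comes out to
# (30000 * 2) / 1000 = 60 seconds on the underlying HTTP request.
with corenlp.CoreNLPClient(
        timeout=30000,
        annotators="tokenize ssplit pos lemma ner depparse".split()) as client:
    ann = client.annotate("Chris wrote a simple sentence.")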

(Edited: the timeout goes in the constructor, not in the annotate function.)

@arunchaganty
Contributor

arunchaganty commented Feb 7, 2019 via email

@abhiram11

This could happen if you didn't start the CoreNLP server before calling it from Python. For help with starting the CoreNLP server, see: https://stanfordnlp.github.io/CoreNLP/corenlp-server.html

After doing this, you also have to open http://localhost:9000/ in a browser tab as well.
This worked for me.

@LiuYuLOL

LiuYuLOL commented Apr 25, 2020

Hmmm... it's a bit hard to debug with just the information you've provided: (a) do you have the latest version of the library (3.9.2)? (b) When running, can you set "be_quiet=False" on the client, e.g. with corenlp.CoreNLPClient(annotators="tokenize ssplit pos lemma ner depparse".split(), be_quiet=False) as client, and share what messages you get from the Java process?


I still hit this issue, but in a different situation. With be_quiet=False set, I get the following output:

[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
....
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0:0:0:0:0:0:0:0:3220

After waiting, I got the following error and traceback:

Traceback (most recent call last):
doc = client.annotate(text)
r = self._request(text.encode('utf-8'), request_properties, **kwargs)
self.ensure_alive()
raise PermanentlyFailedException("Timed out waiting for service to come alive.")
stanfordnlp.server.client.PermanentlyFailedException: Timed out waiting for service to come alive.

It seems the request never gets a response. How can I fix it? Any ideas?

@AngledLuffa
Contributor

We have no idea what you're doing, so probably hard to get good suggestions. However, it looks like you're running the server on port 3220. Are you also setting the client to use that port?

@LiuYuLOL

LiuYuLOL commented Apr 26, 2020

We have no idea what you're doing, so probably hard to get good suggestions. However, it looks like you're running the server on port 3220. Are you also setting the client to use that port?

I am trying to use CoreNLP to do some annotations from Python, using PyCharm.

Yes. Port 3220 was just picked at random. I also tested with the default port 9000, but that doesn't work either. Below is my code.

from stanfordnlp.server import CoreNLPClient

with CoreNLPClient(
        endpoint='http://localhost:3220',
        memory='4G',
        annotators=['tokenize', 'ssplit', 'pos', 'lemma', 'ner'],
        timeout=50000, be_quiet=False) as client:
    text = 'this is a text file.'
    doc = client.annotate(text)

I checked the listening ports using netstat -l, and both ports are free.
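
(For what it's worth, here is a quick way to do the same check from Python with only the standard library; port 3220 is just the one from my snippet above.)

import socket

def port_open(host="localhost", port=3220, timeout=2.0):
    # True means some process is accepting connections on host:port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open())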

I also tested on stanza, but it still returns the same error.
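
(For reference, the stanza version of the snippet above would be something like the following sketch; stanza's CoreNLPClient takes the same kind of arguments and, like stanfordnlp, expects CORENLP_HOME to point at a CoreNLP distribution.)

from stanza.server import CoreNLPClient

with CoreNLPClient(endpoint='http://localhost:3220',
                   annotators=['tokenize', 'ssplit', 'pos', 'lemma', 'ner'],
                   memory='4G', timeout=50000, be_quiet=False) as client:
    doc = client.annotate('this is a text file.')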

@LiuYuLOL

LiuYuLOL commented Apr 26, 2020

A good idea is to check the permissions, as suggested in stanfordnlp/stanza#245 (comment).

Any idea how to check my permissions on the server?
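
(One rough sketch of such a check from Python; this is only a guess at what "permissions" means here, since the linked stanza issue is not quoted. It tests whether the jars under $CORENLP_HOME, the directory the Python client launches the server from, are readable by the current user.)

import glob
import os

corenlp_home = os.environ.get("CORENLP_HOME", "")
for jar in glob.glob(os.path.join(corenlp_home, "*.jar")):
    # os.access only checks the current user's read permission on each jar.
    print(jar, "readable:", os.access(jar, os.R_OK))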
