x11: send end of previous active window #31
Conversation
Force-pushed from ca7d348 to a20f09e
Thank you for your proposal!
I think this should rather be based against the main branch, since it's an independent fix.
client
    .send_active_window_with_instance(&self.last_app_id, &self.last_title, Some(&self.last_wm_instance))
    .await
    .with_context(|| "Failed to send heartbeat for previous window")?;
…lead to connection closed before message completed. Waiting just 0.01s fixes this issue
I think it would be better to delay it for a millisecond here rather than in run_with_retries. The retry code is more of an exception, while you're proposing a regular routine.
The way it's done now is more of a simplification and exists in the original code as well, which is supposed to be a fairly good measure, since changing window titles every second is not typical.
The most exact approach would be something like idle, but the timing value may become less trivial.
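For illustration only, a minimal, self-contained sketch of the sequencing being discussed: the heartbeat call is a stub, and the 10 ms value just mirrors the 0.01s from the description; neither is taken from the actual watcher code.

```rust
use std::time::Duration;
use tokio::time::sleep;

// Stub standing in for ReportClient::send_active_window_with_instance,
// only here to make the example runnable.
async fn send_heartbeat(label: &str) -> Result<(), String> {
    println!("heartbeat: {label}");
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Close out the previous window first...
    send_heartbeat("previous window (end)").await?;
    // ...then pause briefly before reporting the new active window,
    // so the two requests are not issued back to back.
    sleep(Duration::from_millis(10)).await;
    send_heartbeat("new active window").await?;
    Ok(())
}
```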
The change would also be incomplete, because it ideally needs to encompass all the watchers for all environments. But doing that for the reactive KWin and Wayland watchers is not trivial and needs as complex a strategy as idle.
The change would also be incomplete because it ideally needs to encompass all the watchers for all environments
Isn't it better to at least have it for x11 than for no watcher?
The most exact approach would be something like idle, but the timing value may become less trivial.
I don't understand: why do we need idle for reporting title change events? I'd guess that if we wanted it to be more accurate, it would help to have a queue that we can dispatch to asynchronously, with a worker task sending the queued entries to the server in a synchronous fashion (one by one, keeping their order). Currently, if we have to retry for 2s, it will also delay the title events that were generated during those 2s (I think?).
However, I am not sure whether the heartbeat API allows us to set a custom timestamp. E.g. if the worker wants to send entries from ~2 min ago, can it send the "end timestamp" for that heartbeat, or will it always be "now"? If the latter, I guess the only way for the worker to accurately note those down afterward is to use the "insert event" API when another "title change" event already follows it (i.e. the queue is not empty after we took the current item; for the end timestamp the worker would likely have to peek at, i.e. inspect without taking, the next event in the queue).
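A rough, self-contained sketch of that queue idea, assuming tokio's mpsc channel; WindowEvent and send_to_server are made-up placeholders for the real types and the ReportClient call:

```rust
use tokio::sync::mpsc;

// Hypothetical shape of a queued title-change event; the real watcher would
// also carry the wm instance and a timestamp.
#[derive(Debug)]
struct WindowEvent {
    app_id: String,
    title: String,
}

// Placeholder for the real server call
// (e.g. ReportClient::send_active_window_with_instance).
async fn send_to_server(event: &WindowEvent) -> Result<(), String> {
    println!("reporting {} / {}", event.app_id, event.title);
    Ok(())
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<WindowEvent>(64);

    // Worker: drains the queue one event at a time, so a slow or retried send
    // only delays later events instead of reordering or dropping them.
    let worker = tokio::spawn(async move {
        while let Some(event) = rx.recv().await {
            if let Err(err) = send_to_server(&event).await {
                eprintln!("failed to report {event:?}: {err}");
            }
        }
    });

    // Watcher side: reporting a title change is just a cheap, non-blocking send.
    tx.send(WindowEvent { app_id: "firefox".into(), title: "Example".into() })
        .await
        .unwrap();

    drop(tx); // close the channel so the worker exits
    worker.await.unwrap();
}
```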
Ah, is it that we currently don't react to title change events, but only check every ~1s? In that case, how about we add an event handler?
https://unix.stackexchange.com/a/334293
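A minimal sketch of that reactive approach, assuming the x11rb crate (which may not be what this watcher actually uses): subscribe to PropertyNotify on the root window and wake up when the window manager updates _NET_ACTIVE_WINDOW, instead of polling on a timer.

```rust
use x11rb::connection::Connection;
use x11rb::protocol::xproto::{ChangeWindowAttributesAux, ConnectionExt, EventMask};
use x11rb::protocol::Event;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (conn, screen_num) = x11rb::connect(None)?;
    let root = conn.setup().roots[screen_num].root;

    // Ask the X server for PropertyNotify events on the root window; the
    // window manager updates _NET_ACTIVE_WINDOW there on focus changes.
    conn.change_window_attributes(
        root,
        &ChangeWindowAttributesAux::new().event_mask(EventMask::PROPERTY_CHANGE),
    )?;
    conn.flush()?;

    let net_active_window = conn
        .intern_atom(false, b"_NET_ACTIVE_WINDOW")?
        .reply()?
        .atom;

    loop {
        // Blocks until the next event instead of checking every second.
        if let Event::PropertyNotify(e) = conn.wait_for_event()? {
            if e.atom == net_active_window {
                // Here the watcher would query the new active window and send
                // the heartbeats (end of previous, start of current).
                println!("active window changed");
            }
        }
    }
}
```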
why do we need idle for reporting title change events?
No, I meant complicated timing tracking, more complicated than the basic heartbeats; the idle watchers use such tracking.
if we have to retry for 2s, it will delay title events that were generated during those 2s as well
That's not a problem, because this is an exceptional situation that is not supposed to happen; such a disconnect happens mostly on startup.
In that case, how about we add an event handler?
unix.stackexchange.com/a/334293
I think this may be a good idea and better than more complicated time tracking. The Wayland and KWin watchers are already reactive, and yes, I also noticed once that X11 can seemingly do that as well.
I think it would be better to delay it for a millisecond here rather than in run_with_retries. The retry code is more of an exception, while you're proposing a regular routine.
The problem, though, is that I don't know what causes connection closed before message completed; it seems like a bug to me in one of the used libraries. So we can't be confident that 0.01s is enough (though in my tests it was).
Because of this, I think we'd need to write a new run_with_retries, which seems suboptimal because of code repetition. Maybe we could refactor run_with_retries so that we have run_with_retries2(request, delays: list[float]); run_with_retries would then call that with sensible default values, while this x11 logic could use run_with_retries2 directly (see the sketch after this comment).
What do you think about this approach?
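A hypothetical shape of that refactor in Rust (the name and signature are invented here; the real run_with_retries may look quite different): the delay schedule becomes a parameter, so the default retry path and the x11 path can share one loop.

```rust
use std::future::Future;
use std::time::Duration;

// Sketch only: the first attempt runs immediately, and each entry in
// `delays` buys one more retry after sleeping for that long.
async fn run_with_delays<T, E, Fut>(
    mut request: impl FnMut() -> Fut,
    delays: &[Duration],
) -> Result<T, E>
where
    Fut: Future<Output = Result<T, E>>,
{
    let mut result = request().await;
    for delay in delays {
        if result.is_ok() {
            break;
        }
        tokio::time::sleep(*delay).await;
        result = request().await;
    }
    result
}
```

The existing retry helper could then call this with its current default schedule, while the x11 heartbeat for the previous window passes something like &[Duration::from_millis(10)].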
I don't think run_with_retries needs to deal with imprecise reporting; its responsibility is only the server connection, and it has been a mere fail-safe for an unavailable server in some edge cases. However, I like your idea about notifications from X11 much more; I think I even had that thought myself, but I didn't figure out whether it's possible (nor tried, TBH).
The problem though is that I don't know what causes connection closed before message completed, it seems like a bug to me in one of the used libraries. So we can't be confident that 0.01s are enough (though in my tests it was).
I would speculate that the server can't insert an event to the same place with the same time, so any minimal difference is sufficient.
I would speculate that the server can't insert an event to the same place with the same time, so any minimal difference is sufficient.
I believe that the connection closed before message completed error is client-side, i.e. the server does not even see that connection.
What I found is this:
hyperium/hyper#2136 (comment)
Though I am not sure whether it applies here, since we await the response before starting another one. Maybe the FIN gets sent after the HTTP response is received; I am not sure.
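Purely speculative, but if the error really is hyper reusing a pooled connection that the server has already closed (as in the linked issue), a common client-side mitigation is to keep no idle connections around. A sketch with reqwest, assuming the HTTP client were configurable here, which aw_client_rust may not expose:

```rust
// Hypothetical workaround sketch, not the project's actual code.
fn build_client() -> Result<reqwest::Client, reqwest::Error> {
    reqwest::Client::builder()
        .pool_max_idle_per_host(0) // never reuse idle connections
        .build()
}
```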
Fixes #29
I also adjusted the retry logic, since for some reason two consecutive calls to ReportClient.send_active_window_with_instance -> aw_client_rust::AwClient.heartbeat lead to connection closed before message completed. Waiting just 0.01s fixes this issue.