x11: send end of previous active window #31

Status: Open. Wants to merge 2 commits into main.

20 changes: 10 additions & 10 deletions watchers/src/report_client.rs
@@ -41,18 +41,18 @@ impl ReportClient {
         Fut: Future<Output = Result<T, E>>,
         E: std::error::Error + Send + Sync + 'static,
     {
-        for (attempt, &secs) in [1, 2].iter().enumerate() {
+        for (attempt, secs) in [0.01, 0.1, 1., 2.].iter().enumerate() {
             match f().await {
-                Ok(val) => return Ok(val),
-                Err(e)
-                    if e.to_string()
-                        .contains("tcp connect error: Connection refused") =>
-                {
-                    warn!("Failed to connect on attempt #{attempt}, retrying: {}", e);
-
-                    tokio::time::sleep(tokio::time::Duration::from_secs(secs)).await;
+                Ok(val) => {
+                    if attempt > 0 {
+                        debug!("OK at attempt #{}", attempt + 1);
+                    }
+                    return Ok(val);
                 }
-                Err(e) => return Err(e),
+                Err(e) => {
+                    warn!("Failed on attempt #{}, retrying in {:.1}s: {}", attempt + 1, secs, e);
+                    tokio::time::sleep(tokio::time::Duration::from_secs_f64(*secs)).await;
+                }
             }
         }

74 changes: 43 additions & 31 deletions watchers/src/watchers/x11_connection.rs
@@ -67,33 +67,38 @@ impl X11Client {
         })
     }

-    pub fn active_window_data(&mut self) -> anyhow::Result<WindowData> {
+    pub fn active_window_data(&mut self) -> anyhow::Result<Option<WindowData>> {
         self.execute_with_reconnect(|client| {
-            let focus: Window = client.find_active_window()?;
-
-            let name = client.get_property(
-                focus,
-                client.intern_atom("_NET_WM_NAME")?,
-                "_NET_WM_NAME",
-                client.intern_atom("UTF8_STRING")?,
-                u32::MAX,
-            )?;
-            let class = client.get_property(
-                focus,
-                AtomEnum::WM_CLASS.into(),
-                "WM_CLASS",
-                AtomEnum::STRING.into(),
-                u32::MAX,
-            )?;
-
-            let title = str::from_utf8(&name.value).with_context(|| "Invalid title UTF")?;
-            let (instance, class) = parse_wm_class(&class)?;
-
-            Ok(WindowData {
-                title: title.to_string(),
-                app_id: class,
-                wm_instance: instance,
-            })
+            let focus = client.find_active_window()?;
+
+            match focus {
+                Some(window) => {
+                    let name = client.get_property(
+                        window,
+                        client.intern_atom("_NET_WM_NAME")?,
+                        "_NET_WM_NAME",
+                        client.intern_atom("UTF8_STRING")?,
+                        u32::MAX,
+                    )?;
+                    let class = client.get_property(
+                        window,
+                        AtomEnum::WM_CLASS.into(),
+                        "WM_CLASS",
+                        AtomEnum::STRING.into(),
+                        u32::MAX,
+                    )?;
+
+                    let title = str::from_utf8(&name.value).with_context(|| "Invalid title UTF")?;
+                    let (instance, class) = parse_wm_class(&class)?;
+
+                    Ok(Some(WindowData {
+                        title: title.to_string(),
+                        app_id: class,
+                        wm_instance: instance,
+                    }))
+                }
+                None => Ok(None),
+            }
         })
     }
@@ -122,7 +127,7 @@ impl X11Client {
             .atom)
     }

-    fn find_active_window(&self) -> anyhow::Result<Window> {
+    fn find_active_window(&self) -> anyhow::Result<Option<Window>> {
         let window: Atom = AtomEnum::WINDOW.into();
         let net_active_window = self.intern_atom("_NET_ACTIVE_WINDOW")?;
         let active_window = self.get_property(
@@ -134,20 +139,27 @@
         )?;

         if active_window.format == 32 && active_window.length == 1 {
-            active_window
+            let window_id = active_window
                 .value32()
                 .ok_or(anyhow!("Invalid message. Expected value with format = 32"))?
                 .next()
-                .ok_or(anyhow!("Active window is not found"))
+                .ok_or(anyhow!("Active window is not found"))?;
+
+            // Check if the window_id is 0 (no active window)
+            if window_id == 0 {
+                return Ok(None);
+            }
+
+            Ok(Some(window_id))
         } else {
             // Query the input focus
-            Ok(self
+            Ok(Some(self
                 .connection
                 .get_input_focus()
                 .with_context(|| "Failed to get input focus")?
                 .reply()
                 .with_context(|| "Failed to read input focus from reply")?
-                .focus)
+                .focus))
         }
     }
 }
29 changes: 23 additions & 6 deletions watchers/src/watchers/x11_window.rs
@@ -15,18 +15,35 @@ impl WindowWatcher {
     async fn send_active_window(&mut self, client: &ReportClient) -> anyhow::Result<()> {
         let data = self.client.active_window_data()?;

-        if data.app_id != self.last_app_id || data.title != self.last_title || data.wm_instance != self.last_wm_instance {
+        let (app_id, title, wm_instance) = match data {
+            Some(window_data) => (
+                window_data.app_id,
+                window_data.title,
+                window_data.wm_instance,
+            ),
+            None => {
+                // No active window, set all values to "aw-none"
+                ("aw-none".to_string(), "aw-none".to_string(), "aw-none".to_string())
+            }
+        };
+
+        if app_id != self.last_app_id || title != self.last_title || wm_instance != self.last_wm_instance {
             debug!(
                 r#"Changed window app_id="{}", title="{}", wm_instance="{}""#,
-                data.app_id, data.title, data.wm_instance
+                app_id, title, wm_instance
             );
-            self.last_app_id = data.app_id.clone();
-            self.last_title = data.title.clone();
-            self.last_wm_instance = data.wm_instance.clone();
+            client
+                .send_active_window_with_instance(&self.last_app_id, &self.last_title, Some(&self.last_wm_instance))
+                .await
+                .with_context(|| "Failed to send heartbeat for previous window")?;
Owner commented:

> lead to connection closed before message completed. Waiting just 0.01 fixes this issue

I think it would be better to delay for a millisecond here rather than in run_with_retries. The retry code is more of an exception, while you're proposing a regular routine.

The way it's done now is more of a simplification and exists in the original code as well, which should be a fairly good measure, since changing window titles every second is not typical.

The most exact approach would be something like the idle watcher's, but the timing value may become less trivial.
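
For illustration, a minimal sketch of the alternative described above, reusing the ReportClient::send_active_window_with_instance call that already appears in this diff; the helper name send_window_transition and the 1 ms value are hypothetical, not part of the PR:

```rust
// Hypothetical helper, not part of this PR: keep run_with_retries untouched and instead put
// a tiny fixed delay between the "end of previous window" heartbeat and the current one.
use anyhow::Context;
use std::time::Duration;

async fn send_window_transition(
    client: &ReportClient,        // ReportClient from watchers/src/report_client.rs
    previous: (&str, &str, &str), // (app_id, title, wm_instance) of the previous window
    current: (&str, &str, &str),  // (app_id, title, wm_instance) of the current window
) -> anyhow::Result<()> {
    client
        .send_active_window_with_instance(previous.0, previous.1, Some(previous.2))
        .await
        .with_context(|| "Failed to send heartbeat for previous window")?;

    // Illustrative 1 ms pause so the two heartbeats do not land on the same instant.
    tokio::time::sleep(Duration::from_millis(1)).await;

    client
        .send_active_window_with_instance(current.0, current.1, Some(current.2))
        .await
        .with_context(|| "Failed to send heartbeat for active window")
}
```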

Owner commented:

The change would also be incomplete, because ideally it needs to encompass all the watchers for all environments. But doing that for the reactive KWin and Wayland watchers is not trivial and needs as complex a strategy as idle.

Contributor (author) replied:

> The change would also be incomplete, because ideally it needs to encompass all the watchers for all environments.

Isn't it better to at least have it for X11 than for no watcher?

> The most exact approach would be something like the idle watcher's, but the timing value may become less trivial.

I don't understand: why do we need idle-style tracking for reporting title change events? If we wanted it to be more accurate, I'd guess it would help to have a queue that we can dispatch to asynchronously, with a worker task sending the queued entries to the server synchronously (one by one, keeping order). Currently, if we have to retry for 2 s, it will delay the title events that were generated during those 2 s as well (I think?).

However, I am not sure whether the heartbeat API allows us to set a custom timestamp. E.g. if the worker wants to send entries from ~2 min ago, can it send the "end timestamp" for that heartbeat, or will it always be "now"? If the latter, I guess the only way for the worker to accurately record those entries afterward is to use the "insert event" API when another "title change" event already follows it (i.e. the queue is not empty after we take the current item; for the end timestamp it would likely have to peek at the next event in the queue without removing it).
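
A rough sketch of what that queue could look like with a tokio mpsc channel; WindowEvent and send_event_at are hypothetical names, and whether the server can accept an explicit timestamp is exactly the open question above:

```rust
// Hypothetical sketch: the watcher pushes timestamped events into a channel, and one worker
// drains it in order, so a slow or retried request delays delivery but not the timestamps.
use tokio::sync::mpsc;

#[derive(Debug)]
struct WindowEvent {
    timestamp: std::time::SystemTime, // recorded when the change was observed
    app_id: String,
    title: String,
    wm_instance: String,
}

async fn sender_worker(mut rx: mpsc::Receiver<WindowEvent>) {
    while let Some(event) = rx.recv().await {
        // send_event_at is a placeholder for whatever server call could take an explicit
        // timestamp (heartbeat or "insert event"); that is the unresolved question above.
        if let Err(e) = send_event_at(&event).await {
            log::warn!("Failed to report window event {:?}: {}", event, e);
        }
    }
}

async fn send_event_at(_event: &WindowEvent) -> anyhow::Result<()> {
    Ok(()) // placeholder
}
```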

Ah, is it that we currently don't react to title change events, but only check every ~1 s? In that case, how about we add an event handler?
https://unix.stackexchange.com/a/334293
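
For reference, a standalone sketch of what such an event handler could look like with the x11rb crate this watcher already uses: subscribe to PropertyNotify on the root window and wake up when _NET_ACTIVE_WINDOW changes. This is not wired into the watcher, and title changes of the same window would additionally need PROPERTY_CHANGE on the focused window for _NET_WM_NAME:

```rust
// Standalone sketch, not part of this PR: react to active-window changes instead of polling.
use x11rb::connection::Connection;
use x11rb::protocol::xproto::{ChangeWindowAttributesAux, ConnectionExt, EventMask};
use x11rb::protocol::Event;

fn watch_active_window() -> anyhow::Result<()> {
    let (conn, screen_num) = x11rb::connect(None)?;
    let root = conn.setup().roots[screen_num].root;
    let net_active_window = conn.intern_atom(false, b"_NET_ACTIVE_WINDOW")?.reply()?.atom;

    // Ask the X server to deliver PropertyNotify events for the root window.
    conn.change_window_attributes(
        root,
        &ChangeWindowAttributesAux::new().event_mask(EventMask::PROPERTY_CHANGE),
    )?;
    conn.flush()?;

    loop {
        // Blocks until the next event; _NET_ACTIVE_WINDOW changes whenever focus moves.
        if let Event::PropertyNotify(e) = conn.wait_for_event()? {
            if e.atom == net_active_window {
                // Here the watcher would re-read the active window and report it.
            }
        }
    }
}
```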

Owner commented:

> why do we need idle-style tracking for reporting title change events?

No, I meant complicated timing tracking, more complicated than the basic heartbeats. The idle watchers use such tracking.

> if we have to retry for 2 s, it will delay the title events that were generated during those 2 s as well

That's not a problem, because this is an exceptional situation that is not supposed to happen. Such a disconnect happens mostly on start.

> In that case, how about we add an event handler?
> https://unix.stackexchange.com/a/334293

I think this may be a good idea, and better than more complicated time tracking. The Wayland and KWin watchers are already reactive, and yes, I noticed once too that X11 can seemingly do that as well.

Contributor (author) replied:

> I think it would be better to delay for a millisecond here rather than in run_with_retries. The retry code is more of an exception, while you're proposing a regular routine.

The problem, though, is that I don't know what causes "connection closed before message completed"; it looks to me like a bug in one of the libraries we use. So we can't be confident that 0.01 s is enough (though it was in my tests).

Because of this, I think we'd need to write a new "run_with_retries", which seems suboptimal because of the code repetition. Maybe we could refactor run_with_retries so that there is a run_with_retries2(request, delays: list[float]), and run_with_retries calls it with sensible default values, while this X11 logic can use run_with_retries2 directly.

What do you think about this approach?
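
One possible shape for that refactor, sketched in Rust with a delay slice in place of list[float]; the name run_with_retries_delays and the default schedule are illustrative, and the real function's signature in the repo may differ:

```rust
// Hypothetical refactor sketch: a generic helper takes the delay schedule, and the existing
// entry point delegates to it with defaults, so the X11 path could pass its own delays.
use std::future::Future;

async fn run_with_retries_delays<F, Fut, T, E>(mut f: F, delays: &[f64]) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
    E: std::error::Error + Send + Sync + 'static,
{
    for (attempt, &secs) in delays.iter().enumerate() {
        match f().await {
            Ok(val) => return Ok(val),
            Err(e) => {
                log::warn!("Failed on attempt #{}, retrying in {:.2}s: {}", attempt + 1, secs, e);
                tokio::time::sleep(tokio::time::Duration::from_secs_f64(secs)).await;
            }
        }
    }
    // Final attempt after the delay schedule is exhausted.
    f().await
}

async fn run_with_retries<F, Fut, T, E>(f: F) -> Result<T, E>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
    E: std::error::Error + Send + Sync + 'static,
{
    // Defaults matching the schedule introduced in this PR.
    run_with_retries_delays(f, &[0.01, 0.1, 1.0, 2.0]).await
}
```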

Owner commented:

I don't think run_with_retries needs to deal with imprecise reporting; its responsibility is only the server connection, and it has been a mere fail-safe for an unavailable server in some edge cases. However, I like your idea about notifications from X11 much more; I think I even had that thought myself, but I never figured out whether it's possible (nor tried, to be honest).

> The problem, though, is that I don't know what causes "connection closed before message completed"; it looks to me like a bug in one of the libraries we use. So we can't be confident that 0.01 s is enough (though it was in my tests).

I would speculate that the server can't insert an event into the same place with the same time, so any minimal difference is sufficient.

Contributor (author) replied:

> I would speculate that the server can't insert an event into the same place with the same time, so any minimal difference is sufficient.

I believe the "connection closed before message completed" error is client-side, i.e. the server does not even see that connection.

What I found is this:
hyperium/hyper#2136 (comment)

Though I am not sure it applies here, since we await the response before starting another request. Maybe the FIN gets sent after the HTTP response is received; I am not sure.

+            self.last_app_id = app_id.clone();
+            self.last_title = title.clone();
+            self.last_wm_instance = wm_instance.clone();
+
         }

         client
-            .send_active_window_with_instance(&self.last_app_id, &self.last_title, Some(&self.last_wm_instance))
+            .send_active_window_with_instance(&app_id, &title, Some(&wm_instance))
             .await
             .with_context(|| "Failed to send heartbeat for active window")
     }