Problem
Cancelling and restarting a use_future task leaks connections, and it breaks down if this happens faster than about once per second. This means users double-clicking a button might leak connections, and users spamming the button will hang the functionality.
We start from the fullstack text stream example. We will change it as follows:
On the server we have a TextStream server function. We make a small change here: we check the tx.send call for an error (i.e. whether the stream was cancelled) and print a warning message.
On the front-end we have a consumer that opens the stream and reads from it with use_future.
The button is changed so that it now cancels the old future and starts a new one.
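For context, a minimal sketch of the modified server function. This is based on the fullstack text stream example from the Dioxus docs; the function name, message contents, and the exact TextStream/unbounded_send API are assumptions, not verified against the example source:

```rust
use dioxus::prelude::*;

// Sketch of the streaming server function. The only change from the
// example: we check the result of sending into the channel and print
// a warning when the client side has cancelled the stream.
#[server(output = server_fn::codec::StreamingText)]
pub async fn text_stream() -> Result<server_fn::codec::TextStream, ServerFnError> {
    let (tx, rx) = futures::channel::mpsc::unbounded();
    tokio::spawn(async move {
        let mut count = 0u32;
        loop {
            tokio::time::sleep(std::time::Duration::from_secs(1)).await;
            count += 1;
            // unbounded_send fails once the receiver has been dropped,
            // i.e. once the stream has been cancelled.
            if let Err(err) = tx.unbounded_send(Ok(format!("message {count}"))) {
                eprintln!("warning: stream cancelled: {err}");
                break;
            }
        }
    });
    Ok(server_fn::codec::TextStream::new(rx))
}
```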
Steps To Reproduce
on the front-end, read the text stream inside use_future, and make the button cancel and restart the use_future
click slowly; observe cancellations in the browser tools network tab
spam-click the button (a few clicks per second should do)
connections are leaked
if you click fast enough, the whole page stops working
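The front-end side of the steps above might look roughly like this. This is a sketch against the Dioxus 0.6 hook API; the component name, the text_stream server function, and the into_inner call are illustrative assumptions:

```rust
use dioxus::prelude::*;
use futures::StreamExt;

#[component]
fn StreamReader() -> Element {
    let mut text = use_signal(String::new);

    // Open the server stream and append each chunk to `text`.
    let mut fut = use_future(move || async move {
        if let Ok(stream) = text_stream().await {
            let mut stream = stream.into_inner();
            while let Some(Ok(chunk)) = stream.next().await {
                text.write().push_str(&chunk);
            }
        }
    });

    rsx! {
        // Cancelling and immediately restarting is what leaks
        // connections when clicked more than about once per second.
        button {
            onclick: move |_| {
                fut.cancel();
                fut.restart();
            },
            "Restart stream"
        }
        p { "{text}" }
    }
}
```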
Expected behavior
all connections except the last one are closed, regardless of how much the button is spammed or how many hot-reloads are triggered in a short time
Screenshots
When we click the button slowly (one click per second max), cancellation works:
If we spam the button with the mouse, it doesn't cancel the requests anymore:
Instead, they are closed 5 min later when I close the browser tab:
Environment:
Dioxus version: 0.6.0-rc.0
Rust version: 1.82
OS info: Windows 11, Firefox 133
App platform: web + fullstack
Edit - Workaround (sort of)
Use a broadcast channel to send a cancellation message to all the older futures. Use another async channel to get a confirmation back that the old future indeed got the message. Only then restart the future. Not doing both of these results in leaks/breakage.
Spam-clicking the button now gets hung up on the recv() call, which unblocks only when the old task is dead. Is this blocked async closure leaked forever? Still, better than leaking whole streams.
Anyway, this solution breaks down when you increase the server's stream message interval to 5 seconds: the connection is closed in the browser only after the server closes it, and it's closed on the server only after a write fails -- so we end up leaking connections until there is more traffic on the old connections we wanted cancelled.
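The cancel-then-confirm handshake described above can be sketched outside of Dioxus with plain tokio channels. This is illustrative only; in the real app, the spawned task stands in for the body of the old use_future:

```rust
use std::time::Duration;
use tokio::sync::{broadcast, mpsc};

#[tokio::main]
async fn main() {
    // Broadcast channel: one cancel message reaches every old task.
    let (cancel_tx, _) = broadcast::channel::<()>(8);
    // Confirmation channel: the old task reports back before we restart.
    let (done_tx, mut done_rx) = mpsc::channel::<()>(8);

    // The "old" task: keeps reading the stream until told to cancel.
    let mut cancel_rx = cancel_tx.subscribe();
    tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = cancel_rx.recv() => {
                    // Confirm cancellation, then stop polling the stream.
                    let _ = done_tx.send(()).await;
                    break;
                }
                _ = tokio::time::sleep(Duration::from_millis(50)) => {
                    // ... read the next chunk from the server stream ...
                }
            }
        }
    });

    // The "button click": signal cancellation, wait for confirmation,
    // and only then restart the future.
    let _ = cancel_tx.send(());
    done_rx.recv().await.expect("old task confirmed shutdown");
    println!("old task stopped; safe to restart");
}
```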
The stream does cancel properly on desktop + fullstack. I think the gloo implementation of server_fn returns a future that is not cancel safe somewhere. Bumping the version of the server function crate to 0.7 seems to fix this issue. You can test the fix with:
Edit 2 - another problem
Even with the crutch above, it's still very easy to leak connections. All you have to do is have hot-reloading on and spam "save" in a source file:

Edit 3 - conflicting docstrings
So I'm running let mut fut = use_future(...); fut.cancel(). The cancel method has the docstring /// Forcefully cancel a future. It then calls Task::cancel, which has the docstring /// This does not abort the task, so you'll want to wrap it in an abort handle if that's important to you -- that doesn't sound like it does what the previous docstring says?
By "abort handle" I eventually found futures::stream::Abortable, which I tried, but it didn't abort the connections.
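For reference, the abort-handle pattern that the Task::cancel docstring hints at looks roughly like this (a sketch using the futures crate's Abortable, here via the futures::future path since we wrap a future; as noted above, this did not actually close the connections in this case):

```rust
use futures::future::{AbortHandle, Abortable, Aborted};

#[tokio::main]
async fn main() {
    let (abort_handle, abort_registration) = AbortHandle::new_pair();

    // Stand-in for the stream-reading future; never completes on its own.
    let task = tokio::spawn(Abortable::new(
        std::future::pending::<()>(),
        abort_registration,
    ));

    // Later, e.g. from the button handler:
    abort_handle.abort();

    // The wrapped future resolves to Err(Aborted) instead of running on.
    assert!(matches!(task.await.unwrap(), Err(Aborted)));
}
```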
I also tried:
tokio::task::spawn -- won't compile because signals aren't Send
tokio::task::spawn_local -- after adding the rt feature, running it panics with "spawn_local called from outside of a task::LocalSet or LocalRuntime". Couldn't figure out what to do from there
async_std::task::spawn_local -- the future can't access the Dioxus context. Also, cancel doesn't kill all the closures/connections either
wasm_bindgen_futures::spawn_local -- doesn't even have a cancel/abort handle method