Allow for a streaming value with #server. #1284
I'm open to implementing streaming server functions, but I've heard several different proposals from people imagining that they mean several different things, which makes me cautious; for example, people assume quite different behaviors from a server function that returns a stream. This kind of use case can currently be done by creating an API route in your server framework of choice that returns a streaming response, which could then be consumed with an ordinary streaming fetch on the client. Integrating this directly into the server-function macro is the part that is unclear.
Hello, forgive the ambiguity. The request was for a stream to be consumed within a `Suspense` tag. You don't even have to automate the consumption: you could implement a server fn that returns an `impl Stream` and let the user handle the sink internally within `Suspense`.
Could you give an example of how that would look and how it would behave? The way I'd typically consume a stream is

```rust
let mut stream = some_stream;
while let Some(data) = stream.next().await {
    // do something with data
}
```

How does that translate into a view?
Feel free also to create an imaginary API to illustrate it.
Hello, I initially was thinking of streaming an image, such that chunks of a `Vec` are returned. Think of images that render partially or, even better, videos that are streamed. I don't actually think that `Suspense` should automatically resolve the stream; the implementation should be left to the user. For example, would it be possible to implement `render_stream_chunked` and `render_stream_whole` fns that would control the flow within the `Suspense` tag? This could also be handled via a signal, with the `while let Some(...)` loop. This would mean that the user would control both points one and two that you mentioned before. If you wanted to implement it via a tag, maybe:

```rust
<Stream
    fallback=|| ()
    completed=bool
    action=(server_fn -> impl Stream)
    map=some_signal
>
    {move ||
        view! { cx,
            <img src=base_64_enc_uri(some_signal) />
        }
    }
</Stream>
```

Could it be initially trialed by allowing `#[server]` functions to return streams, and letting users handle them? I ran into problems trying to return an `impl Stream` from a server fn, due to the serialisation requirement.
Another use case for streams to consider, which I would really love to see handled, is LLM generations: in that case, I want the fallback only before the first message, and then to show the partially computed result, the way a ChatGPT stream looks.
My use case for streaming server functions would be search results, which I would like to present as they come in (one by one) instead of having to wait for the full list. This could of course be done, as you say, using WebSockets or SSE; but when everything else can be done with server functions, it feels sort of strange to step outside of that world and the utility functionality I've built specifically for server functions.
Another use case is some sort of progressive enhancement where the server can give a "rough estimate" nearly instantly, then progressively update to a better value over the next few seconds. |
Hello, if this is any help: you could use `futures::channel`, whose receiver already implements `Stream`, within the `#[server]` function, like the following:

```rust
#[post("/reportstream")]
pub async fn report(payload: web::Payload) -> impl Responder {
    let (mut tx, rx) = futures::channel::mpsc::unbounded::<
        std::result::Result<actix_web::web::Bytes, std::io::Error>,
    >();
    // ...
    tokio::task::spawn(async move {
        // `result` is the stream from some call made earlier
        while let Some(message) = result.next().await {
            let message = match message {
                Ok(message) => {
                    if let Some(message) = message.choices[0].delta.content.clone() {
                        message.clone()
                    } else {
                        String::new()
                    }
                }
                Err(e) => e.to_string(),
            };
            let _ = tx.start_send(Ok(Bytes::from(message)));
        }
    });
    HttpResponse::Ok().streaming(rx)
}
```

The channels already implement `Stream`. You would have to make the macro a wrapper around the channel, although I do not know the specifics of your current implementation.
Streaming responses from server functions were added in #2158.
Is your feature request related to a problem? Please describe.
No.
Describe the solution you'd like
Currently, the `#[server]` macro does not allow returning a stream type, due to the requirement to implement `Serialize`.
Take the following server fn as an example:
A solution would be to stream the output of the return value.
Say I had a rather large `Vec<_>` to return; a streaming return value would be very useful in this case.
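The example referenced above did not survive extraction; a hypothetical sketch of the kind of server fn that hits this wall might look like the following (assuming Leptos's `#[server]` macro; this intentionally does not compile, because `impl Stream` is not `Serialize`):

```rust
// Hypothetical, illustrative only: rejected by the macro because the
// return type must implement `Serialize`, and `impl Stream` does not.
#[server(LargeData, "/api")]
pub async fn large_data() -> Result<impl Stream<Item = Vec<u8>>, ServerFnError> {
    // ...
}
```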