
Load tester: latency is not being reported anymore #349

Open
avivace opened this issue Jul 4, 2024 · 6 comments


avivace commented Jul 4, 2024

For some reason, mentions of latency were removed from the code, and the load tester no longer reports it (despite the examples in the documentation).

This removal was never mentioned in any changelog I could find in the GitHub releases of this repository.

A release I could find that still has it is 0.6.0:

```go
tracks, latency, oooRate, dropRate := livekit_cli.GetSummary(testers)
```

Any information on this?

rektdeckard (Contributor) commented Jul 8, 2024

It's unclear why latency reporting was removed, but we can look into restoring this functionality 👍

avivace (Author) commented Jul 9, 2024

Thanks a lot @rektdeckard for taking a look at this!

davidzhao (Member) commented
I think the previous method of reporting latency was a bit hacky (encoding publishing time in the payload). To do this correctly, we should look at the sender reports.

avivace (Author) commented Jul 28, 2024

> I think the previous method of reporting latency was a bit hacky (encoding publishing time in the payload). To do this correctly, we should look at the sender reports.

Hi @davidzhao, if you're not already working on this internally, could you point me to how it was done before and expand on how it should be done now? I could try to take a look and send a draft PR.

rektdeckard (Contributor) commented
@avivace We're exploring what accurate performance tracing would entail, but suffice it to say that it touches several components. Anything quick would likely be inaccurate (the previous metrics included local processing times and were a simple averaging of all tracks).

Can you tell us a bit more about your use case here? Are you checking coarse e2e latency just as a smoke test, or are you relying on it more concretely? What other metrics would you like to see in an ideal case?

avivace (Author) commented Sep 18, 2024

> @avivace We're exploring what accurate performance tracing would entail, but suffice it to say that it touches several components. Anything quick would likely be inaccurate (the previous metrics included local processing times and were a simple averaging of all tracks).
>
> Can you tell us a bit more about your use case here? Are you checking coarse e2e latency just as a smoke test, or are you relying on it more concretely? What other metrics would you like to see in an ideal case?

@rektdeckard thanks a lot for looking into this. To be honest, at the moment we want this metric mainly as a measure of infrastructure health/status, but we may also want to rely on it in specific setups where we take an action if latency exceeds a threshold (e.g. unsubscribing/muting when people are on the same track but also physically in the same room).
