Hi,
I'm one of the developers of an online IDE (Utopia), which is by its nature quite a heavyweight webapp: it needs to handle the expected IDE behaviours and render the React application being edited, all whilst providing ways to make changes to that application via a form of canvas. One of our primary requirements is of course for the webapp to be highly performant, but we've always struggled to find a way to automate performance testing in a meaningful and reliable way. Our best attempt so far has been to use Puppeteer scripts to capture timed recordings of common interactions and compare those recordings over time, but even when running those tests on dedicated hardware we have found the variance is so high as to make the comparison almost meaningless.
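For context, a trimmed-down sketch of what those timing scripts do today might look like the following (the URL and selector are placeholders rather than our actual test setup):

```js
const puppeteer = require('puppeteer');

// Time a single "common interaction" by wall-clock duration. Comparing these
// durations across runs is what turned out to be too noisy for us, even on
// dedicated hardware.
async function timeInteraction(url, interaction) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });

  const start = Date.now();
  await interaction(page);
  const elapsedMs = Date.now() - start;

  await browser.close();
  return elapsedMs;
}

// Placeholder interaction: select an element on the canvas and nudge it.
timeInteraction('https://staging.example.com/project', async (page) => {
  await page.click('#canvas-element');
  await page.keyboard.press('ArrowRight');
}).then((ms) => console.log(`interaction took ${ms}ms`));
```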
As I understand it, the "Future Goals" section of this proposal (specifically CPU utilisation, and potentially other hardware resource consumption) would give us a way to measure the impact of those common interactions such that we could compare the measurements between an open PR and the currently deployed production version of our application much more reliably.
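The issue doesn't spell out the API shape, so the following is only a rough sketch of how we'd sample the signal from inside the app under test, assuming something shaped like the Compute Pressure API's PressureObserver; note that today that observer reports coarse pressure states rather than the raw utilisation figure mentioned above, which is precisely the "Future Goals" part we're interested in.

```js
// Runs inside the page under test (e.g. injected by the test harness).
const samples = [];

const observer = new PressureObserver((records) => {
  for (const record of records) {
    // record.state is one of 'nominal' | 'fair' | 'serious' | 'critical'
    samples.push({ state: record.state, time: record.time });
  }
});

await observer.observe('cpu');

// ... drive the interaction we want to measure ...

observer.disconnect();
// `samples` can then be serialised and pulled out of the page by the harness.
```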
Our likely approach to using this would be (upon the opening of a new PR in our repo; a rough sketch of the measurement steps follows the list):
1. Load the production version of the application (via a Puppeteer script)
2. Measure the CPU load whilst the application is idle
3. Perform a set of common interactions, taking measurements of the CPU load at various points during those interactions to calculate their effect on it
4. Deploy the PR branch's code to a staging environment
5. Repeat the same measurements against that environment
6. Compare and chart them, adding the chart to the PR
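To make steps 1–3 concrete, here's a rough sketch of what the measurement side could look like with Puppeteer today, using page.metrics() (whose cumulative TaskDuration gives a crude CPU-time proxy) as a stand-in for the proposal's CPU utilisation signal; the URL, selector and interaction are placeholders, not our real setup.

```js
const puppeteer = require('puppeteer');

// Crude CPU proxy: change in cumulative browser task time divided by change in
// wall-clock time, both taken from page.metrics() (values are in seconds).
async function cpuShare(page, action) {
  const before = await page.metrics();
  await action();
  const after = await page.metrics();
  return (after.TaskDuration - before.TaskDuration) /
         (after.Timestamp - before.Timestamp);
}

async function measure(targetUrl) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(targetUrl, { waitUntil: 'networkidle0' }); // step 1

  // Step 2: CPU load whilst idle -- just wait and see how much task time accrues.
  const idle = await cpuShare(page, () => new Promise((r) => setTimeout(r, 5000)));

  // Step 3: CPU load during a common interaction (selector is a placeholder).
  const interaction = await cpuShare(page, async () => {
    await page.click('#canvas-element');
    await page.keyboard.type('hello');
  });

  await browser.close();
  return { idle, interaction };
}

// Run against production, then against the staging deployment, and compare.
measure('https://production.example.com/project')
  .then((results) => console.log(JSON.stringify(results, null, 2)));
```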
Thanks @Rheeseyb for this use case description. We'll look at this use case more closely once the v1 has solidified and will reach out to you for more information as needed.
kenchris changed the title from "Use Case: Automated performance regression testing in Utopia" to "Use-case: Automated performance regression testing in Utopia" on Nov 9, 2023