Skunkworks: Upload Lighthouse Scores to Atlas #81
Conversation
Wait... I just added some complexity and need to do one more round of work. Wait to review! Sorry!
Okay, ready to review now!!
thanks for doing this @mmeigs! most of this looks great
i think we could use more comments / small explanations for people who are unfamiliar with LHRs. we could also DRY up the code where there are subtle differences between the desktop/mobile runs (see comments below)
excited for the front end portion that displays these reports. downloaded the html version of the runs stored in the db and it looks great 👍
also wanted to verify:
> But the long and short of it is the LHCI library is very brittle and does not have a useful API, especially for our use case and the reality of Kanopy pods being periodically shut down without warning.
so is the plan to do away with our lhci server, judging from these commits on the snooty workflow branch? or to read from db instead of reading from disk?
```js
try {
  const outputsFile = (
    await readFileAsync('./lhci/manifest.json')
```
minor. can we have `./lhci` as an optional env variable? it seems like we define it in the job step. define it in the workflow instead and use it for 1) the lhci output and 2) the db upload.
we can also define `manifest.json` as a const, since it is the default filename for lhci outputs. CMIIW pls
Hey Matt! Thank you for your work on this. Just a few comments, but nothing blocking
Great question. So, my main skunkworks idea was creating our own lighthouse app with a backend that reads directly from the db and a frontend that can literally be whatever graphs we want it to be. I'll admit I didn't get as far with the frontend as I wanted, but I did want to get this PR in first so that, no matter what, when I find time to create a passable frontend, it will automatically have all the data from whenever we merge this PR.
Okay, I've added a few more averages to the summary in the documents saved to Atlas. This is so that we can easily have the most important scores averaged ahead of time for when the frontend is ready to ingest them. Feel free to glance at the first run's output in the Atlas collection.
this LGTM! thanks for working on this!
is there a front end portion for this? i think it'll be super useful to work on for https://jira.mongodb.org/browse/DOP-4727
This reverts commit f82d8bc.
Context
Before Skunkworks, @seungpark found that the Docs Lighthouse Server was broken. The two of us poked around, bugged Kanopy, and tried out some fixes. But the long and short of it is the LHCI library is very brittle and does not have a useful API, especially for our use case and the reality of Kanopy pods being periodically shut down without warning.
For Skunkworks, I wanted to make a streamlined version of this system for DOP so we wouldn't have to rely on patching and tricking a system that doesn't seem extensible.
tldr;
I am currently working on a full-stack app that would provide us a UI to view our Lighthouse scores and compare them.
Ideally we will be able to:
That is all still very much in development.
BUT to do any of that, we must have access to our Lighthouse scores and reports.
This custom action will upload our Lighthouse reports and metadata to a new Atlas database, `lighthouse`. Currently, only a small number of reports are uploaded, under the `pr_reports` collection.
Database
The `lighthouse` database will have two collections for now: `pr_reports` and `main_reports`. `main_reports` will hold the Lighthouse scores for each commit to the `main` branch, i.e. production. `pr_reports` will hold the Lighthouse scores for each PR commit in Snooty.

I will be setting up a TTL deletion strategy on the `pr_reports` collection because these reports will only be needed for the length of a PR's lifecycle, which is usually quite short. I might set this to 3 months.

Documents
Lighthouse scores are notoriously fickle. Most systems run them multiple times and average the findings. This is the strategy I am going for.
We run Lighthouse 3 times for each URL (some of this logic lives in Snooty). This action averages the five overarching category scores across those runs. It also saves the JSON and HTML representations of all 3 runs.
This is so that: 1. our UI can link runs to the full HTML reports, which are useful to look through and make use of all the data, and 2. our UI has access to the JSON format so it can easily pull out the data we care about and graph it as we see fit.
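As a concrete sketch of the averaging described above, assuming the standard Lighthouse JSON shape, where each report exposes its top-level scores under `categories.<id>.score` (a value between 0 and 1); the helper name is illustrative:

```javascript
// The five overarching Lighthouse category ids.
const CATEGORY_IDS = [
  'performance',
  'accessibility',
  'best-practices',
  'seo',
  'pwa',
];

// Given the parsed JSON of several runs for one URL, average each
// category's score across the runs.
function averageScores(runs) {
  const summary = {};
  for (const id of CATEGORY_IDS) {
    const scores = runs.map((run) => run.categories[id].score);
    summary[id] = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  }
  return summary;
}
```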
Finally
This action is going to be implemented first so that we can start saving the data needed for the UI. Once confirmed, I will open a PR to Snooty to add this action to our github workflow! Then, in my free time I will work more on the app that will display these reports.
Reference to an action run: https://github.com/mongodb/snooty/actions/runs/9164674780
Reference to database collection: https://cloud.mongodb.com/v2/5bad1d3d96e82129f16c5df3#/metrics/replicaSet/5d40b1bccf09a2026cbad969/explorer/lighthouse/pr_reports/find