
Rethink testing strategy #1019

Open
3 of 5 tasks
ph-fritsche opened this issue Aug 3, 2022 · 19 comments
Labels
enhancement New feature or request

Comments

@ph-fritsche
Member

ph-fritsche commented Aug 3, 2022

Problem description

  1. We run linting and tests on multiple node versions that we support according to

    user-event/package.json

    Lines 13 to 14 in ea50231

    "engines": {
      "node": ">=12",

    This has no merit.
    Our source code doesn't depend on any Node.js API, so the result of linting and testing the source itself will always be the same.
    The tested code is not the distributed code, so if the different Node.js versions yield different results, that only flags problems with our testing environment and doesn't indicate anything about the build we would distribute from that commit.

  2. We don't test our build. Misconfigurations or bugs in our build tools result in broken builds being published and any potential fix is verified only by manually testing the build.

  3. We only test on Jsdom. We try to be platform agnostic, but our automation doesn't help us develop software that works as intended in different environments, and the differences between Jsdom and e.g. Chrome are too numerous and significant to rely on manual testing alone.

Suggested solution

  • Remove source tests on older node versions.
  • Refactor the tests so that they don't rely on Jest and could be run in different environments.
  • Run the tests in a headless browser during CI.
  • Split up our tests into
    a) those that rely on mocks or test internals and
    b) those that only interact with the DOM and can be observed using only the main export.
  • Add at least a smoke test on different node versions using the build.

Additional context

No response

@ph-fritsche ph-fritsche added enhancement New feature or request needs assessment This needs to be looked at by a team member and removed needs assessment This needs to be looked at by a team member labels Aug 3, 2022
@ph-fritsche
Member Author

/cc @nickmccurdy @timdeschryver @MatanBobi

@ph-fritsche ph-fritsche pinned this issue Aug 3, 2022
@nickserv
Member

nickserv commented Aug 3, 2022

I was thinking of getting some Testing Library projects running on test frameworks other than Jest. Vitest would probably be a good one to focus on, as it's fast, easy to configure, mostly compatible with Jest, and supports enabling/disabling global injection.

@ph-fritsche
Member Author

Can you leverage Vitest to run tests in a headless browser?

@MatanBobi
Member

@ph-fritsche great initiative, I highly agree with all of the points.
Small thing about 3, I agree that the differences between JSDOM and Chrome are enormous (don't forget about happy-dom too), but our code is based on the standards and not on a specific engine implementation. If we'll also run tests using happy-dom for examples, some of our tests won't work and that's because happy-dom has some missing implementation.
So what's our suggestion here? Should we run all our tests on a headless browser too? Running our tests on JSDOM is nice because a big part of our users are using JSDOM to run our library.

@ph-fritsche
Member Author

I don't know if we should run (some?) tests in happy-dom too, but I think we need to run our tests in a headless browser.
In the end the browser has been our point of reference all the time.

Some things are hard to test in a browser and also don't need to be tested twice - like the tests on correctly wiring the APIs through .setup().

There might be exceptions due to limited implementation in the environment, but in general a test using user-event should at least work both in Jsdom and headless Chrome - so our tests should run in those too.
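
As an illustration of the mock-reliant category, a wiring test like the `.setup()` one mentioned above might look roughly like this (the `pointer` internal and the hand-rolled spy are hypothetical stand-ins for the real internals and `jest.fn()`):

```javascript
// Hand-rolled spy standing in for jest.fn() (illustration only).
function spy(impl) {
  const calls = []
  const f = (...args) => {
    calls.push(args)
    return impl ? impl(...args) : undefined
  }
  f.calls = calls
  return f
}

// Hypothetical internal API surface; the real internals differ.
const internals = { pointer: spy(() => {}) }

// A "wiring" test only verifies that the public method delegates to the
// internal implementation -- which is exactly why it needs the mock and
// can't be expressed purely through the DOM and the main export.
function setup(api) {
  return { click: (element) => api.pointer({ target: element }) }
}

const user = setup(internals)
user.click('button')
console.log('pointer called:', internals.pointer.calls.length)
```

Tests in category b), by contrast, would only dispatch through the public API and assert on the resulting DOM state, so they can run unchanged in Jsdom or a headless browser.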

@ph-fritsche
Member Author

ph-fritsche commented Aug 31, 2022

A little update here: I'm trying to make this work without rewriting too much of the tests.

The current approach is Karma+Jasmine with Jest's expect, spyOn and fn, so that any tests without mocking can stay the same; this also makes it more likely that the test environment can be reused in other repos.
Jasmine's types are a mess, though, and I'm not fond of maintaining those types by hand myself, even if they're unlikely to change. So maybe this isn't practical.

Providing the transpiled files is already resolved. Letting Karma manage the files and adding a preprocessor was slow and inflexible. I wrote a little tool using chokidar and rollup (with @swc/core as transpiler) that makes it easy to provide the necessary transpiled modules. It supports modules with ts paths and modules with Node dependencies (replacing those with stubs). A Karma plugin can then update the file list when files change and serve them from memory.
As that's too much code out of scope of this library, I started a repo for this at https://github.com/ph-fritsche/toolbox
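
The serve-from-memory idea can be reduced to something like this sketch (the type-stripping "transpile" is a crude placeholder for what @swc/core actually does):

```javascript
// In-memory module cache: maps source paths to transpiled output so a
// test-runner plugin can serve files without touching disk per request.
const cache = new Map()

// Placeholder transpile step: naively strips simple type annotations.
// A real setup would call @swc/core here instead.
function transpile(source) {
  return source.replace(/: \w+/g, '')
}

// Called by the file watcher whenever a source file changes.
function update(filePath, source) {
  cache.set(filePath, transpile(source))
}

update('src/index.ts', 'export const n: number = 1')
console.log(cache.get('src/index.ts'))
```

The watcher (chokidar in the description above) only needs to call `update` on change; the runner plugin reads from `cache` instead of the filesystem.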

@nickserv
Member

That sounds like an interesting approach. Is your goal to have a more minimal test environment, or specifically ensure Karma and Jasmine are supported? If the former, I think it could be easier to use Vitest, optionally with globals disabled.

@ph-fritsche
Member Author

ph-fritsche commented Aug 31, 2022

The goal is to run tests in at least one major browser (preferably Chrome) and in Jsdom.
These tests should be run from and reported to the CLI so that you can just use e.g. the embedded console in VS Code.

Inspecting a test breakpoint in the browser console or stepping through it with a debugger would be nice, but isn't strictly necessary.
I'd like to be able to provide the tests in a manner that someone could just build them, add an adapter, and run them against their environment to check for differences in the DOM implementation, but this also isn't strictly necessary.

Neither Karma nor Jasmine is a requirement. The more code I read, the more I think that by the time I've understood the necessary configurations and plugin hooks, I could have written the runner myself.

@nickserv
Member

I've had success with using Karma for that sort of thing in the past, but maybe a more modern alternative like Cypress or Playwright would be easier to set up now.

@ph-fritsche
Member Author

Karma looks great, as it is built on a plugin system. But there are no types, and the dependency injection makes it really hard to identify which interfaces are being used and which way of changing some detail would be the "correct" one without causing undocumented side effects.
E.g. the best solution for adding files without breaking the interaction with other plugins seemed to be monkey-patching the files getter on the fileList implementation 🤦.

If I understand their documentation correctly, Vitest runs in Node with Jsdom or Happy-dom.
Cypress and Playwright only run in the browser, and would again couple our tests to a specific test environment.

@nickserv
Member

I'm aware, but maybe we could write an abstraction layer that could run tests in something like Vitest and something like Cypress.

@ph-fritsche
Member Author

Neither Karma nor Jasmine is a requirement. The more code I read, the more I think that by the time I've understood the necessary configurations and plugin hooks, I could have written the runner myself.

Little update here:
I've dug through a lot of code, tried a few things, and reached a dead end every time. There is a multitude of specific problems, but I think it boils down to the following:
Every test framework has a strong paradigm for how the test code ends up in the process that executes it. Both sending the code to the process with the target environment (Node or a browser, but not both) and executing the test code are part of the respective test runner, and they aren't compatible.
So I've written the tools to run tests in Node and Chromium, and it seems to work. There is still some work left to do. If someone wants to help, I'll prioritize documenting it; otherwise I'll try to resolve some remaining questions and then open a PR to adjust our tests and add that test environment to our CI.
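
A minimal sketch of the decoupling described above: a runner core that only registers and executes test callbacks and reports results, while how the code reaches Node or a browser is left entirely to the harness (all names here are illustrative, not the actual toolbox API):

```javascript
// Runner core: knows nothing about delivery (bundling, serving to a
// browser, spawning Node) -- only registration, execution, reporting.
const tests = []

function test(name, fn) {
  tests.push({ name, fn })
}

async function run(report) {
  for (const { name, fn } of tests) {
    try {
      await fn()
      report(name, 'pass')
    } catch (e) {
      report(name, 'fail', e)
    }
  }
}

// The test files themselves only use `test`; the environment-specific
// harness decides how to invoke `run` and where the report goes.
test('addition', () => {
  if (1 + 1 !== 2) throw new Error('math is broken')
})
run((name, result) => console.log(`${name}: ${result}`))
```

With this shape, the same test code can be bundled for a browser page or required directly in Node; only the `report` callback and the delivery mechanism differ per environment.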

@MatanBobi
Member

@ph-fritsche Is there something still open here? :) Need a hand?

@ph-fritsche
Member Author

@MatanBobi Yes, your help is much appreciated. We need to complete #1091 by fixing, or at least explaining, the different results when running our tests in Chrome. If we can fix them, we can let the CI step fail on errors and make sure that other PRs don't cause regressions in compatibility with in-browser tests.

@MatanBobi
Member

@ph-fritsche It looks like the validate step there succeeded and the logs aren't retained anymore because the run is probably too old. I can't find a way to re-run that action though, am I missing something?

@ph-fritsche
Member Author

@MatanBobi (After fixing the linting errors triggered by updated deps,) here's a new report.

@Christian24

I am late to the party, but is there something I can do @ph-fritsche?

@artursvonda

Hey! Would love to help with this as well. I set up a very rudimentary example where we run tests against the happy-dom environment (artursvonda#1), but would love some help on how to set this up properly. Initially we'll probably need to set it up in a way that allows these tests to fail on happy-dom, as there's work to be done on both sides (user-event and happy-dom) until we get to green. I already opened a ticket on happy-dom for XPath: capricorn86/happy-dom#1125

Let me know if this should be opened as a separate issue.

@Christian24

Not sure if this helps, but I just stumbled upon this: https://github.com/material-components/material-web/blob/main/testing/harness.ts It seems Google has invested a good bit of effort into simulating clicks and the like. I'd imagine the one for Angular Material is even more extensive.
