
A vitest example #437

Draft · wants to merge 4 commits into develop

Conversation

CocoisBuggy (Contributor)

All but one of the tests are passing. I just thought I would share the outcome so my issue doesn't seem so arbitrary:

Headlines

  • Jest, run in band: 77s–90s
  • Vitest threaded pool: 7s–11s (37.68 seconds of execution time if you sum every test runner's execution time together, but since they run concurrently this is not what users experience); see the pool-selection sketch just below this list
  • Vitest forked pool: 12s–14s. A straight downgrade from threads, but still quick.
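For reference, here is the pool-selection sketch: a minimal vitest.config.ts showing how the threaded vs. forked pool is chosen. The file name and option values are assumptions for illustration, not this repository's committed configuration.

```ts
// Minimal sketch of pool selection — assumed options, not the project's actual config.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // 'threads' runs test files in worker_threads; 'forks' uses child processes.
    // The threaded pool produced the 7s–11s runs above, the forked pool the 12s–14s runs.
    pool: 'threads',
    poolOptions: {
      threads: {
        // Setting this to true would approximate Jest's run-in-band behaviour.
        singleThread: false,
      },
    },
  },
})
```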

It's roughly a 10x improvement, which frankly is not quite what I had hoped to achieve, but the tests are roughly as fast as I'd like them to be when running in a --watch context.

Objectives

Tests should run faster, and --watch should be a little more usable, to tighten up development feedback loops. Ideally, I'd also like the test runners to have much less noise, since we use a LOT of code to scaffold test state that is not actually part of the state under examination in a given test. This slows comprehension and makes the test files very bulky and cumbersome.

Remaining issues

Obviously, one of the tests is still not passing, so... that's not good. It's a history test, and as soon as you have forked control flows, parallel testing gets a little tricky. The reason I stopped debugging the test is that I could not properly rationalize what the intended behavior was supposed to be. I need to think longer about the changelog system we have, but I thought I'd leave this here to explain #420 a little bit. We do not need to pursue this PR; it is not a priority at the moment.

I believe the fail() pattern came from Jasmine down to Jest in the first instance, though I am not sure. In any event, it leads to some strange testing behavior in which you begin to re-invent the framework from the inside out.

For this reason it seems the Vitest team has decided not to take a pass at including it, since it technically breaks containment even in the context of Jest testing.

I have done my best to take the time needed to ascertain which of these fail() calls should be asserts, and which should use the standard expectations API.
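To illustrate the kind of rewrite involved, here is a hypothetical test (not one from this codebase; doSomething is a made-up stand-in) showing a fail()-in-catch pattern expressed directly with the expectations API instead:

```ts
import { describe, it, expect } from 'vitest'

// Hypothetical stand-in for real application code.
async function doSomething(input: string): Promise<void> {
  if (input === 'bad') throw new RangeError('bad input')
}

describe('rewriting fail()', () => {
  // Old Jasmine/Jest style: fail() marks the branch that should never be reached.
  //
  //   it('rejects bad input', async () => {
  //     try {
  //       await doSomething('bad')
  //       fail('expected doSomething to throw')
  //     } catch (err) {
  //       expect(err).toBeInstanceOf(RangeError)
  //     }
  //   })

  // Vitest style: the expectations API states the same intent directly.
  it('rejects bad input', async () => {
    await expect(doSomething('bad')).rejects.toBeInstanceOf(RangeError)
  })
})
```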
I have maintained the file-sequential execution style (which I will attempt to remove next). Tests currently run through in ~99 seconds or so. This is much longer than it needs to be, and so this is where the Vitest optimizations really get an opportunity to shine. The test speed is up roughly 10x, but my expectation is that it should be able to come down further than that. The beauty of the way test containment is implemented here is that there is not much need for isolation, which suits our use case because we make use of a lot of singletons (though thankfully not TOO much shared state).
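For context, a sketch of the two Vitest options involved here, again assuming a vitest.config.ts; the flags shown are assumptions about how this could be wired up, not the configuration committed in this PR:

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Current state: keep the file-sequential style by running one test file at a time.
    fileParallelism: false,
    // Because containment is handled by the suites themselves, per-file isolation
    // could later be switched off for a further speed-up, once the singletons are
    // confirmed safe to share:
    // isolate: false,
  },
})
```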

It was all relatively trivial, with the exception of histories. I am intensely anxious about the implementation that has been used for document history, as it obscures control flow and does not look like it will scale, especially if we ever want horizontal deployment.

All in all, the tests are faster and can tighten up development feedback loops, BUT the un-strict nature of our existing test runners means that there are some inconsistencies in execution that I haven't yet uncovered.