A vitest example #437


Open · wants to merge 23 commits into develop

Conversation

CocoisBuggy (Contributor)

All but one test is passing. I just thought I would share the outcome so my issue doesn't seem so arbitrary:

Headlines

  • Jest, single-band tests: 77s–90s
  • Vitest, threaded pool: 7s–11s (37.68 seconds of execution time if you sum all test runners' times, but since they run in parallel this is not what users experience)
  • Vitest, forked pool: 12s–14s. A straight downgrade from threads, but still quick.
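The pool comparison above comes down to a one-line config switch. A minimal sketch of what that looks like in a `vitest.config.ts` (the exact options this PR uses are not shown in the thread, so treat this as an assumption):

```
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // "threads" measured fastest here (7s-11s); "forks" came in at 12s-14s
    pool: "threads",
    // with per-test fixtures providing fresh state, full module isolation
    // can be relaxed, which shaves startup time per worker
    isolate: false,
  },
});
```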

It's roughly a 10x improvement, which is frankly not what I had hoped to achieve, but the tests are roughly as fast as I'd like them to be when running in a --watch context.

Objectives

Tests should run faster, and --watch should be more usable, to tighten up development feedback loops. Ideally, I'd also like the test runners to produce much less noise, since we use a LOT of code to scaffold test state that is not actually part of the state under examination in a given test. This slows comprehension and makes the test files very bulky and cumbersome.

Remaining issues

Obviously, one of the tests is still not passing, so that's not good. It's a history test, and as soon as you have forked control flows, parallel testing gets a little tricky. I stopped debugging the test because I could not properly work out what the intended behavior was supposed to be. I need to think longer about the changelog system we have, but I thought I'd leave this here to explain #420 a little. We do not need to pursue this PR; it is not a priority at the moment.

I believe the fail() pattern came to Jest from Jasmine in the first instance, though I am not sure. In any event, it leads to some strange testing behavior in which you begin to reinvent the framework from the inside out.

For this reason, the Vitest team seems to have decided not to include it, since it technically breaks containment even in the context of Jest testing.

I have done my best to take the time needed to ascertain which of these fail() calls should be asserts, and which should use the standard expectations API.
I have kept the file-sequential execution style (which I will attempt to remove next); tests currently run through in ~99 seconds. This is much longer than it needs to be, and it is where the Vitest optimizations really have an opportunity to shine. Test speed is up roughly 10x, but I expect it can come down further. The way test containment is implemented here means there is not much need for isolation, which is good for simulating our use case because we make use of a lot of singletons (though thankfully not TOO much shared state).

It was all relatively trivial, with the exception of histories. I am intensely anxious about the implementation used for document history: it obscures control flow and does not look like it will scale, especially if we ever want horizontal deployment.

All in all, the tests are faster and can tighten up development feedback loops, BUT the un-strict nature of our existing test runners means there are some inconsistencies in execution that I haven't yet uncovered. I still need a little guidance on what the area history behavior is supposed to be, since it is not clear to me what approach is being used. Only two tests are failing, but they are indicative of my misunderstanding.

All tests resolve in about 8 seconds, which is quite nice.
The vite config is also expressed in TS, so it should not be exempt from linting.
The test output is still quite noisy.
Thanks for adding the pagination tests!
@CocoisBuggy (Contributor, Author)

I've merged all the newly needed changes into this branch, in case anyone wants to take a peek at a more up-to-date proposal.

Vitest runs in ~8s
Existing test strategy: ~90s
Speedup: ~11.25x 🚀

It should be said that this is not necessarily all down to Vitest being that much faster than Jest (though in my experience it is a bit faster). The real benefit comes from isolating test state properly using Vitest's fixture API, which lets us run the tests in parallel across its worker pools under the hood.
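A minimal sketch of the idea behind Vitest's `test.extend` fixtures (this illustrates the isolation principle only; it is not the actual Vitest API or this repo's fixtures, and all names here are hypothetical): each test builds its own fresh state, so parallel workers never race on shared singletons.

```typescript
// Each test receives freshly built context instead of module-level shared state.
type Build<T> = () => Promise<T>;

function withFixture<T>(
  build: Build<T>,
  run: (ctx: T) => Promise<void>
): () => Promise<void> {
  return async () => {
    const ctx = await build(); // fresh per test, so tests can run in parallel
    await run(ctx);
  };
}

// hypothetical fixture: an in-memory stand-in for scaffolded test state
const freshState: Build<{ areas: string[] }> = async () => ({ areas: [] });

const testA = withFixture(freshState, async (s) => {
  s.areas.push("crag");
  if (s.areas.length !== 1) throw new Error("unexpected state");
});

const testB = withFixture(freshState, async (s) => {
  // would fail if testA's mutation leaked into this test's state
  if (s.areas.length !== 0) throw new Error("state leaked between tests");
});

Promise.all([testA(), testB()]).then(() => console.log("fixtures stayed isolated"));
```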

I will also note that two of the history tests fail because I do not understand the history behavior well enough to redesign them.

Shaving off some additional seconds and some extra logging noise, this is now at the point where I want it to be. The tests can be run pretty much continuously through the development cycle and can be used to track down all manner of gremlins.

TODO: The history behavior is still a mystery to me. I need some abstract understanding of it so I can test it a bit better, but I have been avoiding looking closely at it.
@CocoisBuggy (Contributor, Author)

I need a little help understanding how our history is supposed to work so I can write strict tests for it, but other than that this test suite is a straight upgrade 🛩️

Hopefully this pleases someone as much as it pleases me 😅

I really did not enjoy this. If anyone wants me to rewrite the history back end in a way that doesn't cause hair to fall out of your head, I would be more than happy to bring a more enterprise approach to flattening this design into something more pleasing.

~7s to run all the tests, which is not bad.
I'm sure there are other such issues.
I have chosen a consistent filename format for our test files so that they can be grepped with fewer flubs.
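A consistent naming scheme can also be enforced from config so stray files never run by accident. A hedged sketch (the actual suffix chosen in this PR is not visible in the thread, so `*.test.ts` and the `src/` root are assumptions):

```
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // one suffix everywhere means `grep -r --include='*.test.ts'`
    // finds every test with no flubs
    include: ["src/**/*.test.ts"],
  },
});
```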
@CocoisBuggy (Contributor, Author)

Another checkpoint: All tests are ported.

This means this PR now represents a drop-in replacement for the current tests, except that it runs much faster and the test infrastructure is designed with the full application in mind.

A good day to all 😊

@CocoisBuggy CocoisBuggy marked this pull request as ready for review April 24, 2025 12:20
@CocoisBuggy (Contributor, Author)

I really want to lobby for this since it represents a huge speed improvement in the feedback loop for developing features and fixing bugs. Does anyone have any concerns?

@CocoisBuggy (Contributor, Author)

#318 would be addressed by this. All tests are comprehensively instrumented with fixtures that make writing consistent tests easier.
