The number of individual scripts, notebooks, and tests is getting unwieldy ("1296 changed files" and ~135k additions/deletions). I may even need to consider storing the files elsewhere, or leaving them untracked and generating them on the fly during CI.
Perhaps still generate all unit tests (i.e., all combinations), but only run the subset selected via the Hypothesis testing. Alternatively, a single unit test could generate the test scripts on the fly and then run them dynamically: a sort of unit-test inception.
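The "inception" idea above could look something like the following sketch: one test that renders every option combination to a temporary script and executes each one, so nothing generated ever needs to be tracked in the repo. The names here (`OPTIONS`, `render_script`) are hypothetical stand-ins for the real template machinery.

```python
# Hypothetical sketch: one test that generates scripts on the fly and
# runs them, instead of committing thousands of generated files.
import itertools
import runpy
import tempfile
from pathlib import Path

# Stand-in for the real template option grid (hypothetical).
OPTIONS = {
    "objective": ["single", "multi"],
    "use_constraint": [False, True],
}

def render_script(combo: dict) -> str:
    # Stand-in for real template rendering; emits a script that
    # records the combination it was built from.
    return f"RESULT = {combo!r}\n"

def test_all_generated_scripts():
    tmp = Path(tempfile.mkdtemp())
    keys = list(OPTIONS)
    for values in itertools.product(*OPTIONS.values()):
        combo = dict(zip(keys, values))
        script = tmp / ("_".join(map(str, values)) + ".py")
        script.write_text(render_script(combo))
        # Execute the generated script in-process and sanity-check it.
        ns = runpy.run_path(str(script))
        assert ns["RESULT"] == combo
```

A pytest collector would pick this up as a single test, so the repo diff stays small at the cost of less granular failure reporting (one failing combination fails the whole test unless it is split via `pytest.mark.parametrize`).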
For example, with 19 runners per Python version (2 cores per runner) and two Python versions, unit tests take ~16 minutes to run with the linear constraint PR, nearly double the time of the previous PR.
Probably via Hypothesis: https://hypothesis.readthedocs.io/en/latest/
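With Hypothesis, the full combinatorial grid would be replaced by sampled strategies, so the test count no longer explodes with each new option. A minimal sketch, with option names invented for illustration:

```python
# Minimal property-based sketch using Hypothesis: sample option
# combinations instead of enumerating every one. Option names and the
# body of the check are hypothetical stand-ins.
from hypothesis import given, settings, strategies as st

@settings(max_examples=25, deadline=None)
@given(
    objective=st.sampled_from(["single", "multi"]),
    use_constraint=st.booleans(),
)
def test_template_combo(objective, use_constraint):
    # The real test would render the template for this combination
    # and run the resulting script; here we only check the shape.
    combo = {"objective": objective, "use_constraint": use_constraint}
    assert set(combo) == {"objective", "use_constraint"}

test_template_combo()  # Hypothesis runs the sampled examples
```

`max_examples` caps how many combinations each CI run exercises, which directly addresses the ~16-minute runtime concern above.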
Recommended by @mseifrid in the context of honegumi.