This repository has been archived by the owner on Apr 12, 2024. It is now read-only.

Faster test suite feedback and output. #7

Open
aviflombaum opened this issue Sep 2, 2015 · 2 comments

Comments

@aviflombaum
Member

Not sure whether to raise this here or in https://github.com/flatiron-labs/learn-co/, but when running learn to run a lesson's test suite, the first thing it does is authenticate, which is slow, prone to environmental failure, and delays feedback on test results.

When learn is invoked and determines it should run a lesson's test suite, the priority should be providing the student with meaningful output about the test run. Ideally, running learn should feel as quick to the user as running rspec. Currently, running learn adds a significant delay to the feedback cycle, mostly due to authenticating and submitting the test results to Learn.

Suggestions

  1. Re-order the learn-test procedure to first run the test suite, then authenticate and submit.

learn should immediately try to run the test suite for the user and provide output. After a run of the test suite, assuming we can rescue complete build/interpreter/compilation failures, we should only then connect to Learn and submit the results. By then, the learner has seen the test suite's output and has ideally even been dropped back into an active prompt while submission of the test run happens entirely behind the scenes.
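
A rough sketch of what that reordering could look like. The helper names (authenticate!, submit_results) and the plain rspec shell-out are hypothetical stand-ins, not the gem's actual internals:

```ruby
# Hypothetical reordered flow: test output first, submission afterwards.
require "open3"

def run_lesson_tests
  captured = +""

  # Run the suite first and stream output as it happens, so feedback is
  # roughly as fast as running rspec directly.
  status = Open3.popen2e("rspec") do |_stdin, out_and_err, wait_thr|
    out_and_err.each_line do |line|
      print line        # immediate feedback for the student
      captured << line  # keep a copy to submit later
    end
    wait_thr.value
  end

  # Only now talk to Learn. Forking hands the prompt back to the student
  # while authentication and submission happen behind the scenes.
  pid = fork do
    authenticate!                               # hypothetical helper
    submit_results(captured, status.success?)   # hypothetical helper
  end
  Process.detach(pid)
end
```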

  2. Optimize authentication so it isn't so slow.

Through caching, or by assuming the best of the user (they aren't trying to run it for someone else's credit), do we really need to authenticate every time? We can just assume they are correctly logged in, submit the POST with a cached token, then run the test suite. If the POST fails, we can ask them to re-login or troubleshoot.
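
As a sketch of what that could look like (the token path, endpoint, and payload shape below are made up for illustration; the gem's real storage and API may differ):

```ruby
require "net/http"
require "json"
require "uri"

TOKEN_PATH = File.expand_path("~/.learn-token")        # hypothetical cache location
ENDPOINT   = URI("https://learn.co/api/v1/test_runs")  # hypothetical endpoint

def submit_with_cached_token(results)
  token = File.exist?(TOKEN_PATH) ? File.read(TOKEN_PATH).strip : nil
  return ask_to_relogin unless token

  request = Net::HTTP::Post.new(
    ENDPOINT,
    "Content-Type"  => "application/json",
    "Authorization" => "Bearer #{token}"
  )
  request.body = results.to_json

  response = Net::HTTP.start(ENDPOINT.host, ENDPOINT.port, use_ssl: true) do |http|
    http.request(request)
  end

  # Assume the cached token is good; only fall back to a full re-auth when
  # Learn actually rejects the submission.
  ask_to_relogin if response.is_a?(Net::HTTPUnauthorized)
end

def ask_to_relogin
  puts "Couldn't verify your Learn credentials -- please log in again and re-run."
end
```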

@matbalez-zz

Agree with the desire to speed up test running. Good suggestions! @loganhasson should comment on feasibility.

@aviflombaum
Member Author

Ya, given how isolated this gem is and that it has coverage, I'd love to see if an 'off-semester' instructor could fix these issues.

cc @jmburges

Steps would be:

  1. Run a coverage report to see how much of the current code is tested (see the sketch after this list). Assuming adequate coverage and patterns for a contributor to mirror, they could continue by adding tests that prove the deficiency, coding a fix, seeing green, and submitting a PR for review.
  2. If the coverage report is inadequate for safe refactoring and additions, first get coverage up to a confident level, then continue with the steps above.
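
For the coverage report in step 1, something like SimpleCov would do. A minimal sketch, assuming the specs load a spec/spec_helper.rb and that simplecov is in the Gemfile's test group (the required lib file is a placeholder for the gem's real entry point):

```ruby
# spec/spec_helper.rb
require "simplecov"

# SimpleCov has to start before the gem's code is loaded so it can
# instrument every file that gets required afterwards.
SimpleCov.start do
  add_filter "/spec/"   # don't count the specs themselves toward coverage
end

require_relative "../lib/learn"   # placeholder for the gem's entry point
```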

If someone picks this up, it'd be nice to add CI badges for Coverage, Quality, and Dependencies to the README, in addition to CONTRIBUTING guidelines / a wiki.
