If you're exploring this repository without having sniffed a specific smell, feel free to explore the examples by theme.
Work through the following examples, giving careful thought to the Single Responsibility Principle. A subject should have one purpose, whether it's calculation or collaboration. When those two are mixed, tests become hard to write and understand.
- Missing Assertions: Subject does too much, making edges hard to test.
- Bury the Lede: Setup too complex.
- Chafing: Test pain leads to redesigning the test instead of the code.
- Tangential: Mixing levels of abstraction / types of responsibilities.
- 7 Layer Testing: Test can't escape interaction with God object.
- Contaminated Test Subject: Very large or stateful subject.
- Invasion of Privacy: Testing code that should be its own unit.
- X-Ray Specs: Tests secretly edit private state. Subject has too many responsibilities.
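The separation above can be sketched in Python. This is a hypothetical example (`invoice_and_notify`, `invoice_total`, and `Mailer` are illustrative names, not code from this repository): when calculation and collaboration are tangled, the interesting math is buried behind a side effect; splitting them gives each test one clear purpose.

```python
class Mailer:
    """Stand-in collaborator for a real email gateway."""
    def send(self, message):
        print(message)

# Smelly: calculation and collaboration tangled in one subject.
def invoice_and_notify(items, mailer):
    total = sum(price for _, price in items)
    mailer.send(f"Your total is {total}")
    return total

# Better: a pure calculation and a thin collaborator wrapper.
def invoice_total(items):
    return sum(price for _, price in items)

def notify(total, mailer):
    mailer.send(f"Your total is {total}")

# The calculation can now be verified without any collaborator.
assert invoice_total([("widget", 3), ("gadget", 4)]) == 7
```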
Work through the following examples, giving careful thought to the role of the test within the test suite, being sure to consider what experiment the test is conducting.
- Tangential: Test subject depends on unrelated things.
- 7 Layer Testing: Test subject depends on "reusable blocks" below subject.
- Complex Assertions: Test relies on side effects.
- Indecisive: Inconsistent setup causes tests to check data's state.
- Premature Assertions: Lack of confidence in the subject's dependencies.
- Mockers without Borders: Some dependencies are polluting the test, some are mocked.
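A small sketch of the "Mockers without Borders" point, using hypothetical names: if some dependencies are mocked and others are real, the experiment is muddled. Replacing every dependency makes the test purely about how the subject composes its collaborators.

```python
from unittest.mock import Mock

def greet(fetch_name, fetch_greeting):
    """Subject under test: composes two injected collaborators."""
    return f"{fetch_greeting()}, {fetch_name()}!"

# Isolated unit test: all dependencies are mocked consistently,
# so a failure can only mean greet's own logic is wrong.
fetch_name = Mock(return_value="Ada")
fetch_greeting = Mock(return_value="Hello")

assert greet(fetch_name, fetch_greeting) == "Hello, Ada!"
fetch_name.assert_called_once()
fetch_greeting.assert_called_once()
```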
Work through the following examples, analyzing the costs and benefits of each test.
- Invisible Assertions: Asserts nothing; only shows the sky didn't fall / the code didn't blow up.
- Quixotic: Overly integrated journey.
- Long: Failing to slice numerous concerns into individual, focused test cases.
- Generative: Testing numerous redundant examples, watering down the degree to which the test expresses your intention.
- Paranoid: Test covers edge cases that aren't actually possible.
- Premature Assertions: Believing more assertions are always better.
- Test by Number: Wrote test + checked box.
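The cost/benefit trade-off of an Invisible Assertion can be sketched in Python (the `parse_age` subject is a hypothetical example): a test with no assertion only proves the code didn't raise, while a focused assertion states exactly what behavior we're paying to verify.

```python
def parse_age(s):
    """Subject under test: converts a string to an integer age."""
    return int(s)

# Invisible assertion: this "test" passes whenever nothing raises,
# which tells us almost nothing about the subject's behavior.
def test_parse_age_smoke():
    parse_age("42")  # no assertion; only proves it doesn't blow up

# A focused assertion documents exactly what we expect.
def test_parse_age_returns_int():
    assert parse_age("42") == 42

test_parse_age_smoke()
test_parse_age_returns_int()
```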
Work through the following examples, keeping in mind that a test is a means of communication. A test and the code it is testing should be as self-documenting as possible, because a test is read far more times than it is written.
- Quixotic: Test doesn't clearly document a single unit.
- Missing Assertions: Some code paths aren't tested. Corners of code are neglected.
- Indecisive: Environment specific tests - what does a failure mean?
- Long: When this test fails, what's actually broken?
- Paranoid: What input actually triggers logical branching in code?
- Self Important Test Data: If the test data had fewer properties, could the subject code still be verified?
- Fantasy: Test dependencies aren't realistic.
- Surreal: Taking Contaminated Test Subject and Mockers without Borders to the extreme.
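The Self Important Test Data question can be made concrete with a hypothetical sketch (`full_name` and the user fixtures below are illustrative, not code from this repository): extra properties obscure which fields the subject actually needs, while minimal data documents its real requirements.

```python
def full_name(user):
    """Subject under test: only reads 'first' and 'last'."""
    return f"{user['first']} {user['last']}"

# Self-important test data: the extra properties make a reader wonder
# whether email, admin, or last_login influence the result. They don't.
noisy_user = {
    "first": "Grace", "last": "Hopper",
    "email": "grace@example.com", "admin": True, "last_login": "1952-05-02",
}
assert full_name(noisy_user) == "Grace Hopper"

# Minimal data communicates exactly what the subject depends on.
minimal_user = {"first": "Grace", "last": "Hopper"}
assert full_name(minimal_user) == "Grace Hopper"
```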
Work through the following examples, considering that the purpose of a test is to provide confidence that code works.
- Fire and Forget: Performing assertions before setup or actions have completed.
- Plate Spinning: Test depends on multiple things happening successfully before assertions pass.
- Litter Bugs: Tests don't cleanup, possibly order dependent.
- Time Bombs: Tests that fail based on time.
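A Time Bomb can be defused by injecting the clock. In this hypothetical sketch (`is_weekend` and its `today` parameter are illustrative names), a subject that reads the real clock would pass on some days and fail on others; passing the date in makes the test deterministic.

```python
from datetime import date

def is_weekend(today=None):
    """Subject under test: accepts an injected date for deterministic tests.

    Calling date.today() directly inside the assertion path would make
    any test of this function time-dependent -- a classic Time Bomb.
    """
    today = today or date.today()
    return today.weekday() >= 5  # Saturday=5, Sunday=6

# Deterministic tests: the clock is controlled, not real.
assert is_weekend(date(2021, 1, 2)) is True   # a Saturday
assert is_weekend(date(2021, 1, 4)) is False  # a Monday
```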