If you're exploring this repository without having sniffed a specific smell, feel free to browse the examples by theme.

## Subjects lack focus and intention in their responsibility

Work through the following examples, giving careful thought to the Single Responsibility Principle. Subjects should have one purpose, whether it's calculation or collaboration. When the two are mixed, tests become hard to write and understand.
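As a rough illustration (not one of this repository's examples), here is a TypeScript sketch assuming a Jest-style runner; the names `CheckoutMixed` and `totalWithTax` are hypothetical. Mixing the arithmetic with the gateway call forces the test to verify a calculation through a mock, while the split version lets plain values do the work.

```ts
import { describe, it, expect, jest } from '@jest/globals';

// Mixed responsibility: computes a total AND talks to a payment gateway.
class CheckoutMixed {
  constructor(private gateway: { charge(cents: number): void }) {}
  checkout(prices: number[], taxRate: number): void {
    const total = Math.round(prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate));
    this.gateway.charge(total); // collaboration buried next to the math
  }
}

// Focused alternative: a pure calculation, with collaboration kept elsewhere.
const totalWithTax = (prices: number[], taxRate: number): number =>
  Math.round(prices.reduce((sum, p) => sum + p, 0) * (1 + taxRate));

describe('totalWithTax (pure calculation)', () => {
  it('is testable with plain values, no test doubles needed', () => {
    expect(totalWithTax([100, 250], 0.1)).toBe(385);
  });
});

describe('CheckoutMixed (mixed responsibilities)', () => {
  it('forces the test to verify arithmetic through a mock interaction', () => {
    const gateway = { charge: jest.fn() };
    new CheckoutMixed(gateway).checkout([100, 250], 0.1);
    expect(gateway.charge).toHaveBeenCalledWith(385); // asserting math via a spy
  });
});
```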

## Tests aren't isolated and are testing more than just a single unit

Work through the following examples, giving careful thought to the role of the test within the test suite, being sure to consider what experiment the test is conducting.
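Here is a hedged sketch (hypothetical subjects, Jest-style runner assumed) of what isolation changes about the experiment: the first test quietly depends on `TaxTable`'s real rates, so a failure could implicate either unit, while the second stubs the collaborator so only `InvoiceTotaler` is under test.

```ts
import { describe, it, expect } from '@jest/globals';

// Collaborator: imagine this is much more involved in real life.
class TaxTable {
  rateFor(state: string): number {
    return state === 'OH' ? 0.08 : 0.05;
  }
}

// Subject under test: depends on something that can provide a tax rate.
class InvoiceTotaler {
  constructor(private taxes: { rateFor(state: string): number }) {}
  total(subtotal: number, state: string): number {
    return subtotal + Math.round(subtotal * this.taxes.rateFor(state));
  }
}

describe('InvoiceTotaler', () => {
  it('NOT isolated: also exercises TaxTable, so the experiment is ambiguous', () => {
    const subject = new InvoiceTotaler(new TaxTable());
    expect(subject.total(1000, 'OH')).toBe(1080); // fails if either unit changes
  });

  it('isolated: stubs the collaborator, so only the subject is under test', () => {
    const subject = new InvoiceTotaler({ rateFor: () => 0.1 });
    expect(subject.total(1000, 'OH')).toBe(1100);
  });
});
```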

## Testing too much

Work through the following examples, analyzing the costs and benefits of each test; a short sketch after the list illustrates two of these smells.

* Invisible Assertions: Sky might fall / Doesn't blow up.
* Quixotic: Overly integrated journey.
* Long: Failing to slice numerous concerns into individual, focused test cases.
* Generative: Testing numerous redundant examples, watering down the degree to which the test expresses your intention.
* Paranoid: Test covers edge cases that aren't actually possible.
* Premature Assertions: Believing more assertions are always better.
* Test by Number: Wrote test + checked box.
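As a sketch of two of the smells above (Invisible Assertions and Paranoid), using a hypothetical `greet` function and assuming a Jest-style runner:

```ts
import { describe, it, expect } from '@jest/globals';

const greet = (name: string): string => `Hello, ${name}!`;

describe('greet', () => {
  // Invisible Assertion: only proves the sky didn't fall.
  it('does not blow up', () => {
    greet('world'); // no expectation at all
  });

  // Paranoid: the type system already rules out a numeric name here.
  it('handles a number being passed', () => {
    expect(greet(42 as unknown as string)).toBe('Hello, 42!');
  });

  // A focused test with a visible, meaningful assertion.
  it('greets the given name', () => {
    expect(greet('world')).toBe('Hello, world!');
  });
});
```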

## Test's value is unclear

Work through the following examples, keeping in mind that a test is a means of communication. A test and the code it exercises should be as self-documenting as possible, because a test is read far more often than it is written. A sketch of one of these smells follows the list.

* Quixotic: Test doesn't clearly document a single unit.
* Missing Assertions: Some code paths aren't tested. Corners of the code are neglected.
* Indecisive: Environment-specific tests; what does a failure mean?
* Long: When this test fails, what's actually broken?
* Paranoid: What input actually triggers the logical branching in the code?
* Self Important Test Data: If the test data had fewer properties, could the subject code still be verified?
* Fantasy: Test dependencies aren't realistic.
* Surreal: Taking Contaminated Test Subject and Mockers without Borders to the extreme.
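Here is a hedged sketch of Self Important Test Data, using a hypothetical `emailDomain` function and assuming a Jest-style runner. The subject only reads `email`, but the first test buries that fact under irrelevant detail:

```ts
import { describe, it, expect } from '@jest/globals';

const emailDomain = (user: { email: string }) => user.email.split('@')[1];

describe('emailDomain', () => {
  it('unclear: which of these properties actually matters?', () => {
    const user = {
      id: 8675309,
      name: 'Jane Example',
      address: { street: '123 Main St', city: 'Columbus', state: 'OH' },
      preferences: { newsletter: true, theme: 'dark' },
      email: 'jane@example.com',
    };
    expect(emailDomain(user)).toBe('example.com');
  });

  it('self-documenting: only the relevant property appears', () => {
    expect(emailDomain({ email: 'jane@example.com' })).toBe('example.com');
  });
});
```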

## Test erodes confidence

Work through the following examples, considering that the purpose of a test is to provide confidence that the code works; a sketch of one of these smells follows the list.

* Fire and Forget: Performing assertions before setup or actions have completed.
* Plate Spinning: Test depends on multiple things happening successfully before its assertions pass.
* Litter Bugs: Tests don't clean up after themselves and may be order-dependent.
* Time Bombs: Tests that fail based on the passage of time.
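As a closing sketch (hypothetical `isExpired` function, Jest-style runner assumed), here is a Time Bomb alongside a deterministic alternative that makes the clock an explicit input:

```ts
import { describe, it, expect } from '@jest/globals';

const isExpired = (expiresAt: Date, now: Date = new Date()): boolean =>
  expiresAt.getTime() <= now.getTime();

describe('isExpired', () => {
  it('Time Bomb: passes today, starts failing after 2030-01-01', () => {
    expect(isExpired(new Date('2030-01-01'))).toBe(false);
  });

  it('deterministic: the clock is an explicit input', () => {
    const now = new Date('2024-06-01T00:00:00Z');
    expect(isExpired(new Date('2030-01-01'), now)).toBe(false);
    expect(isExpired(new Date('2020-01-01'), now)).toBe(true);
  });
});
```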