
Approval Tests implementation in Flutter 🚀



📖 About

Approval Tests are an alternative to assertions. You'll find them useful for testing objects with complex values (such as long strings), lots of properties, or collections of objects.

Approval tests simplify verification of such values by taking a snapshot of the results and confirming that they have not changed.

In normal unit testing, you say expect(person.getAge(), 5). Approvals allow you to do this when the thing that you want to assert is no longer a primitive but a complex object. For example, you can say, Approvals.verify(person).
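For instance, a minimal sketch (a hypothetical Person class is assumed; the approved file name follows the file.test_name pattern shown later in this README):

    import 'package:approval_tests/approval_tests.dart';
    import 'package:test/test.dart';

    class Person {
      final String name;
      final int age;

      const Person(this.name, this.age);

      @override
      String toString() => 'Person(name: $name, age: $age)';
    }

    void main() {
      test('verify person', () {
        const person = Person('Alice', 5);

        // Snapshot the object's string representation; the result is compared
        // against person_test.verify_person.approved.txt.
        Approvals.verify(person.toString());
      });
    }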

I am writing an implementation of Approval Tests in Dart. If anyone wants to help, please text me. 🙏

Thanks to Richard Coutts for special contributions to the approval_tests_flutter package.

Packages

ApprovalTests is designed for two levels: Dart and Flutter.

  • approval_tests: Dart package for approval testing of unit tests (main)
  • approval_tests_flutter: Flutter package for approval testing of widget and integration tests

📋 How it works

  • The first run of a test automatically creates an approved file if one does not exist.
  • If the test results match the approved file exactly, the test passes.
  • If there's a difference, a reporter tool will highlight the mismatch and the test fails.
  • If the test passes, the received file is deleted automatically; you can change this via the deleteReceivedFile value in Options, as shown in the sketch below. If the test fails, the received file remains for analysis.
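A minimal sketch of tuning this behavior in a plain Dart test:

    import 'package:approval_tests/approval_tests.dart';
    import 'package:test/test.dart';

    void main() {
      test('keep received file', () {
        Approvals.verify(
          'Hello, approval tests!',
          options: const Options(
            // Keep the received file even when the test passes,
            // for example to inspect exactly what was generated.
            deleteReceivedFile: false,
          ),
        );
      });
    }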

Instead of writing:

    testWidgets('home page', (WidgetTester tester) async {
        await tester.pumpWidget(const MyApp());
        await tester.pumpAndSettle();

        expect(find.text('You have pushed the button this many times:'), findsOneWidget);
        expect(find.text('0'), findsOneWidget);
        expect(find.byWidgetPredicate(
            (Widget widget) => widget is Text && widget.data == 'hello' && 
            widget.key == ValueKey('myKey'),
        ), findsOneWidget);
        expect(find.text('Approved Example'), findsOneWidget);
    });

Write this:

    testWidgets('smoke test', (WidgetTester tester) async {
        await tester.pumpWidget(const MyApp());
        await tester.pumpAndSettle();

        await tester.approvalTest();
    });

Suppose you want to confirm that a page loads with all the widgets you expect. To do this, perform an approval test by calling tester.approvalTest and give the test a suitable description:

    testWidgets('home page', (WidgetTester tester) async {
        await tester.pumpWidget(const MyApp());
        await tester.pumpAndSettle();

        await tester.approvalTest(description: 'all widgets load correctly');
    });

To include your project's custom widget types in your test, and to perform post-test checks, add calls to Approved.setUpAll() to your tests' setUpAll calls, like so:

    main() {
        setUpAll(() {
            Approved.setUpAll();
        });
    }
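Putting these pieces together, a complete widget test file might look like this (a sketch; the import path to MyApp is hypothetical):

    import 'package:approval_tests_flutter/approval_tests_flutter.dart';
    import 'package:flutter_test/flutter_test.dart';
    import 'package:my_app/main.dart'; // hypothetical path to your app's MyApp

    void main() {
      setUpAll(() {
        // Registers custom widget types and enables post-test checks.
        Approved.setUpAll();
      });

      testWidgets('home page', (WidgetTester tester) async {
        await tester.pumpWidget(const MyApp());
        await tester.pumpAndSettle();

        await tester.approvalTest(description: 'all widgets load correctly');
      });
    }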

📦 Installation

Add the following to your pubspec.yaml file:

dependencies:
  approval_tests_flutter: ^1.1.0
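Then run flutter pub get and import the package in your test files (the import path follows the standard pub convention):

    import 'package:approval_tests_flutter/approval_tests_flutter.dart';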

👀 Getting Started

The best way to get started is to download and open the example project.

📚 How to use

In order to use Approval Tests, the user needs to:

  1. Set up a test: This involves importing the Approval Tests library into your own code.

  2. Optionally, set up a reporter: Reporters are tools that highlight differences between approved and received files when a test fails. Although not necessary, they make it significantly easier to see what changes have caused a test to fail. The default reporter is the CommandLineReporter. You can also use the DiffReporter to compare the files in your IDE, and the GitReporter to see the differences in the Git GUI.

  3. Manage the approved file: When the test is run for the first time, an approved file is created automatically. This file represents the expected outcome. Once the test produces the desired result, update the approved file to reflect it. How to do this is described below.

This setup is useful because it shortens feedback loops, saving developers time by only highlighting what has been altered rather than requiring them to parse through their entire output to see what effect their changes had.
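As a sketch of step 2, a non-default reporter is selected through Options (GitReporter is used here; the reporters are described in more detail below):

    import 'package:approval_tests/approval_tests.dart';
    import 'package:test/test.dart';

    void main() {
      test('with git reporter', () {
        Approvals.verify(
          'some output',
          options: Options(
            // On failure, show the diff in the Git GUI instead of the terminal.
            reporter: GitReporter(),
          ),
        );
      });
    }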

Approving Results

Approving results just means saving the .approved.txt file with your desired results.

We'll provide more explanation in due course, but, briefly, here are the most common approaches.

• Via Diff Tool

Most diff tools can move text from left to right and save the result. How to use diff tools is covered just below; there is a Comparator class for that.

• Via CLI command

You can run the command in a terminal to review your files:

dart run approval_tests:review

After running the command, the files will be analyzed and you will be asked to choose one of the options:

  • y - Approve the received file.
  • n - Reject the received file.
  • view - View the differences between the received and approved files. After selecting view you will be asked which IDE you want to use to view the differences.

The command dart run approval_tests:review has additional options, including listing files, selecting files to review from this list by index, and more. For its current capabilities, run

  dart run approval_tests:review --help

Typing 'dart run approval_tests:review' is tedious! To reduce typing, alias the command in your .zshrc or .bashrc file:

  alias review='dart run approval_tests:review'

or in your PowerShell profile:

  function review {
      dart run approval_tests:review
  }

• Via approveResult property

If you want the result to be automatically saved after running the test, you need to use the approveResult property in Options:

void main() {
  test('test JSON object', () {
    final complexObject = {
      'name': 'JsonTest',
      'features': ['Testing', 'JSON'],
      'version': 0.1,
    };

    Approvals.verifyAsJson(
      complexObject,
      options: const Options(
        approveResult: true,
      ),
    );
  });
}

This will result in the following file, example_test.test_JSON_object.approved.txt:

{
  "name": "JsonTest",
  "features": [
    "Testing",
    "JSON"
  ],
  "version": 0.1
}

• Via file rename

You can just rename the .received file to .approved.
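For example, on macOS or Linux (using the file names from the approveResult example above):

  mv example_test.test_JSON_object.received.txt example_test.test_JSON_object.approved.txt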

Reporters

Reporters are the part of Approval Tests that launch diff tools when things do not match. They are the part of the system that makes it easy to see what has changed.

There are several reporters available in the package:

  • CommandLineReporter - This is the default reporter, which will output the diff in the terminal.
  • GitReporter - This reporter will open the diff in the Git GUI.
  • DiffReporter - This reporter will open the Diff Tool in your IDE.
    • The DiffReporter uses default paths to the IDEs. If something doesn't work, the console will show the expected path to the IDE, which you can then pass via customDiffInfo. You can also contact me for help.

(screenshot: CommandLineComparator output)

To use DiffReporter you just need to add it to options:

 options: const Options(
   reporter: DiffReporter(),
 ),
(screenshots: Visual Studio Code and Android Studio diff views)

πŸ“ Examples

I have provided a couple of small examples here to show you how to use the package. There are more examples in the example folder for you to explore, and I will add more in the future. Inside, the gilded_rose folder contains an example of using ApprovalTests to test the legacy code of the Gilded Rose kata. You can study it to understand how to use the package to test complex code.

And the verify_methods folder has small examples of using different ApprovalTests methods for different cases.

JSON example

With verifyAsJson, if you pass data models such as JsonItem, with other models like AnotherItem and SubItem nested inside, you need to add a toJson method to each model for the serialization to succeed.
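For example, a minimal sketch of one such model (the field names match the test below):

    class AnotherItem {
      final int id;
      final String name;

      const AnotherItem({required this.id, required this.name});

      // Required by verifyAsJson so the model can be serialized.
      Map<String, dynamic> toJson() => {'id': id, 'name': name};
    }

With toJson in place on every model, the test looks like this: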

void main() {
  const jsonItem = JsonItem(
    id: 1,
    name: "JsonItem",
    anotherItem: AnotherItem(id: 1, name: "AnotherItem"),
    subItem: SubItem(
      id: 1,
      name: "SubItem",
      anotherItems: [
        AnotherItem(id: 1, name: "AnotherItem 1"),
        AnotherItem(id: 2, name: "AnotherItem 2"),
      ],
    ),
  );

  test('verify model', () {
    Approvals.verifyAsJson(
      jsonItem,
      options: const Options(
        deleteReceivedFile:
            true, // Automatically delete the received file after the test.
        approveResult:
            true, // Approve the result automatically. You can remove this property after the approved file is created.
      ),
    );
  });
}

This will result in the following file, verify_as_json_test.verify_model.approved.txt:

{
  "jsonItem": {
    "id": 1,
    "name": "JsonItem",
    "subItem": {
      "id": 1,
      "name": "SubItem",
      "anotherItems": [
        {
          "id": 1,
          "name": "AnotherItem 1"
        },
        {
          "id": 2,
          "name": "AnotherItem 2"
        }
      ]
    },
    "anotherItem": {
      "id": 1,
      "name": "AnotherItem"
    }
  }
}

(screenshot: a passed test)

❓ Which File Artifacts to Exclude from Source Control

Approved files must be added to your source control system. Received files, however, can change with any run and should be ignored. For Git, add this to your .gitignore:

*.received.*

βœ‰οΈ For More Information

Questions?

Ask me on Telegram: @yelmuratoff.
Email: [email protected]

Video Tutorials

You can also watch a series of short videos about using ApprovalTests in .NET on YouTube.

Podcasts

Prefer learning by listening? Then you might enjoy the following podcasts:


🤝 Contributing

Show some 💙 and star the repo to support the project! 🙌
The project is under active development, and we invite you to contribute through pull requests and issue submissions. 👍
We appreciate your support. 🫰



Thanks to all contributors of this package.