Unit testing tools benchmark and DQMH example
This page describes a LabVIEW project with the following goals in mind:
- Present and compare the basic usage of the available unit testing tools for LabVIEW based on a DQMH example.
- Compare and explain the execution time of these tools.
GitHub URL of the LabVIEW project: link
Unit testing tools included in the project:
- Caraya
- VI Tester
- Unit Test Framework
- InstaCoverage
Prerequisites for running the LabVIEW project:
- LabVIEW 2018 installation,
- Installation of all unit testing tools listed above (note: InstaCoverage Pro is required),
- DQMH installation (note: the NI GOOP Development Suite add-on is also required by the DQMH example used in this project).
If some of the above requirements are not met, the project can still be opened for code inspection using LabVIEW 2018.
DQMH is a popular framework for creating distributed applications in LabVIEW. For the purpose of this demonstration, we augmented the Thermal Chamber project that ships as an example with the DQMH framework. We created three unit tests, which proved to be a non-trivial (measurable) workload for execution time measurements. We will now briefly explain one of the three unit tests; the structure of the remaining unit tests is very similar.
The unit test at hand is related to the basic heater state of the thermal chamber example. The heating can be switched on and off, which naturally corresponds to the on and off states of the heater. This unit test aims at testing the "Update Heater State" interface of the Thermal Chamber Controller module. For meaningful coverage measurement, we refactored the original example a bit and moved the logic under test into \Libraries\Thermal Chamber Controller_DQMH\Action Update Heater State.vi. This VI is the VI under test.
Note that the VI has two cases, one for switching the heater on and another for switching it off. If, for example, only one of the two cases is tested, the test coverage will be 50%, showing that part of the code remains untested. The unit testing tools that support code coverage measurement are the Unit Test Framework and InstaCoverage.
The effect of the heater on/off action is that the Thermal Chamber Response Simulator module's internal state is switched accordingly; this is the module that simulates the air in the room. In our unit tests, we compare the actual state of this module against the expected state. For example, after the switch-on action we expect the heating to be on.
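Since LabVIEW code is graphical and cannot be reproduced inline, the following Python sketch is only a rough textual analogy of this unit test's structure. All names in it (ResponseSimulator, ThermalChamberController, update_heater_state) are hypothetical stand-ins for the DQMH modules and VI described above, not actual APIs of the project.

```python
# Hypothetical Python stand-ins for the DQMH modules; the real project
# implements these as graphical LabVIEW code.

class ResponseSimulator:
    """Stand-in for the Thermal Chamber Response Simulator module,
    i.e. the module that simulates the air in the room."""
    def __init__(self):
        self.heater_on = False

class ThermalChamberController:
    """Stand-in for the Thermal Chamber Controller module."""
    def __init__(self, simulator):
        self.simulator = simulator

    def update_heater_state(self, on):
        # Analogy of "Action Update Heater State.vi": one case for
        # switching the heater on, another for switching it off.
        if on:
            self.simulator.heater_on = True    # "on" case
        else:
            self.simulator.heater_on = False   # "off" case

def test_update_heater_state():
    simulator = ResponseSimulator()
    controller = ThermalChamberController(simulator)

    # After the switch-on action we expect the heating to be on.
    controller.update_heater_state(on=True)
    assert simulator.heater_on is True

    # Stopping here would leave the "off" case unexecuted: one of the
    # two cases covered, i.e. 50% coverage. Exercising both cases
    # fully covers the VI-under-test analogy.
    controller.update_heater_state(on=False)
    assert simulator.heater_on is False
```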
We implemented functionally equivalent unit tests with each of the unit testing tools, and we created a separate LabVIEW project for each tool. The projects can be found in the above GitHub repository.
Note that the tools differ significantly in how unit tests are implemented. We now highlight the main properties of the various tools:
- Caraya implements an assertion-style approach where test suites, test cases and test results must be created in LabVIEW.
- The other tools (VI Tester, Unit Test Framework and InstaCoverage) implement a harness-style approach where the VI under test is embedded within a harness logic (both styles are sketched after this list).
- VI Tester requires object-oriented programming for the tests themselves, but the VI under test does not need to be written in object-oriented LabVIEW.
- Unit Test Framework and InstaCoverage support code coverage measurement, which can be very helpful for the interactive development of test suites.
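As a rough illustration of the difference between the two approaches, here is a hypothetical Python analogy, reusing the stand-in classes from the sketch above; the setup/teardown hook names are also assumptions for illustration, not any tool's actual API.

```python
# Assertion-style (Caraya-like): the test is plain code that builds its
# own fixture and reports results directly through assertions.
def test_heater_on_assertion_style():
    simulator = ResponseSimulator()          # stand-ins from the sketch above
    controller = ThermalChamberController(simulator)
    controller.update_heater_state(on=True)
    assert simulator.heater_on

# Harness-style (VI Tester / Unit Test Framework / InstaCoverage-like):
# the logic under test is embedded in a harness that wraps each test
# case with setup and teardown hooks.
class HeaterTestHarness:
    def setup(self):
        # runs before each test case
        self.simulator = ResponseSimulator()
        self.controller = ThermalChamberController(self.simulator)

    def test_heater_on(self):
        self.controller.update_heater_state(on=True)
        assert self.simulator.heater_on

    def teardown(self):
        # runs after each test case
        self.simulator = None
        self.controller = None
```

Separating fixture management into setup and teardown hooks lets the harness reset state between test cases, which is also a natural place for a tool to attach coverage instrumentation.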
In this benchmark we're interested in the time overhead of executing unit tests. Again, we execute three unit tests using four different unit testing tools. Note that each unit test may contain more than one case.
In order to measure how the execution time of unit tests scales, we repeated the execution of the three genuine unit tests. The execution times for the different tools are shown in the plot below (execution time in seconds as a function of the number of unit tests executed).
Note that unit tests are executed sequentially, and every tool offers some sort of API so that programmatic execution is possible. Again, we created a separate LabVIEW project for each unit testing tool; every project contains a time measurement runner VI for executing the unit tests.
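As an analogy for these runner VIs, the sketch below measures the wall-clock time of repeated sequential executions of a test set. Here run_all_tests is a hypothetical stand-in for a tool's programmatic execution API, and test_update_heater_state is the function from the earlier sketch.

```python
import time

def run_all_tests(tests):
    """Hypothetical stand-in for a tool's programmatic execution API."""
    for test in tests:
        test()  # tests run sequentially, as in the benchmark

def measure_execution_time(tests, repetitions):
    """Execute the test set `repetitions` times back-to-back and return
    the elapsed wall-clock time in seconds, mirroring how the benchmark
    repeats the three genuine unit tests to vary the workload size."""
    start = time.perf_counter()
    for _ in range(repetitions):
        run_all_tests(tests)
    return time.perf_counter() - start

# Example: repeat the test set 10 times and report the elapsed seconds.
tests = [test_update_heater_state]  # from the earlier sketch
print(f"{measure_execution_time(tests, 10):.3f} s")
```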
- The unit tests in our scalability benchmark are not genuine, since the same three tests are executed repeatedly. We are working on a benchmark where scalability can be measured with meaningful (genuine) unit tests.
- The unit tests in our benchmark are implemented using each unit testing tool's own test generation functionality; we did not use the unit test generator script of the DQMH framework.
- In terms of execution times of ready-to-run unit tests, Caraya is the fastest tool, VI Tester and InstaCoverage are second fastest (with very similar execution times), and the Unit Test Framework is the slowest.
- The difference in execution times across the tools is significant: Caraya is one order of magnitude faster than VI Tester and InstaCoverage, which in turn are one order of magnitude faster than the Unit Test Framework.
- The relative speed-up of Caraya compared to the other tools comes at the price of limited usability. First, Caraya does not explicitly support setup, teardown and harness logic. Second, Caraya's API does not allow the programmatic (indirect) execution of all unit tests of a project, within a folder, etc. (a sketch of such folder-level execution follows this list).
- InstaCoverage Core (a stripped-down version of InstaCoverage) is not applicable here, as it does not support programmatic execution. Executing the tests with InstaCoverage Core is on average 30-40% faster than with InstaCoverage Pro, since InstaCoverage Core does not measure code coverage.
- InstaCoverage (Pro) is not slower than VI Tester, even though InstaCoverage measures code coverage during test execution.
- The Unit Test Framework's rich feature set comes at the cost of a complex (and sometimes inefficient) implementation. For example, this is the only tool that supports the measurement of "project coverage".
- We speculate that the run times of the Unit Test Framework benefit from caching, since the same three unit tests are executed multiple times.
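To illustrate what folder-level, programmatic execution means, here is a hypothetical Python sketch of a discovery-and-run helper. Nothing in it corresponds to Caraya's actual LabVIEW API; it merely shows the kind of indirect execution that the harness-style tools provide and Caraya lacks.

```python
import importlib.util
import pathlib

def run_tests_in_folder(folder):
    """Hypothetical discovery-and-run helper: import every test_*.py
    file under `folder` and execute each test_* function it defines.
    This is the kind of indirect, folder-level execution that
    Caraya's API does not offer."""
    root = pathlib.Path(folder)
    if not root.is_dir():
        return
    for path in sorted(root.rglob("test_*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name in dir(module):
            test = getattr(module, name)
            if name.startswith("test_") and callable(test):
                test()  # run the test case; assertions report failures

run_tests_in_folder("tests")  # hypothetical folder of unit tests
```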