Persistent Memory Development Kit
This is src/test/README.
This directory contains the unit tests for the Persistent Memory Development Kit.
Unit tests require a config file, testconfig.sh, to exist in this directory.
That file describes the local machine configuration (where to find persistent
memory, for example) and must be created by hand in each repo as it makes no
sense to check in that configuration description to the main repo.
testconfig.sh.example provides more detail. The script RUNTESTS, when run with
no arguments, will run all unit tests through all build-types, running the
"check" level test.
DETAILS ON HOW TO RUN UNIT TESTS
See the top-level README for instructions on building, installing, running
tests for the entire tree. Here are some additional details on running tests.
Once the libraries are built, tests may be built from this directory using:
$ make test
A testconfig.sh must exist to run these tests!
$ cp testconfig.sh.example testconfig.sh
$ ...edit testconfig.sh and modify as appropriate...
Tests may be run using the RUNTESTS script:
$ RUNTESTS (runs them all)
$ RUNTESTS testname (runs just the named test)
Each test script (named something like "TEST0") is potentially run multiple
times with a different set of environment variables, to run the test with
different target file systems or different versions of the libraries. To see
how RUNTESTS will run a test, use the -n option. For example:
$ RUNTESTS -n blk_nblock -s TEST0
(in ./blk_nblock) TEST=check FS=none BUILD=debug ./TEST0
(in ./blk_nblock) TEST=check FS=none BUILD=nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=none BUILD=static-debug ./TEST0
(in ./blk_nblock) TEST=check FS=none BUILD=static-nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=pmem BUILD=debug ./TEST0
(in ./blk_nblock) TEST=check FS=pmem BUILD=nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=pmem BUILD=static-debug ./TEST0
(in ./blk_nblock) TEST=check FS=pmem BUILD=static-nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=non-pmem BUILD=debug ./TEST0
(in ./blk_nblock) TEST=check FS=non-pmem BUILD=nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=non-pmem BUILD=static-debug ./TEST0
(in ./blk_nblock) TEST=check FS=non-pmem BUILD=static-nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=any BUILD=debug ./TEST0
(in ./blk_nblock) TEST=check FS=any BUILD=nondebug ./TEST0
(in ./blk_nblock) TEST=check FS=any BUILD=static-debug ./TEST0
(in ./blk_nblock) TEST=check FS=any BUILD=static-nondebug ./TEST0
Notice how the TEST0 script is run repeatedly with different settings for
the three environment variables TEST, FS, and BUILD, providing the test type,
file system type, and build type to test.
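Inside a TEST script these are ordinary shell environment variables. As a
sketch (real scripts get their defaults from unittest/unittest.sh; the FS and
TEST fallbacks below are illustrative assumptions, only BUILD=debug is the
documented default):

```shell
# Read the framework variables with fallbacks (sketch only; the real
# defaulting logic lives in unittest/unittest.sh).
TEST=${TEST:-check}
FS=${FS:-any}
BUILD=${BUILD:-debug}
echo "running $TEST on $FS with $BUILD build"
```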
RUNTESTS takes options to limit what it runs. The usage is:
RUNTESTS [ -hnv ] [ -b build-type ]
[ -o timeout ] [ -s test-file ] [ -k skip-dir ]
[ -m memcheck ] [-p pmemcheck ] [ -e helgrind ] [ -d drd ] [ -c ]
[tests...]
Build types are: debug, nondebug, static-debug, static-nondebug, all (default)
Timeout is: a floating point number with an optional suffix: 's' for seconds
(the default), 'm' for minutes, 'h' for hours or 'd' for days.
Default value is 3 minutes.
Test file is: a name of the particular test script (test case).
all (default), TEST0, TEST1, ...
Memcheck, helgrind, drd and pmemcheck modes are:
auto (default, enable/disable based on test requirements),
force-enable (run the test under the given Valgrind tool even when the
test does not require it, but obey the test's explicit disable of that tool)
For example:
$ RUNTESTS -b debug blk_nblock -s TEST0
blk_nblock/TEST0: SETUP (check/debug)
blk_nblock/TEST0: START: blk_nblock
blk_nblock/TEST0: PASS
Since the "-b debug" option was given, the RUNTESTS run above only executes
the test for the debug version of the library and skips the other variants.
Running the TEST* scripts directly is also common, especially when debugging
an issue. Just running the script, like this:
$ cd blk_nblock
$ ./TEST0
will use default values for the environment, namely:
BUILD=debug
These defaults can be overridden on the command line:
$ cd blk_nblock
$ BUILD=nondebug ./TEST0
The above example runs TEST0 with the nondebug library, just as using
RUNTESTS with "-b nondebug" would from the parent directory.
In addition to overriding the BUILD environment variable, the
unit test framework also looks for several other variables:
For tests that run a local program, insert the word "echo" in front
of the program execution so the full command being run is displayed.
This is useful to modify the command for debugging.
$ ECHO=echo ./TEST0
Insert the word "strace" in front of the local command execution:
$ TRACE=strace ./TEST0
Run test under Valgrind/memcheck:
$ MEMCHECK=force-enable ./TEST0
Display test run times:
$ TM=1 ./TEST0
DETAILS ON HOW TO WRITE UNIT TESTS
A minimal unit test consists of a sub-directory here with an executable called
TEST0 in it that exits normally when the test passes. Most tests, however,
source the file unittest/unittest.sh to use the utility functions for setting
up and checking tests. Additionally, most unit tests build a local test
program and call it from the TEST* scripts.
In addition to TEST0, there can be as many TEST<num> scripts as desired, and
RUNTESTS will execute them in numeric order for each of the test runs it
executes.
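The shape of a minimal TEST0 can be sketched as follows. This stand-alone
sketch deliberately omits sourcing unittest/unittest.sh and its setup/check
helpers, which real tests use; for the pass/fail decision only the exit
status matters.

```shell
#!/usr/bin/env bash
# Minimal TEST0 sketch: any executable script that exits 0 when the
# test passes.  A real test would source ../unittest/unittest.sh here
# and use its helper functions instead of a bare echo/exit.
set -e
echo "TEST0: PASS"
exit 0
```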
There are two ways of setting test requirements:
- using a config.sh file located in the test sub-directory. This is the new
and recommended method, but it is still under development and does not cover
all possible requirements. It is applied prior to the second method. If
config.sh includes test requirements which are not met, the test will not be
run at all. For details see config.sh.example.
- using require_* utility functions. This is the old but still supported
method. It can be used simultaneously with config.sh. The available
require_* utility functions are described below.
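As a sketch, a config.sh that restricts a test to debug builds on real
persistent memory might look like this. The variable names are modeled on
the conventions in config.sh.example; treat them as illustrative and verify
them against that file.

```shell
# config.sh sketch -- variable names modeled on config.sh.example;
# verify against your tree before relying on them.
# Run this test only on a pmem file system, only for debug builds,
# and only at the "check" test level.
CONF_GLOBAL_FS_TYPE="pmem"
CONF_GLOBAL_BUILD_TYPE="debug"
CONF_GLOBAL_TEST_TYPE="check"
```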
Tests can require specific build types:
require_build_type debug
Most tests are short "make check" tests, designed to run quickly when
smoke-checking a build.
Using the unittest library, the C programs run during unit testing get their
output and tracing information logged to various files. These are named with
the test number embedded in them, so a script called TEST0 would commonly
produce:
err0.log log of stderr
out0.log log of stdout
trace0.log trace points from unittest library
vmem0.log trace from libvmem (VMEM_LOG_FILE points here)
Although the above log files are the common case, the TEST* scripts are free
to create any files. It is recommended, however, that a script only create
files listed in .gitignore so they don't accidentally get committed to the
repo.
The TEST* scripts typically use the shell function "check" to check their
results. That function calls the perl script "match" for any .match files
found. For example, to check that out0.log contains the expected output, the
test author creates a file called out0.log.match and commits it to the repo.
The match script provides several pattern-matching macros that allow for normal
variation in the test output:
$(N) an integer (i.e. one or more decimal digits)
$(NC) one or more decimal digits with optional comma separators
$(FP) a floating point number
$(S) ASCII string
$(X) hex number
$(W) whitespace
$(nW) non-whitespace
$(*) any string
$(DD) output of a "dd" run
$(OPT) the entire line is optional
$(OPX) ends a contiguous list of $(OPT)...$(OPX) lines, at least
one of which must match
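For example, a hypothetical out0.log.match for a program that prints a block
count, a mapping address, and a run time might look like this (the fixed text
must match exactly; each $(...) macro absorbs the varying part):

```
number of blocks: $(N)
mapped at 0x$(X)
run time: $(FP) seconds
signature: $(nW)
```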
The small C programs used for unit testing are designed to not worry about the
return values for things not under test. This is done with the all-caps
versions of many common libc calls, for example:
fd = OPEN(fname, O_RDWR);
The above call is just like calling open(2), except that the framework checks
for unexpected errors and halts the unit test if they happen. The result is
usually a very short, compact unit test where most of the code is the
interesting code under test. A full list of unit test macros is in
unittest/unittest.h.
The best way to create a new unit test is to start with an existing one. The
test:
blk_rw
makes an excellent starting point for new tests. It illustrates the common
idioms used in the TEST* scripts, how the small C program is usually
command-line driven to create multiple cases, and the use of unit test macros
like START, OUT, FATAL, and DONE. Every case is different, but a common
pattern for creating new unit tests is to follow steps like these:
$ cp blk_rw new_test_name
$ ...edit Makefile to add new_test_name to TEST list...
$ cd new_test_name
$ mv blk_rw.c new_test_name.c
$ ...edit .gitignore, README, Makefile, new_test_name.c...
$ ...edit TEST0 to create first test case & get it working...
$ ...add more TEST* scripts as appropriate...
When a "check" type test is run (the default), each test should try to limit
the real execution time to a couple minutes or less. "short" and "long"
versions of the tests are encouraged, especially for stress testing.
PORTABILITY CONSIDERATIONS
unittest.sh defines a number of macros to support portability across POSIX
operating systems. See the Portability section of unittest.sh for details.
When a matching line count of an output file is required, use "$GREP -c" rather
than "$GREP | wc -l".
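The difference can be sketched with plain grep standing in for the $GREP
macro:

```shell
# Count matching lines with a single "grep -c" rather than piping
# grep output through "wc -l", which spawns an extra process and on
# some platforms pads its output with leading whitespace.
printf 'PASS\nFAIL\nPASS\n' > out0.log
grep -c '^PASS' out0.log      # prints 2
rm -f out0.log
```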
Use canonical ordering of program options and arguments, i.e., put all options
before any arguments. Not all operating systems support option reordering.