SAVER automatically generates safe patches for three kinds of memory errors common in C programs: Memory Leak, Use-After-Free, and Double-Free. SAVER scales to large C programs of up to 320K lines of code while maintaining its key property of producing safe patches: whenever SAVER succeeds in generating patches for a program, applying them does not introduce new errors into that program. For further technical details, please consult our paper.
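For intuition, the following is a minimal sketch (our own hypothetical example, not one taken from the paper or the benchmarks) of a memory leak of the kind SAVER targets, with the source-level effect of a safe patch shown in place:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper, for illustration only. */
static void use(char *buf, size_t n) { memset(buf, 'x', n); }

int process(size_t n) {
    char *buf = malloc(n);
    if (buf == NULL)
        return -1;
    if (n < 8) {
        /* Without the following call, `buf` leaks on this early return;
         * a safe patch inserts the missing deallocation here. */
        free(buf);
        return -1;
    }
    use(buf, n);
    free(buf);
    return 0;
}

int main(void) { return process(4); }  /* exercises the early-return path */
```

The safety property means that an inserted `free` must not itself create a double-free or use-after-free along any other path through the program.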
We provide two ways to install SAVER:

- A VirtualBox image containing all resources to reproduce the main results of our paper: `ICSE20_SAVER_artifacts.tar.gz`
  - NOTE: The link for the VirtualBox image is currently broken. Instead, you can download the binary tarball from the link provided. For those who encounter difficulties running the binary due to environment issues, the Cerberus platform may be a helpful alternative.
- A full copy of SAVER in its binary executable form: `saver.tar.gz`

Please see `INSTALL.md` for full installation instructions and basic usage of SAVER.
Given a program and a report describing a memory error in it, SAVER generates a patch that fixes the error. The workflow of SAVER can be summarized as follows:
- Frontend & Pre-analysis
  - SAVER uses Facebook Infer's frontend to generate an intermediate representation (a control-flow graph, CFG) of the given program.
  - SAVER runs a flow-insensitive points-to analysis as a pre-analysis to compute the data dependencies and call graph of the program.
- Error Localization
  - Based on the pre-analysis results, SAVER identifies which parts of the program are relevant to the given error.
- Invariant Generation
  - SAVER performs a path-sensitive heap analysis to compute an object flow graph (OFG) that summarizes the program's heap-related behaviors.
- Patch Generation
  - Based on the OFG, SAVER finds a patch that safely fixes the given error.
For technical details and algorithms behind SAVER, please consult our paper.
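To make the workflow concrete, here is a small use-after-free (again our own hypothetical example): the pre-analysis relates the allocation of `p` to its uses, error localization narrows attention to the code between the premature `free` (the source) and the later dereference (the sink), and patch generation relocates the deallocation past the last use. Note that SAVER actually emits patch instructions at the IR level; the C below only sketches the source-level effect of such a fix.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(sizeof *p);
    if (p == NULL)
        return 1;
    *p = 42;

    /* Buggy version (use-after-free):
     *     free(p);             // source: premature deallocation
     *     printf("%d\n", *p);  // sink: use of freed memory
     *
     * The safe fix relocates the deallocation after the last use: */
    printf("%d\n", *p);
    free(p);
    return 0;
}
```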
The VirtualBox image contains the following contents in the directory `~/ICSE20_artifacts/`:

- `table1` contains all the files needed to reproduce Table 1 of our paper.
  - `benchmarks` contains our benchmarks as whole repositories, modified from the originals to insert modeling code or to remove test modules of each program. Each repository is equipped with the scripts necessary for running FootPatch.
  - `resources` contains the following:
    - `benchmarks_org` contains the original versions of the repositories in `table1/benchmarks`. We included them for comparison with the modified ones.
    - `patches` contains manual source-code-level translations of SAVER's patch instructions. We prepared them because SAVER generates patch instructions at the IR level, which can be cumbersome to inspect.
    - `error_reports` contains error reports in `.json` files stating where each error starts and ends (i.e., sources and sinks).
    - `benchmarks.json` records the configuration for each benchmark, the total number of alarms (generated by Infer-0.9.3), and which of them are true alarms.
  - `table1.py` compiles the benchmarks and runs SAVER on them.
- `table2` contains all the files needed to reproduce Table 2 of our paper.
  - `benchmarks` contains our benchmarks as whole repositories, separated by error type and version since they are checked out at different commit IDs. They are also modified from the originals due to build issues.
  - `resources` contains the following:
    - `benchmarks_org` and `patches`, as above (`table1`).
    - `bug.json` records the following for each benchmark:
      - the URL of the original repository
      - the commit ID where the error was fixed
      - the commit ID we checked out when building the program
      - the commands necessary for building the repository
      - the (sub)directory in which to start building, i.e., where to run the commands
  - `table2.py` compiles the benchmarks and runs SAVER on them.
- `tests` contains toy benchmarks for testing SAVER.
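The actual toy files ship inside the image; for a flavor of what such a test exercises, a minimal double-free looks like the following (a hypothetical example of ours, not an actual file from `tests`):

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *s = malloc(8);
    if (s == NULL)
        return 1;
    strcpy(s, "saver");
    free(s);
    /* Double-free: freeing `s` a second time is undefined behavior.
     * free(s);  // a safe patch removes this redundant deallocation
     */
    return 0;
}
```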
Here we explain how to reproduce our main results (i.e., the effectiveness of SAVER in fixing errors).

- Go to the directory `~/ICSE20_artifacts/table1/` and run the script: `python3 table1.py --saver`
- This script first configures and compiles each benchmark. Then it runs the pre-analysis of SAVER and generates patches for each benchmark.
  - `saver.results` shows the overall results: the time taken to generate patches and the number of generated patches for each benchmark.
  - Patches will be generated in the `results/saver` directories. The `.log` files show the running results of SAVER, which include the final patch instructions.
- (Optional) If you want to reproduce the results of FootPatch, run: `python3 table1.py --footpatch`
  - A file named `footpatch.result` will be created, showing how long it took to generate a patch for each error report.
  - Patches will be generated under `results/footpatch`.
We have also written a `README.md` in `~/ICSE20_artifacts/table1` with further details of the experiment.
- Go to the directory `~/ICSE20_artifacts/table2/` and run the script: `python3 table2.py --saver`
- This script first configures and compiles each benchmark. Then it runs the pre-analysis of SAVER and generates patches for each benchmark.
  - `saver.results` shows the overall results, i.e., Table 2 of the paper.
  - Patches will be generated in the `results` directories. The `.log` files show the running results of SAVER, which include the final patch instructions.