
Run all fuzz tests with a single command? And is mvn jqf:fuzz coverage-guided? #176

Closed
tech-bee-10 opened this issue Feb 2, 2022 · 4 comments

@tech-bee-10

Hi,
I am thinking of integrating JQF with a Spring service that has many files. Writing fuzz tests across multiple files and then running the mvn jqf:fuzz command individually for each fuzz test seems time-consuming and tedious. Is there a single command that automatically scans for all @Fuzz targets in classes annotated with @RunWith(JQF.class) and runs them?
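
For context, each fuzz target I'm referring to looks roughly like the sketch below (the package, class, and method names are illustrative, not from my actual service):

```java
package com.example.fuzz; // hypothetical package

import java.time.LocalDate;
import java.time.format.DateTimeParseException;

import edu.berkeley.cs.jqf.fuzz.Fuzz;
import edu.berkeley.cs.jqf.fuzz.JQF;
import org.junit.runner.RunWith;

// A JQF fuzz target is an ordinary JUnit test class run with the JQF runner.
@RunWith(JQF.class)
public class DateParsingFuzzTest {

    // JQF generates the String argument for each trial; with `mvn jqf:fuzz`,
    // generation should be guided by coverage feedback from earlier runs.
    @Fuzz
    public void fuzzDateParsing(String input) {
        try {
            LocalDate.parse(input);              // code under test
        } catch (DateTimeParseException expected) {
            // Malformed dates are expected; any other exception is a bug.
        }
    }
}
```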

Also, running with mvn jqf:fuzz reports coverage of 0.00%. Does that mean the mvn jqf:fuzz command does not perform coverage-guided fuzzing, i.e., inputs are not generated using feedback from previous test inputs?

@rohanpadhye
Owner

If the coverage report shows 0, then something is wrong: classes are not being instrumented for coverage. Try out the tutorials on the wiki (linked from the README) or the standalone example to see what the coverage stats should look like.

For running all tests annotated with @Fuzz, there are two answers. First, all JQF tests are also JUnit tests, so if you just run mvn test as usual, all of them will be run. However, these runs only perform random sampling of the generators, without coverage feedback, and stop fuzzing if no errors are found after a fixed number of runs; in essence, this behaves like quickcheck-style property testing. Second, if you want to fuzz with full coverage feedback and other settings, you will have to run mvn jqf:fuzz multiple times. You can of course enumerate multiple executions of the plugin in your pom.xml to run all tests by default with the test scope or something similar, but the class and method names will have to be enumerated.
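
Concretely, with the plugin as it exists today, you would invoke it once per target, roughly like this (the class and method names below are placeholders; -Dclass, -Dmethod, and -Dtime are the plugin's standard properties):

```sh
# One coverage-guided fuzzing campaign per @Fuzz target.
mvn jqf:fuzz -Dclass=com.example.fuzz.DateParsingFuzzTest -Dmethod=fuzzDateParsing -Dtime=300s
mvn jqf:fuzz -Dclass=com.example.fuzz.OrderServiceFuzzTest -Dmethod=fuzzCheckout -Dtime=300s
```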

If you really need the ability to run all @Fuzz tests with coverage guidance in one command, I can consider adding such a feature to the Maven plugin.

@tech-bee-10
Author

tech-bee-10 commented Feb 3, 2022

Thanks for the response. I added some code and coverage went from 0.00% to 0.01%, so it seems to work. But I couldn't understand what the values in the report mean, e.g. Cycles completed, Queue size, Total coverage, Valid coverage, etc. Is there documentation explaining these values? That would be really helpful.

Also, I tried fuzzing with mocks included and it works seamlessly, which is what I was looking for :)

I would like to propose these additions, which would be a great value-add:

  1. Running a fuzz test adds some overhead to scan the project and build a snapshot. Enumerating the command for multiple tests duplicates this overhead for every test and takes extra time. Running all tests with a single command would make this more efficient.
  2. Some part of the "-Dtime" argument we pass is consumed by the project-build overhead. It would be great to have two arguments: one for the overall execution and one for just the fuzz-test execution.
    For example: I gave -Dtime=20s, and 7 seconds were taken by the build, so only 13 seconds were used for running the fuzz tests. It would help if I could say I want a 30-second timeout for the overall execution and 20 seconds just for running the fuzz tests on multiple inputs.

Thoughts?

@rohanpadhye
Owner

Thanks for the feedback! A lot of the terminology is borrowed from AFL, but I've been meaning to improve documentation in JQF or use simpler terms (#138 has been open for a long time).

I will try to add support for running multiple fuzzing targets in one invocation of the Maven plugin.

For -Dtime, it actually refers to fuzzing time only. Project building (i.e., compilation and pom.xml analysis) is not counted; the clock starts ticking when the fuzzing engine is created and is about to run the first test. However, since Java classes are dynamically loaded, the time required to load and initialize classes may cause the first few inputs to execute more slowly. We cannot avoid this, because we are never sure when we are "done": a new class can be loaded on, say, the 10,000th input if the program executes a new line of code that references a class that has not been loaded before. The JVM loads a class the first time an instruction referencing it is executed, and we do not have an easy way to stop/start the clock during this process. It is similar to a test that performs some expensive I/O operations: JQF cannot really separate CPU time from I/O time; it just counts wall-clock time after the first call to the test method.
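
To illustrate the class-loading point with a small stand-alone sketch (nothing JQF-specific; the class and method names are made up):

```java
// Lazy class initialization: the static block of RarelyUsed runs only when
// the class is first used, which during fuzzing may happen on an arbitrarily
// late input that reaches new code.
public class LazyLoadingDemo {

    static class RarelyUsed {
        static {
            // Runs exactly once, at the first active use of RarelyUsed.
            System.out.println("RarelyUsed initialized");
        }

        static int process(int x) {
            return x * 2;
        }
    }

    public static void main(String[] args) {
        System.out.println("start");
        if (args.length > 0) {
            // Only now does the JVM initialize RarelyUsed, so the one-time
            // cost lands on whichever "input" first triggers this branch.
            System.out.println(RarelyUsed.process(args.length));
        }
        System.out.println("end");
    }
}
```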

@rohanpadhye
Owner

> I will try to add support for running multiple fuzzing targets in one invocation of the Maven plugin.

Going back to the original design decisions of the Maven plugin, here are my thoughts. Currently, you can fuzz all the methods in your application by simply running mvn test. All fuzz targets will be fuzzed without coverage guidance for a fixed number of trials. This is currently hardcoded to 100 to conform with junit-quickcheck, but we can easily make this a parameter that's configurable on the command-line or via the @Fuzz annotation. This is useful for quick validation of code changes by means of rapid property testing.

I can't immediately think of a good use case for running coverage-guided fuzzing on all test targets, because typically the coverage guidance only pays off when running for a long time, e.g. several hours. At this time scale, the overhead of having to re-run Maven for the next fuzzing target is going to be a very tiny fraction of the overall cost of fuzzing. Are you running coverage-guided fuzzing for only a few seconds at a time?
