Sanity checks #8
> … completely the same

These are not all the tests we discussed, right?
What am I missing? What else should I add?
This is a photo I took from the last meeting (the first three points are related to testing):
![unnamed](https://user-images.githubusercontent.com/7677814/55576260-2b0f6300-5711-11e9-8540-d9270ef5ccbc.jpg)
> GIVEN fixed hyperparameters for the FGSM attack and a sufficient number of epochs for training
@zvonimir Thank you, I updated the PR based on your comment. I wasn't certain how to precisely assert the THEN clause that the results of the attacks should be completely (or almost) the same, so I did the following:
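Roughly, the check amounts to something like the sketch below (the names are hypothetical and the actual assertions in the PR may differ):

```python
# Sketch of the "completely (or almost) the same" comparison between two
# attack runs; adv_1 and adv_2 are hypothetical arrays of adversarial
# samples, and the PR's actual assertions may differ from this.
import numpy as np

def assert_runs_almost_equal(adv_1: np.ndarray, adv_2: np.ndarray,
                             tol: float = 1e-6) -> None:
    # Compare a summary statistic of the two perturbation sets ...
    assert abs(adv_1.mean() - adv_2.mean()) < tol
    # ... and the samples themselves, element-wise within tolerance.
    assert np.allclose(adv_1, adv_2, atol=tol)
```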
Should I add something else? What do you think?
Why would the accuracies of the NNs change before and after the attack? I mean, your target NNs remain constant, so I don't get that part. Yep, the perturbations should be the same, meaning that the generated pairs of adversarial images should be (almost) identical. So not just the average diff and so on, but the actual adversarial images should be the same. Georg mentioned opening a few pairs of images in Photoshop and doing a diff there to make sure they are the same.
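For reference, the diff Georg suggested doing in Photoshop can also be done programmatically; a minimal sketch, with hypothetical file names for adversarial images saved by the two attack runs:

```python
# Sketch: pixel-level diff of two adversarial images, the programmatic
# equivalent of a Photoshop difference layer. The file names are
# hypothetical placeholders for images saved by the two attack runs.
import numpy as np
from PIL import Image

img_a = np.asarray(Image.open("adv_run1_sample0.png"), dtype=np.int16)
img_b = np.asarray(Image.open("adv_run2_sample0.png"), dtype=np.int16)

diff = np.abs(img_a - img_b)
print("max pixel diff:", int(diff.max()))         # 0 means identical images
print("differing pixels:", int((diff > 0).sum()))
```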
Sorry, I didn't express myself precisely enough. What I meant is the following:
I added plotting of the samples (image below). The four columns in the image represent the following:
Regarding completely the same (pixel values of the) adversarial samples: they occur only when the attack is executed twice against the same NN (or against two NNs with the same weights, i.e., trained with the same seed etc., which is effectively the same NN, as verified in https://github.com/soarlab/AAQNN/pull/8/files#diff-d0b33b1baec7d17a5a87a9ce85c0f612). This is verified with this assertion:
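A minimal sketch of what that assertion amounts to (hypothetical names; the exact code is in the PR diff linked above):

```python
# Sketch of the exact-equality assertion; adv_run1/adv_run2 stand for the
# adversarial samples produced by two executions of the attack against
# the same NN (hypothetical names, the PR's actual code may differ).
import numpy as np

def assert_identical_adversarial_samples(adv_run1: np.ndarray,
                                         adv_run2: np.ndarray) -> None:
    assert adv_run1.shape == adv_run2.shape, "shape mismatch between runs"
    assert np.array_equal(adv_run1, adv_run2), \
        "pixel values of adversarial samples differ between the two runs"
```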
And I added plotting of that (image below). The columns represent the same values as in the previous image. Do you maybe have any other ideas for sanity checks? To me the attack seems good for our use case.
Now that I think about it, it might be the case that the perturbation introduced by this attack is always of the same size, because it just changes the image in the opposite direction of the gradient by some eps. Nevertheless, if we measure robustness per quantization level, it is still a suitable attack. I believe an optimization-based approach would be more informative regarding the needed perturbation, i.e., the results could vary depending on the quantization. For instance, #7.
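To make the fixed-size intuition concrete, here is a minimal FGSM sketch (assuming a Keras classifier; the project may use a different attack implementation):

```python
# Minimal FGSM sketch (assuming a Keras classifier; the project may use a
# different attack library). It shows why the perturbation size is fixed:
# every pixel is shifted by exactly +/- eps, so the L-infinity norm of
# (x_adv - x) is always eps, regardless of the model or quantization.
import tensorflow as tf

def fgsm(model, x, y, eps):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x))
    grad = tape.gradient(loss, x)
    # The untargeted variant steps along the gradient sign to increase the
    # loss; a targeted variant would step in the opposite direction.
    x_adv = x + eps * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)
```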
I think we should maybe move this exchange to email so that Georg can participate as well. Could you please summarize all this in an email to Georg and me? Thanks!
@zvonimir can I merge this branch?
Yes, please go ahead and merge.
Three tests introduced (all passing):

1. GIVEN an NN and fixed hyperparameters for the FGSM attack
   WHEN the attack is executed twice against the NN
   THEN the results should be completely the same

2. GIVEN a random seed and hyperparameters
   WHEN two neural networks are trained
   THEN they should be completely the same

3. GIVEN a sufficient number of epochs for training
   WHEN two neural networks with the same architecture are trained (using different seeds)
   THEN they should have similar accuracy
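A pytest-style sketch of the first test, with build_and_train_nn and run_fgsm as hypothetical stand-ins for the project's own helpers:

```python
# Pytest-style sketch of the first sanity check; build_and_train_nn and
# run_fgsm are hypothetical helpers standing in for the project's code.
import numpy as np

def test_fgsm_attack_is_deterministic():
    # GIVEN an NN and fixed hyperparameters for the FGSM attack
    model, (x_test, y_test) = build_and_train_nn(seed=42)
    eps = 0.1

    # WHEN the attack is executed twice against the NN
    adv_1 = run_fgsm(model, x_test, y_test, eps=eps)
    adv_2 = run_fgsm(model, x_test, y_test, eps=eps)

    # THEN the results should be completely the same
    assert np.array_equal(adv_1, adv_2)
```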