This repository has been archived by the owner on Oct 18, 2019. It is now read-only.

Combat adversarial examples/training? #44

Open
AaronYALai opened this issue Mar 15, 2017 · 0 comments

@AaronYALai

Thanks for opening this amazing trained model to everyone!
I'm just wondering whether there is a way to combat adversarial examples or malicious training like this:
https://github.com/tjwei/play_nsfw

since its algorithm reverses the classification results (effectively attacking this model?).
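
For reference, by "combat adversarial examples" I mean something along the lines of standard FGSM adversarial training (Goodfellow et al.). The sketch below is purely illustrative: it uses PyTorch with a stand-in model, a made-up epsilon, and random data, none of which comes from this repository.

```python
# Illustrative FGSM adversarial training sketch (not from open_nsfw).
# The model, epsilon, and data below are stand-ins for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in binary classifier; any image classifier could take its place.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
epsilon = 0.03  # perturbation budget (assumed value)

def fgsm_perturb(x, y):
    """Craft FGSM adversarial examples for inputs x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient sign, keep pixels in [0, 1].
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy training loop: mix clean and adversarial examples at each step.
for step in range(10):
    x = torch.rand(4, 3, 64, 64)       # stand-in images in [0, 1]
    y = torch.randint(0, 2, (4,))      # stand-in labels (e.g. SFW / NSFW)
    x_adv = fgsm_perturb(x, y)
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    optimizer.zero_grad()              # clears grads left over from the FGSM step
    loss.backward()
    optimizer.step()
```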

Thanks again for this model!
