Most existing machine learning classifiers are highly vulnerable to adversarial examples. In this video we take a deep dive into how adversarial examples generalize and what we can do to combat the issue. We also discuss the latest research on Adversarial Patch (Research at Google).
Please support me on Patreon: https://patreon.com/aijournal
Follow us on Twitter: https://twitter.com/aijournalyt