Tricking Artificial Intelligence with Adversarial Examples | Adversarial Patch (Google)


Most existing machine learning classifiers are highly vulnerable to adversarial examples. In this video we take a deep dive into how adversarial examples generalize across models and what we can do to combat the issue. We also discuss the recent research work on Adversarial Patch (Research at Google).

Research papers:
Adversarial Patch (Brown et al.): https://arxiv.org/abs/1712.09665
Explaining and Harnessing Adversarial Examples (Goodfellow et al.): https://arxiv.org/abs/1412.6572
Adversarial Examples in the Physical World (Kurakin et al.): https://arxiv.org/abs/1607.02533
Practical Black-Box Attacks against Machine Learning (Papernot et al.): https://arxiv.org/abs/1602.02697
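For context, here is a minimal sketch of the Fast Gradient Sign Method from the Goodfellow et al. paper linked above, which is one standard way to craft the adversarial examples the video discusses. The model, epsilon value, and pixel range below are illustrative assumptions, not details taken from the video.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    x: batch of input images with pixel values in [0, 1] (assumed range)
    y: true labels for the batch
    epsilon: L-infinity perturbation budget (illustrative value)
    """
    x = x.clone().detach().requires_grad_(True)

    # Compute the classification loss on the clean inputs.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Take one signed gradient step per pixel in the direction that
    # increases the loss, then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

The same perturbations often transfer to other classifiers, which is what makes black-box attacks like those in the Papernot et al. paper practical.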

Please support me on Patreon : https://patreon.com/aijournal

Follow us on Twitter: https://twitter.com/aijournalyt

Comments

Fahd Mahraz says:

What is the difference between the approach proposed in the Adversarial Patch paper by T. B. Brown et al. and the approaches presented by the other authors?

George supreeth says:

Nicely explained! Thank you.

Anukool Srivastava says:

Wow, really informative!
What's the method to defend against adversarial attacks, though? 💭
