We like to think of technology as neutral, but machines can be just as biased as the humans who develop them.
A common objection to concerns about bias in machine learning models is that humans are biased too. That is true, yet machine learning bias differs from human bias in several key ways that we need to understand.
Researchers at the Center for Information Technology Policy (CITP) at Princeton University try to answer the question “Is artificial intelligence as biased as humans are?” (Ars Technica)
In machine learning, no algorithm works equally well across all problems, a result researchers call the “no free lunch” theorem. At the 2016 World Science Festival, cognitive psychologist Gary Marcus discussed what this theorem means and why it implies that there is no such thing as an unbiased algorithm. Watch the full program here: https://youtu.be/zf4eM-NQ0TM (Original Program Date: June 3, 2016)
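The point Marcus makes can be seen in a toy sketch: every learner encodes an inductive bias, and a bias that helps on one problem hurts on another. The 1-d datasets and the two rules below are hypothetical illustrations, not from the talk.

```python
# Toy illustration of the "no free lunch" idea: two learners with different
# inductive biases, each winning on a different problem.

def linear_rule(x):
    # Bias: assumes labels follow a single threshold on x.
    return 1 if x >= 0.5 else 0

def nearest_neighbor(x, train):
    # Bias: assumes nearby points share labels (memorizes the data).
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# Problem A: a true threshold rule with one mislabeled training point.
# The threshold bias shrugs off the noise; 1-NN memorizes it.
train_a = [(0.2, 0), (0.4, 0), (0.55, 0), (0.8, 1)]   # (0.55, 0) is noise
test_a = [(0.3, 0), (0.5, 1), (0.6, 1), (0.9, 1)]

# Problem B: labels alternate with x, so no single threshold can fit,
# while 1-NN tracks the local structure.
train_b = [(0.1, 0), (0.3, 1), (0.6, 0), (0.9, 1)]
test_b = [(0.15, 0), (0.35, 1), (0.55, 0), (0.85, 1)]

for name, train, test in [("noisy threshold", train_a, test_a),
                          ("alternating", train_b, test_b)]:
    lin = accuracy(linear_rule, test)
    nn = accuracy(lambda x: nearest_neighbor(x, train), test)
    print(f"{name}: linear={lin:.2f}, 1-NN={nn:.2f}")
# -> noisy threshold: linear=1.00, 1-NN=0.50
# -> alternating: linear=0.50, 1-NN=1.00
```

Neither rule is "better" overall; which bias wins depends entirely on which problem you hand it, which is exactly why an algorithm with no bias at all cannot exist.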
No matter how quickly artificial intelligence evolves, it can’t outpace the biases of its human creators. Microsoft researcher Kate Crawford delivered a keynote on the subject, titled “The Trouble with Bias,” at the Neural Information Processing Systems (NIPS) conference on Tuesday. Crawford discussed the different types of bias involved in computing, specifically highlighting how machine learning can absorb human biases such as racial profiling: “Instead of just thinking about machine learning contributing to decision making in, say, hiring or criminal justice, we also need to think about the role of machine learning in harmful representations of identity.” https://gizmodo.com/microsoft-researcher-details-real-world-dangers-of-algo-1821129334
Princeton University researchers Arvind Narayanan, Aylin Caliskan and Joanna Bryson discuss their research on how human biases seep into artificial intelligence.
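The Princeton team’s published work measures bias in word embeddings with association tests: checking whether a target word sits closer, in vector space, to one set of attribute words than to another. A minimal sketch of that style of test is below; the 4-d vectors are made up for illustration, not real embedding data.

```python
# WEAT-style association sketch: positive score means the target word
# leans toward attribute set A, negative means toward set B.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def association(word_vec, attrs_a, attrs_b):
    # Differential association: mean similarity to A minus mean similarity to B.
    mean_a = sum(cosine(word_vec, v) for v in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(word_vec, v) for v in attrs_b) / len(attrs_b)
    return mean_a - mean_b

# Hypothetical toy embeddings (not trained vectors).
emb = {
    "flower":     [0.9, 0.1, 0.0, 0.2],
    "pleasant":   [0.8, 0.2, 0.1, 0.3],
    "unpleasant": [0.2, 0.8, 0.0, 0.1],
}

score = association(emb["flower"], [emb["pleasant"]], [emb["unpleasant"]])
print(f"flower leans {'pleasant' if score > 0 else 'unpleasant'} ({score:+.3f})")
```

Run over real embeddings trained on web text, tests like this recover the same associations documented in human implicit-bias studies, which is how ordinary language data ends up seeding bias in AI systems.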
Artificial intelligence is being used for many things: diagnosing cancer, stopping the deforestation of endangered rainforests, helping farmers in India with crop insurance, finding you the Fyre Fest documentary on Netflix (or Hulu), even saving money on your energy bill. But how could something so helpful be racist?