
Bias Traps in AI: A panel discussing how we understand bias in AI systems, highlighting the latest research insights and why issues of bias matter in concrete ways to real people.

Solon Barocas, Assistant Professor of Information Science, Cornell University
Arvind Narayanan, Assistant Professor of Computer Science, Princeton University
Cathy O’Neil, Founder, ORCAA
Deirdre Mulligan, Associate Professor, School of Information and Berkeley Center for Law & Technology, UC Berkeley
John Wilbanks, Chief Commons Officer, Sage Bionetworks

AI Now 2017 Public Symposium – July 10, 2017

Follow AI Now on Twitter: https://twitter.com/AINowInitiative
Subscribe to our channel: https://www.youtube.com/c/ainowinitiative
Visit our website: https://artificialintelligencenow.com

Comments

silverskid says:

Cathy O'Neil's Fox Co. thought experiment suggests that the only way to define success is in terms of what counted as success prior to the AI intervention. If so, then AI instruments would be useless for introducing new standards, criteria, or policy objectives. But isn't it possible that an engineer who (in her hypothetical) is serious about reversing discrimination could actually write the software so that it has to include a certain number of female applicants, and then establish criteria that can be used to gauge the promise of future "success"? I think O'Neil's book and work are great, but in a case like this I'd be surprised if engineers were necessarily stuck with a "past is prologue" scenario.
