
Artificial intelligence is everywhere – it selects the next video on YouTube and your route on Google Maps. But soon, AI will make decisions that significantly impact your life – diagnosing illnesses, performing surgery, and driving your kids’ school bus. The stakes are high. Bradley Hayes argues that we can only trust AI if we understand how it makes decisions and why. We need explainable AI.

Bradley Hayes directs the Collaborative AI and Robotics Research Lab at the University of Colorado Boulder, where he is an Assistant Professor. As CTO of Circadence Corporation, he develops revolutionary educational programming to expand the cybersecurity workforce. He completed a Ph.D. in Computer Science at Yale University’s Social Robotics Lab and was a postdoctoral associate in MIT’s Interactive Robotics Group. Currently, he’s developing novel explainable AI techniques for safe human-robot collaboration. He will never say no to sushi.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at



Kono Dutch says:

If you tell a robot 8 + 9 = 0, it will never figure out that you are wrong… they are not reliable.

shagull salim says:

Which type of decision-making (emotional decision-making or rational decision-making) can be more difficult to embed in a robot (or artificial intelligence)? Why?
Can someone please answer my question?

Lee Carlson says:

When your GPS attempts to drive you over a cliff because it insists that there is a road in that location.

Oicub says:

The critical thinker project dot calm
