Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”
TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and much more.
Find closed captions and translated subtitles in many languages at http://www.ted.com/translate
Follow TED news on Twitter: http://www.twitter.com/tednews
Like TED on Facebook: https://www.facebook.com/TED
Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector
7:12, That's what I'm hoping for.
5:11, It's just something we'll have to figure out together.
4:57, that's probably how it thinks of us.
1:16, Just faces? I assumed vocal analysis would be a part of it too.
Maybe the networks could broadcast that info during the next presidential debates
7:20 9:28 9:55
Notice the reaction of her co-worker. The reality of the problems with her work was too much to bear, so she immediately "ran away" as fast as she could. It wasn't just that she felt insulted; the potential damage it could cause clearly disturbed her too.
Doubtful that any "perfect" machine could have explained what she said in this presentation as well as she did.
Unless of course an "imperfect" HUMAN programmed it to say it…
She is so fucking hot!
Code a "show your work" output.
What do we do when someone comes to a conclusion we don't understand? We ask them how they arrived at it, and we often get an answer, even if it's messy and doesn't make sense. I understand everything is easier said than done, but if we can program a computer to learn, why not program a computer to explain its logic? If this, if this, if this, if this all at once, then that ad, that application, that identifier.
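The "show your work" idea above is easy to demonstrate for hand-written logic, though Tufekci's point is that modern learned systems don't decompose into legible rules like this. Here is a minimal sketch, with entirely invented rules and applicant fields, of a screener whose every decision carries a trace of the rules that fired:

```python
# Hedged sketch: a rule-based screener that "shows its work".
# All rules, thresholds, and applicant fields are hypothetical.

def screen_applicant(applicant):
    fired = []  # trace of every rule that contributed to the decision
    if applicant.get("years_experience", 0) < 2:
        fired.append("years_experience < 2")
    if applicant.get("gap_in_employment", False):
        fired.append("gap_in_employment is True")
    decision = "reject" if fired else "advance"
    return decision, fired

decision, reasons = screen_applicant(
    {"years_experience": 1, "gap_in_employment": True}
)
print(decision)  # reject
print(reasons)   # ['years_experience < 2', 'gap_in_employment is True']
```

With a deep network there is no such list of fired rules to print out, which is exactly the opacity the talk is about.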
It's an interesting talk, as most TEDs are. But cherry-picking examples like those two criminals is pretty weak. A sample of two people is bad science. It may or may not be accurate, but the method is bad.
Incredible!
I enjoy lying online.
But why don't we just let AI do its thing, then check the results, and if we don't like them we can impose certain pre-programmed rules on the AI?
For example, if we find the AI is weeding out people with potential for depression (to use her example), we can impose a pre-programmed rule on the AI to not weed out people who could potentially get depressed in the coming 3 years or whatever.
My point is that we should not stop progress just because we fear the potential consequences. In fact, history has shown that science will progress anyway, regardless of our fears. Instead, I say we go ahead with progress in an iterative, trial-and-error manner, as in the example above.
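The iterative-override idea above can be sketched as a wrapper around a black-box model. Everything here is hypothetical: the model, the "depression_risk" feature, and the rule itself. Note the catch: the guard can only override decisions it can detect, and with an opaque model we may not know that depression risk is what drove the rejection in the first place.

```python
# Hedged sketch: wrap an opaque model with a post-hoc, pre-programmed
# rule that overrides outputs we have decided are unacceptable.

def black_box_model(candidate):
    # Stand-in for a learned model that (problematically) penalizes
    # candidates it predicts may become depressed.
    if candidate.get("depression_risk", 0.0) > 0.5:
        return "reject"
    return "hire"

def guarded_model(candidate):
    raw = black_box_model(candidate)
    # Imposed rule: never reject on predicted depression risk alone.
    if raw == "reject" and candidate.get("depression_risk", 0.0) > 0.5:
        return "hire"  # override the model's decision
    return raw

print(guarded_model({"depression_risk": 0.9}))  # hire
```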
facial expressions 🙂
The real problem with deep learning isn't the algorithms but the data. Only the data are biased, but that's a very big problem.
It is important what you talk about.
We should design a computer to compute philosophy before we create ones for war, etc., and see what it finds.
So we need an algorithm that checks to see if other algorithms are biased. :p
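Tongue-in-cheek or not, a bias-checking algorithm is a real thing: one simple form is a demographic-parity audit comparing selection rates across groups. The sketch below uses invented data; the 80% threshold echoes the common "four-fifths rule" of thumb, not anything from the talk.

```python
# Hedged sketch: an "algorithm that checks other algorithms for bias"
# via a demographic-parity audit of selection rates. Data are invented.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    # Pass if the worst-off group's rate is at least 80% of the best-off's.
    rates = selection_rates(decisions)
    return min(rates.values()) >= threshold * max(rates.values())

audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False), ("B", False)]
print(selection_rates(audit))     # group A ~0.67, group B 0.25
print(passes_four_fifths(audit))  # False
```

Of course this only audits outcomes we thought to measure; it won't catch a proxy variable nobody is tracking, which loops back to the talk's point.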
You mean when AI proves your moral bias is utter BS, and that makes you uncomfortable?
I totally agree: computing machines, like any other tool humanity has created, are just an extension of our capabilities, not a replacement.
Maybe the people with a higher risk of depression have that risk because machine learning "algorithms" like these deny them jobs?
I believe that the point of this talk extends beyond computer algorithms. Many people do not sufficiently appreciate the power they have over the lives of others. The executive who turned her back on the expert here, likely because her questions and doubts were uncomfortable, likely behaves like this routinely. I like to say, contra the Godfather, that nothing truly is business and everything is personal.
The only real reason the human race will become more moral, is when we (mankind) learn the just recently revealed ultimate Truth of Life. The truth explains the nature of everything, it explains the big picture of life in every facet, it explains our true history and true purpose. Google truthcontest and read the present
Steven Universe's Dogcopter
Tip: Don't watch too many of these videos. Machine learning algorithms might judge you dangerous to their survival, or worse, they might try to sell you philosophical books on moral decisions.
9:53 Eddi from RBTV?!
6:10 When it comes to the data the machine might use to hire someone, that data was input by a human. Its accuracy is itself subjective, but the computer doesn't know that. As humans, we can recognize where the machine would have deficiencies like that and not use it for such things until we have fed it objective information (security cameras are a thing ;P).