Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and much more.


  • Views: 112,781
  • Categories: TED


Stem Factory says:

7:12, That's what I'm hoping for.

Stem Factory says:

5:11, It's just something we'll have to figure out together.

Stem Factory says:

4:57, that's probably how it thinks of us.

Stem Factory says:

1:16, Just faces? I assumed vocal analysis would be a part of it too.
Maybe the networks could broadcast that info during the next presidential debates.

Keith Bell says:

Notice the reaction of her co-worker. The reality of the problems in her efforts was too much to bear, so she immediately "ran away" as fast as she could. It wasn't just insulting; you know the potential damage it could produce disturbed her too.
It's doubtful that any "perfect" machine could have explained what she said in this presentation as well as she did.
Unless, of course, an "imperfect" HUMAN programmed it to say it…

ASMRSounds says:

She is so fucking hot!

GTaichou says:

Code a "show your work" output.

What do we do when someone comes to a conclusion we don't understand? We ask them how they arrived at it, and we often get an answer, even if it's messy and doesn't make sense. I understand everything is easier said than done, but if we can program a computer to learn, why not program a computer to explain its logic? If this, if this, if this, if this all at once, then that ad, that application, that identifier.
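The commenter's "if this, if this, then that ad" idea can be sketched in a few lines: a toy classifier that records every condition it checked, so its conclusion can be audited after the fact. The rules, feature names, and threshold below are invented for illustration; real learned models are far harder to trace, which is part of the talk's point.

```python
# A minimal "show your work" sketch: a rule-based ad picker that
# returns both its decision and the trace of rules it evaluated.
# All feature names and rules here are hypothetical.

def classify_with_trace(profile):
    """Return (decision, trace): the label plus which rules fired."""
    rules = [
        ("clicked_sports_ads", lambda p: p.get("clicked_sports_ads", 0) > 3),
        ("watches_late_night", lambda p: p.get("active_hour", 12) >= 23),
        ("searched_tickets",   lambda p: "tickets" in p.get("queries", [])),
    ]
    trace = [(name, bool(cond(profile))) for name, cond in rules]
    fired = [name for name, hit in trace if hit]
    # "If this, if this, all at once, then that ad":
    decision = "show_sports_ad" if len(fired) >= 2 else "show_generic_ad"
    return decision, trace
```

For a hand-written rule set this trace is the explanation; the open research problem is producing an equally faithful trace for a deep network whose "rules" are millions of learned weights.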

Ben DeVeny says:

It's an interesting talk, as most TED talks are. But cherry-picking examples like those two criminals is pretty weak. A sample of two people is bad science. It may or may not be accurate, but the method is bad.

Banana Trish says:

I enjoy lying online.

Beshr Al Khateeb says:

But why don't we just let AI do its thing, then check the results, and if we don't like them, impose certain pre-programmed rules on the AI?

For example, if we find the AI is weeding out people with a potential for depression (to use her example), we can impose a pre-programmed rule on the AI not to weed out people who could potentially become depressed in the coming 3 years or whatever.
My point is that we should not stop progress just because we fear the potential consequences. In fact, history has shown that science will progress regardless of our fears. Instead, I say we go ahead with progress in an iterative, trial-and-error manner, as in the example above.
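The iterative approach described above — let the model decide, then override outcomes we have decided are unacceptable — can be sketched as a post-hoc veto layer. Everything here is a stand-in: the model score, the 0.5 cutoff, and the `flagged_depression_risk` field are assumptions, not anything from the talk's actual systems.

```python
# Sketch of a post-hoc constraint layer: accept the model's verdict
# unless a hand-written, human-imposed rule vetoes it.

def constrained_decision(model_score, candidate, rules):
    """Run the model, then let each rule optionally override the outcome."""
    decision = "reject" if model_score(candidate) < 0.5 else "hire"
    for rule in rules:
        override = rule(candidate, decision)
        if override is not None:
            decision = override  # human policy wins over the model
    return decision

# Pre-programmed rule for the commenter's example: never auto-reject
# someone solely flagged as a depression risk; escalate instead.
def no_depression_screening(candidate, decision):
    if decision == "reject" and candidate.get("flagged_depression_risk"):
        return "human_review"
    return None
```

The catch the talk raises is that this only works for failure modes we already know to write rules for; an opaque model can discriminate along dimensions no one thought to veto.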

prathamesh sonar says:

facial expressions 🙂

Stéphane Halimi says:

The real problem with deep learning isn't the algorithms but the data. Only the data are biased, but that's a very big problem.

Detlef Roters says:

What you talk about is important.

Blast of Fresh Air says:

We should design a computer to compute philosophy before we create ones for war etc., and see what it finds.

Alex Kunz says:

So we need an algorithm that checks to see if other algorithms are biased. :p
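Tongue-in-cheek or not, an "algorithm that audits algorithms" can start very simply: compare the rate of favorable outcomes across groups. The sketch below uses a demographic-parity ratio; the 0.8 threshold mirrors the informal "four-fifths rule" from hiring audits and is an assumption here, not a universal standard.

```python
# A minimal bias audit: compute each group's rate of favorable
# (1-valued) decisions and flag the system if the worst-off group's
# rate falls below 80% of the best-off group's rate.

def parity_audit(outcomes, threshold=0.8):
    """outcomes: {group_name: list of 0/1 decisions}.
    Returns (ratio, biased) where ratio = min_rate / max_rate."""
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < threshold
```

Such an audit catches only disparities you thought to measure, across groups you thought to define, which is why it complements rather than replaces the human oversight the talk calls for.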

James says:

You mean when AI proves your moral bias is utter BS, and that makes you uncomfortable?

Rogelio Moisés Castañeda says:

I totally agree. Computing machines, like any other tool humanity has created, are just an extension of our capabilities, not a replacement.

BlastForward says:

Maybe the people with a higher risk of depression have that risk because such machine learning "algorithms" deny them jobs?

Stéphane Surprenant says:

I believe that the point of this talk extends beyond computer algorithms. Many people do not sufficiently appreciate the power they have over the lives of others. The executive who turned her back on the expert here, likely because her questions and doubts were uncomfortable, likely behaves like this routinely. I like to say, contra the Godfather, that nothing truly is business and everything is personal.

guru says:

The only real reason the human race will become more moral, is when we (mankind) learn the just recently revealed ultimate Truth of Life. The truth explains the nature of everything, it explains the big picture of life in every facet, it explains our true history and true purpose. Google truthcontest and read the present

David Mendez says:

Steven Universe's Dog Copter

nivolord says:

Tip: Don't watch too many of these videos. Machine learning algorithms might judge you dangerous for their survival, or worse, they might try and sell you philosophical books on moral decisions.

Christos Panagiotidis says:

9:53 Eddi from RBTV?!

kght222 says:

6:10 When it comes to the data that the machine might use to hire someone, that data was input by a human. Its accuracy is itself subjective, but the computer doesn't know that. As humans we can recognize where the machine would have deficiencies like that and not use it for such things until we have fed it objective information (security cameras are a thing ;P).


