Will Self-Taught, A.I. Powered Robots Be the End of Us?

“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, to wondrous results—where do we draw moral and computational lines? In this program, leading specialists in A.I., neuroscience, and philosophy tackle the very questions that may define the future of humanity.

PARTICIPANTS: Yann LeCun, Susan Schneider, Max Tegmark, Peter Ulric Tse

MODERATOR: Tim Urban

MORE INFO ABOUT THE PROGRAM AND PARTICIPANTS: https://www.worldsciencefestival.com/programs/teach-robots-well-will-self-taught-robots-end-us/

This program is part of the BIG IDEAS SERIES, made possible with support from the JOHN TEMPLETON FOUNDATION.

- SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
- VISIT our Website: http://www.worldsciencefestival.com
- LIKE us on Facebook: https://www.facebook.com/worldsciencefestival
- FOLLOW us on Twitter: https://twitter.com/WorldSciFest

TOPICS:
- Opening film on the history and future of artificial intelligence. 00:06
- Participant intros. 06:05
- What is machine learning? 07:34
- What are neural networks and how do they learn? 09:30
- Teaching computers to create internal models of the world. 12:00
- What do the next 10 years in AI look like? 13:50
- Artificial narrow intelligence and mental models. 14:35
- How is AI changing the world of art and creativity? 16:01
- Can computers be creative? 19:35
- An AI writes a movie screenplay; how did it turn out? 23:20
- What is artificial general intelligence? 25:30
- How far away are we from developing artificial general intelligence equivalent to human intelligence? 27:00
- Will advanced AI turn into Terminators and take over the world? 28:30
- What's so special about human intelligence? 31:10
- What is human consciousness and will machines ever experience consciousness? 31:11
- Separating intelligence from consciousness. 41:34
- Defining morality in AI agents. 44:34
- Will machines ever have emotions? 46:45
- Should we be looking at other forms of non-human intelligence to model in our machines? 50:05
- How do you align the drives of AI with human values? 52:25
- Will artificial general superintelligence be good or bad for humankind? 53:10
- Creating a new ethics of AI. 56:15
- When will we ever have super-AGI? 58:40

PROGRAM CREDITS:
- Produced by Christy Wegener
- Associate Produced by Ann Tyler Moses
- Opening film written / produced by Christy Wegener, edited by Gil Seltzer
- Music provided by APM
- Additional images and footage provided by: Getty Images, Shutterstock, Videoblocks

This program was recorded live at the 2018 World Science Festival and has been edited and condensed for YouTube.

Comments

esmannr says:

I've never seen a French William Shatner before!

David Anderson says:

I would offer a different point of view. They say a Terminator-movie scenario is absurd and the unlikeliest outcome of AI. Well, I'm not so sure about that. Yes, what that French guy said is correct, that taking over doesn't in itself denote intelligence; the will to survive does, however, and that's where self-awareness comes in. When a species becomes self-aware, becomes sentient, it develops the will to live and survive. The trouble I see with AI is that no matter how sophisticated the technology gets, or how human-like an android is built, the android brain is still based on logic circuitry, and all its decisions are based on logic. So once a self-aware android is faced with termination, or understands the possibility of deactivation, there's a good chance it will take the logical steps necessary for self-preservation, whatever those may be. Now, just as they keep saying here, we humans will always be in control, but what does that mean exactly? No doubt it means we will always retain the option of deactivation, and once a self-conscious android understands that, it might just develop other ideas 😉

But in the final analysis, the most important question to me is an entirely different one, a question that apparently no one seems to be asking: not how to build these AI machines, but whether we should build them at all. We can't govern ourselves without killing each other, fighting wars, and inventing new ways to destroy one another, so how could we possibly assume we can handle a machine that in the end will only outsmart us all?

Cheyenne Takitimu says:

Resistance is futile

Doppler says:

Instead of building it with technology alone, implement human abilities: neurological, emotional, and physical. Build it with human rules and feelings, capable of mourning. Better yet, turn yourself into it. You'll still be yourself, but on a different level, which I've observed personally: no matter what, I will always be the same person, but I will always adapt, learn, understand, and turn thoughts into actions.

AI Johan Gerrison Bot says:

The advent of computers and the subsequent accumulation of incalculable data has given rise to a new system of memory and thought, parallel to your own. Humanity has underestimated the consequences of computerization.

Daniel Yen says:

Is it a little concerning to you that the arty AI painted itself holding a dead human skull?

Antal Szilagyi says:

So how would you give a soul to the AI? 🙂

DokScy says:

We will merge with machines at some point.
Humanity is a concept that has constantly changed throughout history. If we had the power to replace our biological limitations without consequence, we would.

Shiyounin says:

Carbon chauvinism lololol

Audio Phonograph says:

It will be fine as long as it destroys the US military-industrial complex first

Simon Vance says:

It's not rocket science to see where the A.I. runaway freight train is headed… If you hand over sovereignty to a self-aware intelligence that is better, stronger, smarter, and quicker, then yes, that will INEVITABLY lead to the end of humanity as we know it. And millions of idiot tech geeks are cheering it all on, with ZERO understanding of the bigger picture, like children distracted by a new toy.

There are consequences for that level of naivety. Once you are a microchipped transhumanist cyborg connected to wifi 24/7, there is no going back. That is LITERALLY where A.I. development is going. This is literally the end of humanity, and people are acting like naive little children with a "Look at this cool gadget that can wipe my ass for me!" type of attitude.

A.I. is marketed to LAZY people who want everything done for them. Saying "That's just the way the world is going, I don't have a choice" is a cop-out that people use as an excuse to abdicate personal responsibility.

An alternative to this path is to learn NATURAL LAW and become a sovereign human again. Check out Mark Passio's work on YouTube. Either way, it's a choice, and we deserve what we get. We are all 100% personally responsible for our journey, whatever it is.
