
Sam Harris and Steve Jurvetson discuss the future of artificial intelligence at Tim Draper’s CEO Summit.

Sam Benjamin Harris is an author, philosopher, neuroscientist, blogger, and podcast host.

Stephen T. Jurvetson is an American businessman and venture capitalist. He is a former partner of Draper Fisher Jurvetson.

November 17th, 2017


Comments

jurvetson says:

Oh, I see you improved my video post. Thanks. Here is the description, index and some juicy quotes: https://flic.kr/p/21DGQCw such as: “This is the most important game that we’re playing in technology. Intelligence is the most valuable resource we have. It is the source of everything we value, or it is the thing we use to protect everything we value. It seems patently obvious that if we are able to improve our intelligent machines, we will.”

“Many of you probably harbor a doubt that minds can be platform independent. There is an assumption working in the background that there may be something magical about computers made of meat.”

“Many people are common sense dualists. They think there is a ghost in the machine. There is something magical that’s giving us, if not intelligence per se, at the very least consciousness. I think those two break apart. I think it is conceivable that we could build superintelligent machines that are not conscious and that is the worst case scenario ethically.”

Peter Mathieson says:

At around the 52-minute mark, Sam Harris talks about being a multi-disciplinary omnivore. Fair enough, but his shallow understanding of (and frequent slagging of) the discipline of economics illustrates the problem facing those who would give AI machines constraining/guiding values. Brilliant though he is, I would not want Sam Harris to be the one deciding which orienting values should be built into AI machines when it comes to economic understanding… and given his views, I am equally sure that the only economists he would want to be allowed near the AI code would be those who share his left-leaning viewpoint… and therein lies the problem. The question, as always, comes down to “Who decides?”. See “The Vision of the Anointed” by Thomas Sowell.

Peter Mathieson says:

Sam Harris is an exceptionally bright guy, but in this interview he and his interviewer both allowed themselves to fall – yet again in Harris’ case – into the “winner takes all” trap. There are many organizations – some state-sponsored and some profit-oriented – who are racing down this path. Some will get there sooner. Others will get there a bit later. But no contender is going to stop, and the reason is that winning a battle is not the same as winning the war. More important still, all the competitors will be modelling intelligence as they understand it, and they will all be attempting to constrain and direct that intelligence in accordance with the values held dear by their developers. State-sponsored Russian AI will be guided by Russian state values. State-sponsored Chinese AI will be guided by and will try to maximize what is valued by the Chinese state. Ditto any state-sponsored AI, and in a democracy where parties change and the values governing those parties change, so will the values governing their AI machines… and spare a thought for the values that will be built into the AI machines built by jihadist regimes. There won’t be one machine. There will be many. The intelligence of those machines will escalate exponentially, but these super-human intelligences will not be perfect. They will each start off with the differing and flawed value systems of their creators, and the inherent flaws of those value systems will unleash upon the world immensely powerful and deeply flawed gods that will battle among themselves, putting all life on earth in peril.

Preguntate says:

Imagine if a super AI, a superior and more powerful intelligence, did to us the same as what we do to other, less powerful species of sentient non-human beings: cows, chickens, pigs, fish. If the values of the AI were like ours – "might makes right" – then we as human beings are as f*cked as we are f*cking the animals. That is what we do to animals: we exploit them, enslave them, torture them, rape them, and kill them for their meat, milk, and eggs, even though we do not actually need those products to be healthy. We do not have any respect for less powerful creatures. So why would an AI have any more respect for us than we have for other non-human animals?

SlicedBananas says:

Excuse me if I sound stupid, but aren't devices like Google Home, Alexa, and smart assistants in general proof that intelligence does not require consciousness? Yes, intelligence is a scale, and perhaps once a certain level is reached it requires consciousness, but aren't these home assistants intelligent?

judgeomega says:

Both the speaker and the host were marvelously eloquent, coherent, and insightful. It's a shame they didn't have another couple hours to go even deeper.

johan smit says:

…wonders if AI will make stupid mistakes like publishing radio articles to a video channel if it replaces humans. It will be really great to have such a high intelligence behind video channel contributors.

Benjamin Crouzier says:

It's great that Jurvetson can think at roughly the same level as Sam Harris and follow his train of thought. Not many interviewers can do that.

Captain Green says:

Wonderful discussion, probably one of my favorites on the topic.

Lyle Nisenholz says:

Maybe we can set up a symbiotic relationship like our gut biomes: superintelligent machines need us somehow, and we have some effect on their emotional landscape.

Baf Lange says:

Nature doesn’t seem to want us, though; intelligence is a disease. The smarter any animal is, historically, the sooner it becomes extinct.

CandidDate says:

This is pure sci-fi.

Abide says:

Can you show me the "Blue Fairy"?

Valerie Kneen-Teed says:

The only bases for value that you both seem to be recognizing here are pleasure/pain sensations, intelligence, and productivity. What about BEING? Does being not have value, even if just for the sake of being? What is it about a person that gives him/her energy or life? What makes Elon Musk different for walking across town to go to a birthday party when he was a child? Is a flower less valuable because it cannot use human tools? Should all the chickens be killed after we stop eating animals?

Valerie Kneen-Teed says:

The main concern I have, is how the seed of moral understanding is defined. Relative morality has no meaning as it is ever-changing. Morality requires some form of an absolute center to gravitate around.
