THE FUTURE IS HERE

Superintelligence: AI Futures and Philosophy with Sam Harris

I had a mind-bending fireside chat with Sam Harris at Tim Draper’s CEO Summit. We spoke about the ethics of AI and what it tells us about humanity and our collective future. So much of his writing (at least the five books I have read) can be seen through this perceptual prism to the future.

After a musical transition, we start with the inevitability and importance of AGI:

“This is the most important game that we’re playing in technology. Intelligence is the most valuable resource we have. It is the source of everything we value, or it is the thing we use to protect everything we value. It seems patently obvious that if we are able to improve our intelligent machines, we will. So, the only alternative is to not be able to do that. And if you look at the reasons why we might not be able to do that, those are, by definition, terrifying. These are civilizational catastrophes that prevent us from making improvements to hardware and software, permanently. There are many assumptions here that confuse people about this picture of inevitability. One is: many people assume we need Moore’s Law to continue or exponential progress. No, we just need progress; it can be as incremental as you like.”

“Many of you probably harbor a doubt that minds can be platform independent. There is an assumption working in the background that there may be something magical about computers made of meat.”

“Many people are common sense dualists. They think there is a ghost in the machine. There is something magical that’s giving us, if not intelligence per se, at the very least consciousness. I think those two break apart. I think it is conceivable that we could build superintelligent machines that are not conscious and that is the worst case scenario ethically.”

7:33 Vitalism / common-sense dualism vs. the neural-net inspiration
9:55 Decentralized or centralized; network effects
12:08 Imbuing morality, the ethics of “friendly” AI, and control vs. cognitive slavery
18:15 The path dependence of iterative algorithms: survival instincts
19:40 Worse to have AI without consciousness
“We’re giving rise to a race of Gods. These machines, by definition, will be more important than us.”
23:01 Robot overlords and the selfish meme
26:50 Bostrom’s framework for valuing future lives
32:27 What are the moral and ethical implications? The narrow/dumb-AI problem; labor dislocation
36:26 Competency arms race; bootstrapping like AlphaGo Zero
39:12 Trapping AI in a Black Mirror episode, consciousness, and why Westworld is impossible
42:30 Could AI be more moral than us?
46:40 Free will
“One test for superintelligence will be: can it get over the illusion of free will?”
51:30 Our dinner debate with Moby and the philosophical mind
“I don’t have much respect for the boundaries between disciplines”
(Context: I convened a dinner salon to introduce Sam Harris to Moby, and an intellectual romp ensued. Both of them are philosophy majors with a keen interest in meditation.)