
The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce

What role does metaethics play in AI alignment and safety? How might paths to AI alignment change given different metaethical views? How do issues in moral epistemology, motivation, and justification affect value alignment? What might be the metaphysical status of suffering and pleasure? What’s the difference between moral realism and anti-realism, and how is each view grounded? And just what does any of this really have to do with AI?

The Metaethics of Joy, Suffering, and AI Alignment is the fourth podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you who are new, this series covers and explores the AI alignment problem across a wide variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics, philosophy, and psychology as they pertain to the project of creating beneficial AI. If this sounds interesting to you, we hope you will join the conversation by following or subscribing to our podcasts on YouTube, SoundCloud, or your preferred podcast site/application.

In this podcast, Lucas spoke with David Pearce and Brian Tomasik. David is a co-founder of the World Transhumanist Association, since rebranded as Humanity+. You might know him for his work on The Hedonistic Imperative, a book focusing on our moral obligation to work towards the abolition of suffering in all sentient life. Brian is a researcher at the Foundational Research Institute. He writes about ethics, animal welfare, and future scenarios on his website “Essays On Reducing Suffering.”

Topics discussed in this episode include:

-What metaethics is and whether it ties into AI alignment
-Brian and David’s ethics and metaethics
-Moral realism vs anti-realism
-Emotivism
-Moral epistemology and motivation
-How paths to AI alignment might change given different metaethical views
-Moral status of hedonic tones vs preferences
-Can we make moral progress and what would this mean?
-Moving forward given moral uncertainty