THE FUTURE IS HERE

Eliezer Yudkowsky – Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

The sixth and final episode of our AGI Governance series on The Trajectory features Eliezer Yudkowsky, famed AI safety thinker and co-founder of the Machine Intelligence Research Institute.

In this episode we explore two things I’ve never seen Yudkowsky speak on at length before: (a) his specific recommendations and draft ideas for international governance, and (b) his nuanced vision of an ideal future where AGI is eventually harnessed to serve very specific human values.

Listen to this episode on The Trajectory Podcast: https://www.buzzsprout.com/2308422/episodes/16491788

See the full article from this episode: https://danfaggella.com/yudkowsky1

The four main questions we cover in this AGI Governance series are:

1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it’s up your alley, be sure to stick around and connect:

— Blog: https://danfaggella.com/trajectory
— X: https://x.com/danfaggella
— LinkedIn: https://linkedin.com/in/danfaggella
— Newsletter: https://bit.ly/TrajectoryTw
— Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954