THE FUTURE IS HERE

Scott Aaronson Talks AI Safety

Hosted by Effective Altruism at UT Austin

See a transcript of the video, as well as some comments, here: https://scottaaronson.blog/?p=6823

In June 2022, UT CS professor Scott Aaronson made the big decision to take a year-long leave from UT to work on AI safety at OpenAI. But what is AI safety, and why does it matter? AI safety can be broadly defined as "the endeavor to ensure that AI is deployed in ways that do not harm humanity." As AI capabilities grow, humanity faces increasing risks from unaligned AI agents, malevolent users, AI arms races, and catastrophic accidents. In this special talk, Dr. Aaronson discusses AI safety, why it matters, and his current work and thoughts on the field.

This talk was given at UT Austin on November 14, 2022.

Introduction to Effective Altruism 00:00
Scott Aaronson Talks AI Safety 04:23
Q&A 1:02:58