THE FUTURE IS HERE

How Dangerous is Artificial Superintelligence?

Researchers around the world are developing methods to enhance the safety, security, explainability, and beneficence of “narrow” artificial intelligence. How well will these methods transfer to the forthcoming field of artificial superintelligence (ASI)? Or will ASI pose radically new challenges that require a different set of solutions? Indeed, what hope is there for human designers to keep control of ASI when it emerges? Or is the subordination of humans to this new force inevitable?

These are some of the questions addressed in this London Futurists webinar by Roman Yampolskiy, Professor of Computer Science at the University of Louisville. Dr. Yampolskiy is the author of the book “Artificial Superintelligence: A Futuristic Approach” and the editor of the book “Artificial Intelligence Safety and Security”. See http://cecs.louisville.edu/ry/

The meeting was introduced and moderated by David Wood, Chair of London Futurists.

For more information about this event and the speaker, see https://www.meetup.com/London-Futurists/events/282936592/

The survey on “The Rise and Implications of Artificial General Intelligence (AGI)”, mentioned during the discussion, can be accessed (until 22nd February) at https://www.surveymonkey.co.uk/r/LFAGI