Marietje Schaake, Dutch member of the European Parliament (2009-19)
Eric Schmidt, Technical Advisor, Alphabet Inc.
As research around the world proceeds to improve the power, the scope, and the generality of AI systems, should developers adopt regulatory frameworks to help steer progress? What are the main threats that such regulations should guard against? In the midst of an intense international race to obtain better AI, are such frameworks doomed to be ineffective? Might such frameworks do more harm than good, hindering valuable innovation? Are there good precedents, from other fields of technology, of international agreements proving beneficial? Or is discussion of frameworks for the governance of AGI (Artificial General Intelligence) a distraction from more pressing issues, given the potentially long time scales before AGI becomes a realistic prospect?

This 90-minute London Futurists live Zoom webinar featured a number of panellists with deep insight into the issues of improving AI:
*) Joanna Bryson, Professor of Ethics and Technology at the Hertie School, Berlin
*) Dan Faggella, CEO and Head of Research, Emerj Artificial Intelligence Research
*) Nell Watson, tech ethicist, machine learning researcher, and social reformer

The webinar took place from 4pm UK time on Saturday 30th May. The video resolution is low, but the quality of the panellists' contributions shines through. For more information about this event, see Join London Futurists on Meetup at
There’s a false narrative surrounding artificial intelligence (AI): that it cannot be regulated. This idea stems, in part, from a belief that regulations will stifle innovation and hamper economic potential, and that the natural evolution of AI is to grow beyond its original code. In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Joanna J. Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin (beginning February 2020). Professor Bryson begins by explaining the difference between intelligence and AI, and how that foundational understanding can help us see that regulation is possible in this space. “We need to be able to go back then and say, ‘okay, did you follow a good process?’ A car manufacturer, they’re always recording what they did because they do a phenomenally dangerous and possibly hazardous thing … and if one of them goes wrong and the brakes don’t work, we can go back and say, ‘Why did the brakes not work?’ And figure out whose fault [it] is and we can say, ‘Okay, you’ve got to do this recall. You’ve got to pay this liability, whatever.’ It’s the same thing with software,” Bryson explains. It is the responsibility of nations to protect those inside their borders, and that protection must extend to data rights. She discusses how the EU General Data Protection Regulation — a harmonized set of rules that covers a large area and crosses borders — is an example of international cooperation.