THE FUTURE IS HERE

Big Tech – S1E07 – Joanna J Bryson on Regulating the Software Behind Artificial Intelligence

There’s a false narrative surrounding artificial intelligence (AI): that it cannot be regulated. This idea stems, in part, from a belief that regulation will stifle innovation and hamper economic potential, and that the natural evolution of AI is to grow beyond its original code.

In this episode of Big Tech, co-hosts David Skok and Taylor Owen speak with Joanna J. Bryson, professor of ethics and technology at the Hertie School of Governance in Berlin (beginning February 2020). Bryson begins by explaining the difference between intelligence and AI, and how that foundational understanding helps us see how regulation in this space is possible.

“We need to be able to go back then and say, ‘Okay, did you follow a good process?’ A car manufacturer, they’re always recording what they did because they do a phenomenally dangerous and possibly hazardous thing … and if one of them goes wrong and the brakes don’t work, we can go back and say, ‘Why did the brakes not work?’ And figure out whose fault [it] is and we can say, ‘Okay, you’ve got to do this recall. You’ve got to pay this liability, whatever.’ It’s the same thing with software,” Bryson explains. It is the responsibility of nations to protect those inside their borders, and that protection must extend to data rights. She discusses how the EU General Data Protection Regulation, a single set of rules that applies across a large area and crosses borders, is an example of international cooperation that produced harmonized standards and regulations for AI development.