THE FUTURE IS HERE

The Importance of AI Risk Management: A Fireside Chat on NIST’s AI RMF Launch

The National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (AI RMF 1.0) on January 26, 2023, providing companies with voluntary guidelines for effectively managing AI risk. Robust Intelligence hosted a webinar with NIST research scientist Reva Schwartz, one of the framework's key contributors, to discuss the challenges of managing AI risk and how companies can practically adopt the NIST AI RMF. Reva was joined by co-hosts Alie Fordyce and Hyrum Anderson of Robust Intelligence.

0:00 Welcome & guest introductions
5:00 NIST’s role in non-regulatory standardization
6:00 Background on NIST AI RMF and its intended goal
10:40 How and why NIST prioritized a collaborative approach in creating the AI RMF
12:25 Background to the companion AI RMF Playbook and why it was included in the framework launch
16:05 Summary of the four core elements of the AI RMF: Govern, Map, Measure, and Manage
20:15 How businesses can get started with the AI RMF and who it applies to
22:35 Why should companies use the AI RMF? What is the long-term impact?
25:00 What are the AI risk management trends, nationally or globally, that we should pay attention to?
26:35 How is NIST making the AI RMF future-proof?
28:05 Much of this is geared toward protecting consumers of AI products. Does the AI RMF also have an impact on producers?
29:44 (Audience question) What are the new roles we can expect to see resulting from efforts to manage AI risk?
31:12 General thoughts, from an AI bias expertise perspective, on national and global trends in auditing and other AI regulatory efforts
33:48 (Audience question) Is the AI RMF general enough to accommodate new and emerging applications of AI (like Generative AI)?
36:31 The need for industry- or application-specific guidelines
38:22 (Audience question) Where do you see consensus in the AI RMF approach to trustworthy AI with other global approaches and where does it take a novel approach?
39:50 (Audience question) What is the process for managing latent bias remediation in the context of the AI RMF?
42:54 How using the AI RMF's Map and Measure functions prompts you to question your own decisions, which is part of the framework's intended goal
44:32 Keeping the AI RMF up-to-date and how to prevent it from becoming outdated
46:25 Upcoming launch of the Trustworthy and Responsible AI Resource Center, a more advanced Playbook, and how to use it
48:08 Key challenge to creating the framework and key hope for what the framework will achieve
50:20 (Audience question) Where can the AI RMF Playbook be found?
51:10 (Audience question) Which organizations are responsible for AI security risk?
54:00 Thank you and contact information