🎙️ NIST AI Risk Management Framework & Generative AI | Lunchtime BABLing 36
How do we govern AI in the age of ChatGPT? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown and VP of Sales Bryan Ilg break down the NIST AI Risk Management Framework and its new Generative AI Profile. From misinformation to AI assurance, discover what these voluntary guidelines mean for risk, trust, and real-world adoption.
👉 Lunchtime BABLing listeners can save 20% on all BABL AI courses with code BABLING20
📚 Enroll Today: https://babl.ai/courses/
🌐 Visit BABL AI: https://babl.ai/
📩 Subscribe to The Algorithmic Bias Lab mailing list: https://www.algorithmicbiaslab.com/
🔗 Follow BABL AI: https://linktr.ee/babl.ai
⏱️ Chapters
00:00 – Intro
01:15 – What is the NIST AI Risk Management Framework and the Generative AI Profile?
03:16 – What is generative AI?
06:28 – How does generative AI affect misinformation/disinformation?
13:18 – How do we get companies to adopt the NIST AI RMF?
16:33 – Will the NIST AI RMF ever become required by law?
19:11 – Building trust and improving the bottom line
20:34 – Can NIST compliance be like SOC 2?
27:48 – What is AI assurance?
31:27 – US AI Safety Institute Consortium work
36:00 – What is NIST?
39:13 – How do you start implementing the NIST AI RMF?
📌 What You’ll Learn
🔵 The core components of the NIST AI Risk Management Framework
🔵 How the Generative AI Profile addresses emerging risks like misinformation
🔵 The link between AI assurance, trust-building, and business performance
🔵 Why voluntary frameworks like the NIST AI RMF could influence future regulations
💬 Join the Conversation
👍 Like this video if you care about responsible AI governance
🔔 Subscribe to Lunchtime BABLing for weekly AI governance insights
💬 Drop your questions in the comments—we respond!
🔖 Keywords
NIST AI Risk Management Framework, generative AI, AI governance, AI risk, misinformation, AI assurance, responsible AI, AI compliance
#AI #GenerativeAI #NISTAI