Fairness in Machine Learning with Hanna Wallach – TWiML Talk #232

Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across machine learning. We discuss the role that human biases, even inadvertent ones, play in tainting data, whether deployment of “fair” ML models can actually be achieved in practice, and much more. Along the way, Hanna points us to a TON of papers and resources to further explore the topic of fairness in ML. You’ll definitely want to check out the notes page for this episode, which you’ll find at twimlai.com/talk/232.

We’d like to thank Microsoft for their support and their sponsorship of this series. Microsoft is committed to ensuring the responsible development and use of AI and is empowering people around the world with intelligent technology to help solve previously intractable societal challenges spanning sustainability, accessibility and humanitarian action. Learn more at Microsoft.ai.
