Over the past year, discourse about the ethical risks of machine learning has largely shifted from speculative fear of rogue superintelligent systems to critical examination of machine learning’s propensity to exacerbate patterns of discrimination in society. This talk explains how and why bias creeps into supervised machine learning systems and proposes a framework businesses can apply to hold algorithmic systems accountable in a way that is meaningful to the people those systems affect. You’ll learn why it’s important to consider bias throughout the entire machine learning product lifecycle (not just in the algorithms), how to assess tradeoffs between accuracy and explainability, and what technical solutions are available to reduce bias and promote fairness.
Ethics of AI Lab, Centre for Ethics, University of Toronto, March 20, 2018
http://ethics.utoronto.ca
Kathryn Hume, integrate.ai