I know, the title of this talk sounds like saying the only way to stop a bad Terminator is with a good Terminator, but hear me out. Human biases influence the outputs of AI models. AI amplifies bias, and the resulting socio-technical harms affect fairness, adoption, safety, and well-being. These harms disproportionately impact legally protected classes of individuals and groups in the United States. It's fitting that this year's theme for International Women's Day was #BreakTheBias, so join Noble as he returns to Strange Loop to expand on the topic of bias and deconstruct, by example, techniques to de-bias datasets for building intelligent systems that are fair and equitable while increasing trust and adoption.

Noble Ackerson
Former Google Developers Expert, Responsible AI
@nobleackerson

Mr. Ackerson is a Director of Product at Ventera Corporation focused on AI/ML and data science, enabling responsible AI practices across commercial and federal clients. He also serves as President of Cyber XR, where he focuses on the intersections of safety, privacy, and diversity in XR. Noble is a Certified AI Product Manager, a Google Certified Design Sprint Master, and formerly a Google Developers Expert for Product Strategy. His professional career centers on the intersection of data ethics and emergent tech. From implementing practical data governance and privacy principles and frameworks to empowering enterprises with the tools to eliminate bias and promote fairness in machine learning, Noble has pushed the limits of mobile, web, wearable, and spatial computing applications the human-centered way.