Dr. Joy Buolamwini reflects on decoding algorithmic bias and the future of AI
Dr. Joy Buolamwini, whose work brings together art, culture, and technology, discusses how an unexpected barrier to completing a creative project at the Media Lab led her to investigate bias in AI systems and found the Algorithmic Justice League. "I was working on an art project and having fun," she says, when she discovered that the facial recognition software she was using couldn't see her face. It could, however, see the white mask she donned to test it. "It was that moment of seeing the white mask being detected as a human face, while my actual human face wasn't detected, that made me pause the art project and say, 'Wait a second.'" That moment led her to the Gender Shades project, which piloted an intersectional approach to inclusive product testing for AI.
Since Dr. Buolamwini began this work nearly a decade ago, AI systems have become increasingly ubiquitous, and so has public awareness of the issues her research helped to uncover. "I feel the work is even more relevant than when I started the Gender Shades project," she says.
As societies around the world grapple with the implications of generative AI, deepfakes, and other emerging technologies, Dr. Buolamwini says, it's important to consider not only how to improve the systems we use but also to decide what technologies we want to live with.
More information at: https://www.media.mit.edu/people/joyab/overview/