Mozilla Explains: Bias in AI Training Data

How can artificial intelligence be biased? Bias in artificial intelligence occurs when a machine consistently produces different outputs for one group of people than for another. These biased outputs typically track familiar societal biases such as race, gender, biological sex, nationality, or age.

Biases can result from assumptions made by the engineers who developed the AI, or from prejudices in the training data that taught it, which is what Johann Diedrick explains in the latest edition of Mozilla Explains.
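The definition above, consistently different outputs for one group than another, can be made concrete with a toy measurement. This is a minimal sketch, not code from the video or from Mozilla; the hiring scenario, group labels, and numbers are all hypothetical, and the gap it computes is one common fairness measure (the difference in favorable-outcome rates between groups).

```python
# Hypothetical sketch: checking whether a model's outputs differ
# systematically between two groups. All data below is invented.

def positive_rate(decisions):
    """Fraction of decisions that were favorable (1)."""
    return sum(decisions) / len(decisions)

# Invented model outputs for two groups (1 = interview offered, 0 = not)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6 of 8 favorable
group_b = [0, 1, 0, 0, 1, 0, 0, 0]   # 2 of 8 favorable

# Gap in favorable-outcome rates between the groups.
# A large gap is one signal that the model treats the groups differently.
gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Favorable rate gap: {gap:.2f}")  # prints "Favorable rate gap: 0.50"
```

In practice, auditing a real system involves far more than one number, but even this simple comparison shows how biased training data surfaces as measurably different outputs.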

Learn more about Diedrick’s project, Dark Matters:

Featured in this video, Survival of the Best Fit is a Mozilla Creative Media awardee built by Jihyun Kim, Gábor Csapo, Miha Klasinc, and Alia ElKattan. Experience it here: