Is AI as biased as humans are? | Ars Technica

Researchers at the Center for Information Technology Policy (CITP) at Princeton University try to answer the question: "Is artificial intelligence as biased as humans are?"




princeofexcess says:

The AI learns language from texts written by humans, so if those texts contain biases, the AI will learn them too.
This means that if, on average, women behave a certain way compared to men, the AI will pick this up.
The AI's conclusions are only incorrect if the data it learns from is itself biased.
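The mechanism the comment describes can be illustrated with a toy sketch. This is an assumption-laden illustration, not the Princeton researchers' actual code: the 2-D "embeddings" below are hand-made, and the association score mimics the idea behind the word-embedding association tests used in this kind of research — a word that co-occurred more with one group in the training text ends up geometrically closer to that group's vector.

```python
# Toy sketch: how association statistics in training text can surface as
# measurable bias in learned word vectors. All vectors here are invented
# 2-D stand-ins for real embeddings (hypothetical data, for illustration).
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical vectors: imagine the first axis loosely tracks how often a
# word appeared in "career" contexts in the training corpus, the second
# how often it appeared in "home" contexts.
vectors = {
    "man":        (0.9, 0.1),
    "woman":      (0.2, 0.8),
    "programmer": (0.8, 0.2),
    "homemaker":  (0.1, 0.9),
}

def association(word, attr_a, attr_b):
    """Positive when `word` is closer to attr_a than to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# In this toy data, "programmer" associates more strongly with "man":
print(association("programmer", "man", "woman") > 0)   # True
print(association("homemaker", "woman", "man") > 0)    # True
```

The point is exactly the commenter's: nothing in the model is "prejudiced" — the geometry simply reflects whatever co-occurrence statistics the training text contained.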

vallab says:

BIAS, MANY TIMES, BEGINS WITH THOSE WHO THINK OTHERS ARE BIASED! If people consider artificially intelligent robots to be biased, it is the fault of the people who fed their biased data into the AI machine. That makes the AI socially biased, rather than individually or personally biased the way people usually are.
Therefore, people should be thankful for AI, because even robots fed biased data are far more fair and unbiased than the fairest humans.

Of course, we need to correct the AI if the data fed into it is biased, but before that we should check the people who are complaining about bias in order to impose their politically correct "morality" views onto these AI machines.

Pedro Roque says:

What is this crap? Even I can use Google Translate.
