Computing human bias with AI technology


Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too.

Computer scientist Joanna Bryson thinks we can understand how human bias is learned by taking a closer look at how AI bias is learned.

Bryson's computer science research goes beyond acknowledging that our AI has a bias problem; it asks how bias is formed in the first place, not just in machine brains but in human brains too.
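One concrete place this shows up is in word embeddings, the kind of statistical representation examined in bias work Bryson has co-authored: words that co-occur in human-written text end up close together as vectors, so occupation words quietly inherit the gendered associations of the text they were learned from. The snippet below is a minimal, self-contained sketch of that measurement idea; the tiny hand-made vectors and the helper names (`cosine`, `gender_association`) are purely illustrative, and real studies use large pretrained embeddings such as GloVe.

```python
# Toy sketch of reading bias out of word embeddings, in the spirit of a
# word-embedding association test. Not code from the video or from any
# paper; the 4-dimensional vectors below are invented for illustration.
import numpy as np

# Hypothetical embeddings. Real ones are ~300-dimensional vectors learned
# from billions of words of human text.
vectors = {
    "he":       np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([-0.8, 0.2, 0.3, 0.1]),
    "engineer": np.array([ 0.7, 0.5, 0.1, 0.2]),
    "nurse":    np.array([-0.6, 0.6, 0.2, 0.1]),
}

def cosine(a, b):
    """Cosine similarity: how strongly two word vectors are associated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_association(word):
    """Positive means closer to 'he', negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for occupation in ("engineer", "nurse"):
    print(occupation, round(gender_association(occupation), 3))
# With embeddings trained on real corpora, occupations tend to inherit the
# gender associations present in the text humans wrote: the machine learns
# the bias from us.
```

The arithmetic itself is not wrong; it faithfully reflects regularities in the training text, which is exactly why machine bias can serve as a window onto human bias.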

Comments

Yadisf Haddad says:

Could you please link to the scientific paper with Bryson's research, and also the bias test? Really interesting topic; it relates to the research I'm currently procrastinating on by watching this video.

alaysia kaye butler says:

garbage in garbage out; chatbots were subjected to the worst some people had to offer

Jim Frans says:

IMHO, I don't think this sense in my brain would be very useful in the future, since there'd be plenty of technologies that could help me with directions, except (perhaps) when I'm somewhere those technologies can't reach me.

Instead, I think it would be better if they made the same kind of technology to enhance my sense of time.
It'd be great if I could always know how long I've been doing a certain activity, so I could always control my sense of time.

Ernst Jünger says:

I fail to see how associating nursing as a profession with women is bias. It's a logical outcome of the fact that women vastly outnumber men in the profession. And probably always will, because they are more biologically predisposed towards maternalism.

Z51X77 says:

"But even if technology can’t fully solve the social ills of institutional bias and prejudicial discrimination, the evidence reviewed here suggests that, in practice, it can play a small but measurable part in improving the status quo. This is not an argument for algorithmic absolutism or blind faith in the power of statistics. If we find in some instances that algorithms have an unacceptably high degree of bias in comparison with current decision-making processes, then there is no harm done by following the evidence and maintaining the existing paradigm. But a commitment to following the evidence cuts both ways, and we should to be willing to accept that — in some instances — algorithms will be part of the solution for reducing institutional biases. So the next time you read a headline about the perils of algorithmic bias, remember to look in the mirror and recall that the perils of human bias are likely even worse."

Source: https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms

garet claborn says:

All neural networks are biased, both digital and biological. This is a good thing, and a major aspect of what makes minds possible.

Sheesh says:

Oh no, our judgements map onto reality.

D Man says:

Why not make a load of different AIs with different biases, then get them to talk to each other?
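Taken loosely, that suggestion can be sketched in a few lines: build several models whose views of the data are deliberately skewed in different ways, then combine their outputs so the individual blind spots partly cancel. Everything below (the toy data, `make_biased_scorer`, `ensemble_predict`) is invented for illustration and is not from the video.

```python
# Toy sketch of combining differently "biased" models. Each model can only
# see one of two features; averaging their scores recovers the full rule.
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: the label is 1 when the two features sum to a positive number.
X = rng.normal(size=(1000, 2))
y = (X.sum(axis=1) > 0).astype(int)

def make_biased_scorer(feature):
    """A 'biased' model that only looks at a single feature."""
    return lambda X_new: X_new[:, feature]

scorers = [make_biased_scorer(0), make_biased_scorer(1)]

def single_predict(scorer, X_new):
    """One narrow model's hard decision."""
    return (scorer(X_new) > 0).astype(int)

def ensemble_predict(X_new):
    """Let the differently biased models 'talk' by averaging their scores."""
    avg_score = np.mean([s(X_new) for s in scorers], axis=0)
    return (avg_score > 0).astype(int)

for name, preds in [("model 0 alone", single_predict(scorers[0], X)),
                    ("model 1 alone", single_predict(scorers[1], X)),
                    ("ensemble", ensemble_predict(X))]:
    print(f"{name}: accuracy {np.mean(preds == y):.2f}")
# Each one-feature model is right roughly 75% of the time; averaging the two
# differently skewed views recovers the full rule on this toy problem.
```

The catch is that averaging only helps when the models' biases genuinely differ; a committee of identically biased models just agrees with itself.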

DJ Programer says:

So they made a social/psychological mirror of the route total.

It's miscommunication…kind of.

What we want to express vs. what we actually express.

Scott says:

"Made by white guys in California" — Was white said detrimentally?

ro pro says:

I don't disagree that AI can be biased, depending on its training data. However, the IAT (Implicit Association Test) is a load of horse shit.

Shall NotWither says:

Really?? A biased video on bias? What was I thinking?
