Check out my collab with “Above the Noise” about Deepfakes: https://www.youtube.com/watch?v=Ro8b69VeL9U
Today, we’re going to talk about five common types of algorithmic bias we should pay attention to: data that reflects existing biases, unbalanced classes in training data, data that doesn’t capture the right value, data that is amplified by feedback loops, and malicious data. Now, bias itself isn’t necessarily a terrible thing; our brains often use it to take shortcuts by finding patterns. But bias can become a problem if we don’t acknowledge exceptions to patterns or if we allow it to discriminate.
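The video itself contains no code, but a minimal sketch can illustrate the "unbalanced classes" failure mode mentioned above. This example assumes Python with scikit-learn (neither appears in the source) and uses purely synthetic data: a classifier trained on a 95/5 class split looks accurate overall while failing badly on the rare class.

```python
# Minimal sketch (synthetic data): unbalanced classes in training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic dataset: 95% of examples belong to class 0, 5% to class 1.
X, y = make_classification(
    n_samples=5000, n_features=10, weights=[0.95, 0.05], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# Overall accuracy looks high, but recall on the rare class is poor:
# the model has learned to lean on the majority class.
print(classification_report(y_test, model.predict(X_test)))

# One common mitigation: reweight examples so the rare class counts more.
balanced = LogisticRegression(class_weight="balanced").fit(X_train, y_train)
print(classification_report(y_test, balanced.predict(X_test)))
```

The point of the sketch is that a single headline accuracy number hides the bias; per-class metrics (precision/recall) expose it.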

Crash Course is produced in association with PBS Digital Studios:
https://www.youtube.com/pbsdigitalstudios

Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse

Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:

Eric Prestemon, Sam Buck, Mark Brouwer, Efrain R. Pedroza, Matthew Curls, Indika Siriwardena, Avi Yashchin, Timothy J Kwist, Brian Thomas Gossett, Haixiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, DAVID NOE, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore

Want to find Crash Course elsewhere on the internet?
Facebook – http://www.facebook.com/YouTubeCrashCourse
Twitter – http://www.twitter.com/TheCrashCourse
Tumblr – http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids

#CrashCourse #ArtificialIntelligence #MachineLearning

Comments

Frankie *** says:

Hideously relevant rn

byongcheol Ko says:

2:25 Does anybody know the title of the article?

StarCoinHero says:

6:52 This is why you don’t leave John Green Bot alone online. John Green Bot would just be saying the n-word every 5 seconds.

H Lam says:

Algorithms, like when YouTube gives a video an ❌ to demonetize it when it is talking about the truth.

Mark_till Till says:

AI has the power to destroy people’s lives. It has no conscience.

Athena Caesura says:

Love the nonbinary inclusion remark. PBS on the whole isn't great with gender inclusive language and I didn't expect crash course AI to be the pioneer, but I sure appreciate it.

Saul Galloway says:

6 agreements and disagreements.

1. Nurses are 90% female. Programmers are 80% male. Of course you're going to have far more images on average of the dominant sex in those fields. But, sure. Get it to say THEY.

2. The only value understanding gender has is significant behavioral predictions. The algorithm doesn't care about your social Yu-Gi-Oh game to feel special. It's tackling reality.

3. Lack of data on the racial bit. For sure we need greater data samples there.

4. We're gonna ignore uncomfortable crime stats? Ok.

5. Yes. The kids who are shown to do well often are at a much lesser risk of becoming shitty. Reality sure is complicated.

6. Yes. You can't discriminate when it comes to loans and jobs, even if there's a significant racial, sex, or whatever difference. Things can't change for the better if you force them, and skillful/valuable individuals who aren't part of the problem within these groups would suffer.

Ramon Luque says:

Trash. Algorithms tell facts; the ones who are biased are people with that "equality" ideology. Races are not equal, people of different ages have different mindsets, there are only men and women, and they also have different natures. The AI shows it by its results.

Power Play says:

Can’t wait for crash course white supremacists

Mr. Wallet says:

OK, I was kind of dreading this one because I expected a bunch of woke drivel – but I gotta be honest, you folks pretty much nailed it. This was informative, and probably as even-handed as Crash Course has ever been on such a sensitive topic. I am impressed.

D Murphy says:

Many people are missing the point of the Google analogy. AI hiring systems will learn associated characteristics of a nurse or programmer or what have you from similar datasets. That's not so much the problem; it's what happens next. It discriminates against people who don't meet the average characteristics. The AI system may throw out a resume for a nursing position that has the words "Boy Scout troop leader" because that's not something associated with the average nurse. It may throw out qualified programmer resumes from people who attended HBCUs, because most programmers haven't. If you don't quite get this, please look up the scrapped Amazon AI hiring program. It downgraded resumes from applicants who attended women's colleges.
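A toy sketch of the mechanism this comment describes, with the same assumptions as the earlier example (Python, scikit-learn) and entirely hypothetical data and feature names; this is an illustration of the failure mode, not the actual Amazon system:

```python
# Hypothetical illustration: a model trained on biased historical hiring
# decisions learns to penalize a feature correlated with a protected class.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
womens_college = rng.random(n) < 0.10   # hypothetical resume feature
skill = rng.normal(size=n)              # actual qualification

# Biased historical labels: past recruiters hired on skill but also
# systematically passed over women's-college applicants.
hired = (skill + rng.normal(scale=0.5, size=n) - 1.5 * womens_college) > 0.5

X = np.column_stack([skill, womens_college])
model = LogisticRegression().fit(X, hired)

# The learned weight on the women's-college feature comes out negative:
# the model reproduces the historical bias instead of measuring skill.
print(dict(zip(["skill", "womens_college"], model.coef_[0].round(2))))
```

The feature itself carries no information about skill; the model downgrades it only because the historical labels did, which is exactly "data that reflects existing biases."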

gunsandcarsandstuff says:

Should a program be faulted for showing mostly female nurses? 91% of nurses are female. Should it be faulted for recognizing more white people? The United States is 72% Caucasian. It seems silly that we try to tell computers lies, so that their results don’t hurt anyone’s feelers.

Shukla Maths Academy says:

Very nice crash course.👍🏻

Hambone says:

Please do a complete course on climate science!!

Karl Ramstedt says:

3:28 omg, that's the jealous girlfriend from the stock-photo meme.

Cody Uhi says:

Are there really deep learning models that implement a person's name as a factor to extrapolate their personality traits or compatibility for a job? Are there any studies that show that a person's given name has a significant correlation to their personality?

Łił Bīłł says:

HOW IS HIS HAT THAT BIG?!?!

Susan Maddison says:

A fundamentally dishonest video. No mention of the ideological bias that is almost unanimous in Silicon Valley, which is obviously going to infect the algorithms and is the prima facie cause of the discrimination so many conservatives have noticed in social media and on YouTube right here. No mention of the obstinate denials of this obvious reality by the tech companies, rather than trying seriously to deal with it by hiring enough non-leftists who would be able to recognize it and help them police it. No mention of the refusal of these companies and their personnel to acknowledge their own bias, the first step toward policing it. No mention of their firing of people who point out the problem. The failure to mention this, in a supposedly scientific video, is itself a confirmation of how serious the problem is.

blckboy 71 says:

bruh my name is Jibril

Leen Jabri لين جبري says:

THIS IS FANTASTIC! YOU ARE ENCOURAGING ME TO KEEP WORKING ON MY CHANNEL!
GUYS, I WOULD APPRECIATE YOUR HELP IF YOU CAME AND CHECKED OUT MY CHANNEL!!!
