Original post: https://www.gcppodcast.com/post/episode-114-machine-learning-bias-and-fairness-with-timnit-gebru-and-margaret-mitchell/

This week, we dive into machine learning bias and fairness from both a social and a technical perspective with machine learning research scientists Timnit Gebru from Microsoft and Margaret Mitchell (aka Meg, aka M.) from Google. They talk with Melanie and Mark about ongoing efforts and resources to address bias and fairness, including diversifying datasets, applying algorithmic techniques, and expanding research team expertise and perspectives. There is no simple solution to the challenge, and they share their insights into what work is in progress across the broader community and where it is headed.

“Man is to computer programmer as woman is to____”

Common sense says the missing term should be “computer programmer,” because the profession is not intrinsically gendered the way “king” and “queen” are. A computer using a standard word embedding system, however, would probably complete it as “Man is to computer programmer as woman is to homemaker.”
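
For readers curious how that completion is actually computed, here is a minimal sketch using gensim and a pretrained word2vec model; the model name, the underscore token "computer_programmer", and the exact words returned are assumptions that depend on which embedding you load:

# Minimal sketch of analogy completion with word embeddings.
# Assumes gensim and its downloadable "word2vec-google-news-300" vectors;
# the completion you get depends entirely on the embedding used.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # large download on first use

# "man" is to "computer_programmer" as "woman" is to ...?
# Implemented as vector arithmetic: computer_programmer - man + woman,
# then looking up the nearest words to the resulting vector.
for word, score in vectors.most_similar(
        positive=["computer_programmer", "woman"],
        negative=["man"],
        topn=5):
    print(f"{word}\t{score:.3f}")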

In this episode, we explain how our unconscious biases can be passed down to machine learning algorithms.

Read more at https://go.unbabel.com/blog/gender-bias-artificial-intelligence/

Illustration, Animation and Sound Design: Favo Studio
https://vimeo.com/favostudio

►►

Every day it seems like machines learn more and more and the content we consume says less and less. That’s why we’re building Understanding with Unbabel — a deeply human take on language, artificial intelligence, and the way they’re transforming customer experience.

About Unbabel
At Unbabel, we believe language shouldn’t stand in the way of relationships. By combining human expertise and artificial intelligence, we give businesses and their customers the ability to understand each other, make smarter choices, and have richer experiences.

Follow us on:
Facebook: https://www.facebook.com/unbabel/
Twitter: https://twitter.com/Unbabel
Linkedin: https://www.linkedin.com/company/unbabel/
Instagram: https://instagram.com/unbabel/

Professor Christopher Manning, Stanford University & Margaret Mitchell, Google AI
http://onlinehub.stanford.edu/

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)

To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs224n/index.html#schedule

To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html

To view all online courses and programs offered by Stanford, visit: http://online.stanford.edu

The programmes behind artificial intelligence are in almost every part of our lives, but there’s an emerging problem with them: algorithmic bias.

Subscribe: http://trt.world/subscribe
Livestream: http://trt.world/ytlive
Facebook: http://trt.world/facebook
Twitter: http://trt.world/twitter
Instagram: http://trt.world/instagram
Visit our website: http://trt.world

How to avoid bias in AI hiring and recruiting
Four ways to avoid discriminatory AI hiring
AI recruiting trends in 2019
How algorithmic hiring causes AI hiring bias

Artificial intelligence for Everyone.

Everything about applied artificial intelligence and machine learning in the real world.

Mind Data Intelligence is Brian Ka Chan – Applied AI Strategist; Technology/Data/Analytics Executive; ex-Oracle Architect; ex-SAP Specialist. “Artificial intelligence for Everyone” is my vision for the channel, which will also cover fintech, smart cities, and other cutting-edge technologies.

The goal of the channel is to share AI & machine learning knowledge, expand common sense, and demystify AI myths. We want people from all walks of life to understand artificial intelligence.
http://TrustifyAI.com

Best Artificial Intelligence Videos

Twitter: https://twitter.com/MindDataAI
Linkedin: https://www.linkedin.com/in/briankachan
Linkedin Page: https://www.linkedin.com/company/mind-data-artificial-intelligence/

AI Strategy Leader Brian Ka Chan is also the author of “Taming Artificial Intelligence: Mind-as-a-Service: The Actionable Human-Centric AI Evolution Blueprint for Individuals, Business”

Discover how CSP enables you to modernize your applications no matter where they are. Plus, learn how to undo human bias at scale with Kubeflow.

Watch more:
Next ‘19 All Sessions playlist → https://bit.ly/Next19AllSessions
Subscribe to the GCP Channel → https://bit.ly/GCloudPlatform

With increasing regularity we see stories in the news about machine learning algorithms causing real-world harm. People’s lives and livelihoods are affected by the decisions made by machines. Learn how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors working to build a digital future that is inclusive and fair. Now you can become part of the solution. Learn how, by watching this video!
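
As one hedged illustration of removing bias before training (not a specific tool from the talk), here is a minimal numpy sketch of the “reweighing” preprocessing idea of Kamiran & Calders; the group labels and outcomes below are invented for illustration:

# Hedged sketch of the "reweighing" preprocessing idea (Kamiran & Calders):
# weight each example so every (group, label) combination carries the influence
# it would have if group membership and label were statistically independent.
import numpy as np

def reweighing_weights(groups: np.ndarray, labels: np.ndarray) -> np.ndarray:
    n = len(labels)
    weights = np.empty(n, dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            expected = np.mean(groups == g) * np.mean(labels == y)  # P(g) * P(y)
            observed = np.mean(mask)                                # P(g, y)
            weights[mask] = expected / observed if observed > 0 else 0.0
    return weights

# Toy data: a sensitive attribute and a binary outcome for each example.
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])
labels = np.array([1, 1, 0, 0, 0, 0, 0, 1])
w = reweighing_weights(groups, labels)
print(np.round(w, 2))  # (group, label) cells rarer than independence predicts get weight > 1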

#MachineLearning #ODSC #DataScience #AI

Do You Like This Video? Share Your Thoughts in Comments Below
Also, you can visit our website and choose the nearest ODSC Event to attend and experience all our Trainings and Workshops:
odsc.com/california
odsc.com/london

At Google, researchers and engineers have a goal to make machine learning technology work for everyone.

We tend to think that machines can be objective because they don’t worry about human emotion. Even so, AI (artificial intelligence) systems may show bias because of the data used to train them. We have to be aware of this and correct for it.

Filmmaker Robin Hauser is a proven storyteller of complex topics. In her award-winning documentary “Code: Debugging the Gender Gap,” she examined the dearth of women in computer coding.

Now, in her latest film, “Bias,” Robin poses compelling questions: how have primal human survival instincts made racial and gender bias an innate part of us, and, as machine learning rises and our reliance on AI grows, can we protect artificial intelligence from our inherent biases? Her film is an engrossing exploration and a clarion call that will both frighten and enlighten.

Today’s Guest: Robin Hauser
@biasfilm
https://www.biasfilm.com/

Interviewer: Jim Kamp
http://polychromemedia.com/jameskamp/
@kampjames

This presentation was held during the Oct 17, 2018 Meetup: “Biased? – About the story you didn’t know you’re telling”

https://www.meetup.com/de-DE/Artificial-Intelligence-Suisse/events/gbnvppyxlbmc/

This talk was presented at PyBay2019, the 4th annual Bay Area Regional Python conference. See pybay.com for more details about PyBay and this talk.

Description
Through a series of case studies, I will illustrate different types of algorithmic bias, debunk common misconceptions, and share steps towards addressing the problem.

Original slides: https://t.ly/9gO5k

About the speaker
Rachel Thomas is a professor at the University of San Francisco Data Institute and co-founder of fast.ai, which created the “Practical Deep Learning for Coders” course that over 200,000 students have taken and which has been featured in The Economist, MIT Tech Review, and Forbes. She was selected by Forbes as one of 20 Incredible Women in AI, earned her math PhD at Duke, and was an early engineer at Uber. Rachel is a popular writer and keynote speaker. In her TEDx talk, she shares what scares her about AI and why we need people from all backgrounds involved with AI.

Sponsor Acknowledgement
This and other PyBay2019 videos are via the help of our media partner AlphaVoice (https://www.alphavoice.io/)!

#pybay #pybay2019 #python #python3 #gdb

Machine learning is notorious for reinforcing existing bias, but it can also be used to counteract it. Primer developed an ML system called Quicksilver that identifies and describes notable women of science who are missing from Wikipedia. It takes millions of news documents as input and generates first-draft biographical articles.

Kubeflow made it far easier to scale and maintain this system. Come learn how Primer partnered with Google to migrate to GCP in order to:

* Deploy the same code to multiple physical environments
* Affordably scale an existing ML app using Kubeflow with auto-provisioning
* Continuously train hundreds of thousands of models using Kubeflow pipelines
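
As a rough illustration of what the last bullet could look like, here is a hedged sketch using the Kubeflow Pipelines Python SDK (kfp, v1-style API). The pipeline name, container images, and arguments are invented for illustration and are not Primer’s actual code:

# Hedged sketch of a Kubeflow pipeline that fans training work out across
# the cluster. Images, names, and arguments below are illustrative only.
import kfp
from kfp import dsl

@dsl.pipeline(
    name="per-entity-training",
    description="Illustrative pipeline: shard a worklist, then train models.",
)
def training_pipeline(entity_list_uri: str = "gs://example-bucket/entities.json"):
    # Step 1: split the input into shards of entities to train on.
    shard = dsl.ContainerOp(
        name="shard-entities",
        image="gcr.io/example-project/shard:latest",   # assumed image
        arguments=["--input", entity_list_uri],
        file_outputs={"shards": "/tmp/shards.json"},
    )
    # Step 2: train models for each shard; with node auto-provisioning,
    # the cluster grows as these steps are scheduled.
    train = dsl.ContainerOp(
        name="train-models",
        image="gcr.io/example-project/train:latest",   # assumed image
        arguments=["--shards", shard.outputs["shards"]],
    )
    train.after(shard)

if __name__ == "__main__":
    kfp.compiler.Compiler().compile(training_pipeline, "pipeline.yaml")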

Watch more:
Next ’19 ML & AI Sessions here → https://bit.ly/Next19MLandAI
Next ‘19 All Sessions playlist → https://bit.ly/Next19AllSessions

Subscribe to the GCP Channel → https://bit.ly/GCloudPlatform

Speaker(s): John Bohannon, Michelle Casbon

Session ID: MLAI206
product:Compute Engine;

This keynote introduces the first AI based on Unsupervised Learning. Automatically discover insights in your data, avoid human bias, and empower your team with the power to discover previously hidden insights that they can use to revolutionize your business.

What will a machine-dominated future look like? Fresh? Maybe funky? Colorful and fun? When we get to a point where AI and robots are conscious, we can only wonder if their dreams and imaginations will operate anything like our own.

We often get lost in our own thought processes, oftentimes biasing ourselves with past experiences. But when we take our imperfect selves and build AIs and intelligences beyond neurons, it’s hard to imagine a logic flow free of any type of bias. It’s impossible to separate ourselves from our biases, so if a computer could do it with the flip of a switch, think of the implications: for criminal justice, for decision making, for investing, and, perhaps most importantly, for shaping public policy around the world.

Video original sources:
+ Motion Graphic COOL Design (Cinema 4D) 2018 (https://youtu.be/Bpge5OmKrS8?t=21)
+ Lusine – Just A Cloud (https://youtu.be/10Jg_25ytU0?t=130)
+ Machine Learning and Human Bias (https://www.youtube.com/watch?v=59bMh59JQDo)
+ THE COLOR OF LOVE (https://vimeo.com/298592171)

The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived at many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance – all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real world impact of these systems due to inherent biases they carry. For instance, criminal risk scoring to determine bail for defendants in US district courts has been found to be biased against black people, and analysis of word embeddings has been shown to reaffirm gender stereotypes due to biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches trying to determine and combat bias in machine learning training data and the resulting algorithms.
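
To give a flavor of the mathematical approaches mentioned above, here is a minimal numpy sketch of one well-known idea for word embeddings (the “neutralize” step of Bolukbasi et al., 2016): estimate a gender direction from a definitional pair and remove its component from a word vector. The vectors here are random stand-ins, not real embeddings:

# Hedged numpy sketch of one debiasing approach alluded to above:
# remove a word vector's component along an estimated "gender direction"
# (the neutralize step of Bolukbasi et al., 2016). Vectors are toy stand-ins.
import numpy as np

def gender_direction(v_he: np.ndarray, v_she: np.ndarray) -> np.ndarray:
    """Estimate the bias direction from a definitional pair."""
    d = v_he - v_she
    return d / np.linalg.norm(d)

def neutralize(v_word: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of v_word along the bias direction."""
    return v_word - np.dot(v_word, direction) * direction

rng = np.random.default_rng(0)
v_he, v_she, v_programmer = rng.normal(size=(3, 50))

g = gender_direction(v_he, v_she)
debiased = neutralize(v_programmer, g)
print("projection before:", float(np.dot(v_programmer, g)))
print("projection after: ", float(np.dot(debiased, g)))  # ~0.0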

Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society’s divides, such as the digital divide or income inequality?

This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields:

Keynotes:

– Prof. Moshe Vardi: “Deep Learning and the Crisis of Trust in Computing”
– Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists: Ethics and Bias in AI

– Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
– Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
– Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
– Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
– Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios

The evening will be complemented by networking & discussions over snacks and drinks.

More details: http://www.aiethics.cisvienna.com

review.chicagobooth.edu | As artificial intelligence is used more and more to aid—or sometimes replace—human decision-making, how worried should we be that these applications will automate bias against particular groups? Chicago Booth’s Sendhil Mullainathan says we should be alert to this concern, but also aware of one of AI’s great advantages: while it is subject to biases in data, it doesn’t apply its own layer of bias, as humans tend to. Although completely unbiased data is almost impossible to generate, Mullainathan says that designers of AI systems can at least be alert to such biases and create with those concerns in mind. And while it’s reasonable to be concerned about AI perpetuating biases in contexts such as hiring or criminal justice, it’s just these sorts of places, Mullainathan says—the places we’re most concerned about bias skewing decision-making—that AI has the most potential to improve equity.

Dee Smith of Strategic Insight Group sits down with Raoul Pal to discuss the confluence of behavioral economics and technology. The principles of behavioral economics combined with machine learning and algorithms can lead to amazing results, but what happens when human bias bleeds into the very algorithms we believe protect us from it? This video is excerpted from a piece published on Real Vision on September 7, 2018 entitled “Modern Manipulation: Behavioral Economics in a Technological World.”

Watch more Real Vision™ videos: http://po.st/RealVisionVideos
Subscribe to Real Vision™ on YouTube: http://po.st/RealVisionSubscribe
Watch more by starting your 14-day free trial here: https://rvtv.io/2wcQFLN

About Future Fears:
What’s coming that we should all be worried about? What keeps the world’s greatest investors up at night? Household names of finance discuss the terrifying potential risks posed by artificial intelligence, the rise of social media, autonomous vehicles and more.

About Real Vision™:
Real Vision™ is the destination for the world’s most successful investors to share their thoughts about what’s happening in today’s markets. Think: TED Talks for Finance. On Real Vision™ you get exclusive access to watch the most successful investors, hedge fund managers and traders who share their frank and in-depth investment insights with no agenda, hype or bias. Make smart investment decisions and grow your portfolio with original content brought to you by the biggest names in finance, who get to say what they really think on Real Vision™.

Connect with Real Vision™ Online:
Twitter: https://rvtv.io/2p5PrhJ
Instagram: https://rvtv.io/2J7Ddlw
Facebook: https://rvtv.io/2NNOlmu
Linkedin: https://rvtv.io/2xbskqx

Technology, Incentives & Cognitive Bias (w/ Dee Smith & Raoul Pal)
https://www.youtube.com/c/RealVisionTelevision

Transcript:
For the full transcript: https://rvtv.io/2wcQFLN
There is an emerging narrative claiming that computer programs do have biases, and that those biases come from the people who write the programs.

One of the interesting topics in that, of course, is confirmation bias, one of the most deadly of the behavioral economic biases: we look for information that confirms what we already believe instead of information that could falsify it, which is what the scientific method is based on. You try to falsify; you don’t try to verify.

The whole American adventure in Iraq was based on the intelligence finding that there were weapons of mass destruction, which was in large part based on ignoring evidence that there weren’t. It was simply selective use of intelligence, which is confirmation bias. It can be incredibly problematic. There’s some kind of switch that flips and you decide: aha, I see it, I’ve got it, I understand it now. Let me find all the things that tell me I’m right.

RAOUL PAL: Because humans are so delusional. I mean, I fall into that bias all the time, as everybody does. And this is why the machine is so powerful and why we have to be truly concerned, not flippantly concerned, because there is no bias. And it’s the massive ability to process data in ways that the human brain can’t.

We can process incredible data: everything we’re seeing now, all the colors. Machines are nowhere near that, nowhere near our cognitive abilities in certain ways. But we cannot process a fixed type of data in the quantity that machines can without a bias, because we need patterns to fill in the blanks.

When we train AI systems using human data, the result can reflect human bias. Attend the webinar to learn more.

A must-attend webinar for software test engineers who want to learn about AI and software testing.

Webinar Date: 25 Feb 2019, 11am Pacific Time
******
URL: https://sqaweb.link/webinar678
******
We would like to think that AI-based machine learning systems always produce the right answer within their problem domain. In reality, their performance is a direct result of the data used to train them. The answers in production are only as good as that training data.

Data collected by humans, such as surveys, observations, or estimates, can have built-in human biases. Even objective measurements can measure the wrong things or miss essential information about the problem domain.

The effects of biased data can be even more deceptive. AI systems often function as black boxes, which means technologists are unaware of how an AI came to its conclusion.

This can make it particularly hard to identify any inequality, bias, or discrimination feeding into a particular decision.

This webinar will explain:

1. How AI systems can suffer from the same biases as human experts
2. How that can lead to biased results
3. How testers, data scientists, and other stakeholders can develop test cases to recognise biases, both in the data and in the resulting system
4. Ways to address those biases

Attendees will gain a deeper understanding of:

1. How data influences how machine learning systems make decisions
2. How selecting the wrong data, or ambiguous data, can bias machine learning results
3. Why we don’t have insight into how machine learning systems make decisions
4. How we can identify and correct bias in machine learning systems (see the sketch below)
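
As a concrete, hedged example of the kind of test case described above, the sketch below compares a model’s positive-decision rates across two groups and fails when the gap exceeds a threshold; the predictions, group labels, and threshold are invented for illustration:

# Hedged sketch of a bias-detection test case of the kind described above.
# Group labels, threshold, and predictions are invented for illustration.
import numpy as np

def selection_rate(predictions: np.ndarray) -> float:
    """Fraction of positive decisions (e.g., 'invite to interview')."""
    return float(np.mean(predictions))

def test_selection_rate_parity(y_pred: np.ndarray,
                               groups: np.ndarray,
                               max_gap: float = 0.1) -> None:
    """Fail if positive-decision rates differ too much across groups."""
    rates = {g: selection_rate(y_pred[groups == g]) for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    assert gap <= max_gap, f"selection-rate gap {gap:.2f} exceeds {max_gap} ({rates})"

# Toy example: model predictions and a sensitive attribute for each person.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
test_selection_rate_parity(y_pred, groups)   # raises AssertionError here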

Speaker: Peter Varhol, Software Strategist & Evangelist

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too.

Computer scientist Joanna Bryson thinks we can understand how human bias is learned by taking a closer look at how AI bias is learned.

Bryson’s computer science research goes beyond the observation that our AI has a bias problem by questioning how bias is formed at all — not just in machine brains, but in our human brains too.

When reading up on artificial neural networks, you may have come across the term “bias.” It’s sometimes just referred to as bias. Other times you may see it referenced as bias nodes, bias neurons, or bias units within a neural network. We’re going to break this bias down and see what it’s all about.

We’ll first start out by discussing the most obvious question of, well, what is bias in an artificial neural network? We’ll then see, within a network, how bias is implemented. Then, to hit the point home, we’ll explore a simple example to illustrate the impact that bias has when introduced to a neural network.
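
As a preview (not the video’s actual example), here is a minimal numpy sketch showing what the bias term does: it shifts a neuron’s pre-activation, which controls how easily the neuron turns on:

# Minimal sketch (not the video's exact code) of how a bias term shifts
# a neuron's activation: output = activation(w . x + b).
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([1.0, 2.0])      # inputs to the neuron
w = np.array([0.5, -1.0])     # weights

weighted_sum = np.dot(w, x)   # 0.5*1 - 1.0*2 = -1.5

# Without a bias the neuron stays inactive; a large enough bias
# pushes the pre-activation above zero and "turns the neuron on".
for b in (0.0, 1.0, 2.0):
    print(f"bias={b}: pre-activation={weighted_sum + b:+.1f}, "
          f"output={relu(weighted_sum + b):.1f}")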

Check out the posts for this video:
https://www.patreon.com/posts/18290447
https://www.instagram.com/p/BhxuRXhlGpS/?taken-by=deeplizard
https://twitter.com/deeplizard/status/987163658391293952
https://steemit.com/deep-learning/@deeplizard/bias-in-an-artificial-neural-network-explained-or-how-bias-impacts-training

💥🦎 DEEPLIZARD COMMUNITY RESOURCES 🦎💥

👀 OUR VLOG:
🔗 https://www.youtube.com/channel/UC9cBIteC3u7Ee6bzeOcl_Og

👉 Check out the blog post and other resources for this video:
🔗 https://deeplizard.com/learn/video/HetFihsXSys

💻 DOWNLOAD ACCESS TO CODE FILES
🤖 Available for members of the deeplizard hivemind:
🔗 https://www.patreon.com/posts/27743395

🧠 Support collective intelligence, join the deeplizard hivemind:
🔗 https://deeplizard.com/hivemind

🤜 Support collective intelligence, create a quiz question for this video:
🔗 https://deeplizard.com/create-quiz-question

🚀 Boost collective intelligence by sharing this video on social media!

❤️🦎 Special thanks to the following polymaths of the deeplizard hivemind:
yasser
Prash

👀 Follow deeplizard:
Our vlog: https://www.youtube.com/channel/UC9cBIteC3u7Ee6bzeOcl_Og
Twitter: https://twitter.com/deeplizard
Facebook: https://www.facebook.com/Deeplizard-145413762948316
Patreon: https://www.patreon.com/deeplizard
YouTube: https://www.youtube.com/deeplizard
Instagram: https://www.instagram.com/deeplizard/

🎓 Other deeplizard courses:
Reinforcement Learning – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xoWNVdDudn51XM8lOuZ_Njv
NN Programming – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xrfNyHZsM6ufI0iZENK9xgG
DL Fundamentals – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU
Keras – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xrwRnXk_yCPtnqqo4_u2YGL
TensorFlow.js – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xr83l8w44N_g3pygvajLrJ-
Data Science – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xrth-Cqs_R9-
Trading – https://deeplizard.com/learn/playlist/PLZbbT5o_s2xr17PqeytCKiCD-TJj89rII

🛒 Check out products deeplizard recommends on Amazon:
🔗 https://www.amazon.com/shop/deeplizard

📕 Get a FREE 30-day Audible trial and 2 FREE audio books using deeplizard’s link:
🔗 https://amzn.to/2yoqWRn

🎵 deeplizard uses music by Kevin MacLeod
🔗 https://www.youtube.com/channel/UCSZXFhRIx6b0dFX3xS8L1yQ
🔗 http://incompetech.com/

❤️ Please use the knowledge gained from deeplizard content for good, not evil.

Meet the amazing tribe of women behind “bias,” a documentary film that highlights the nature of implicit bias and the grip it holds on our society. Director Robin Hauser is joined at the Tribe Table by producers Christie Herrie and Tierney Henderson and film subject Professor Lois James to talk to Amy about unconscious and implicit bias and how it relates to gender and race, coming to terms with our own unconscious biases, and Harvard’s “Implicit Association Test.” The film explores bias through all walks of life: from CEOs and police enforcement to professional soccer player Abby Wambach. With the toxic effect of bias making headlines every day, the time to talk about “bias” is now. Watch the trailer: https://www.imdb.com/title/tt7137804/?ref_=ttpl_pl_tt

To read more about the Implicit Association Test and unconscious bias: https://implicit.harvard.edu/implicit/faqs.html

Watch Robin Hauser’s TED talk: “Can we protect AI from our biases?”: https://www.ted.com/talks/robin_hauser_can_we_protect_ai_from_our_biases/up-next

AI algorithms make important decisions about you all the time — like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Get TED Talks recommended just for you! Learn more at https://www.ted.com/signup.

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.

Follow TED on Twitter: http://www.twitter.com/TEDTalks
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: https://www.youtube.com/TED

As researchers and engineers, we aim to make machine learning technology work for everyone.