This tutorial was recorded at KDD 2020 as a live, hands-on tutorial. The content is available at https://dssg.github.io/fairness_tutorial/

How do algorithms spread bias throughout our culture? In this talk, technology thought leader Corey Patrick White shares the dangers of algorithmic bias, and how high the stakes are for humanity. As a partner and senior vice president at Future Point of View, Corey Patrick White is tasked with helping leaders look out into the future and anticipate how technology will impact their organizations and themselves. He is especially focused on the role that machine intelligence will play in almost every aspect of life: from the decisions we make, to the professions we undertake, to how we interact with the world and each other. Corey began his career as a journalist before joining Future Point of View. As a journalist, he developed investigative skills as well as a desire to understand complicated topics and to explain those topics in a way that everyone can understand. He brings these skills to the speaking stage, offering insights into how complex innovation is dramatically altering the world we live in, in ways that can be both positive and negative. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

DropShot is an algorithmic private investment company specializing in machine learning and artificial intelligence. We believe that the investment process should be as scientific as possible and not influenced by human bias.
Our goal is to create products for our investors that are:
1) Purely Quantitative 2) Liquid 3) Transparent

DropShot’s partners have extensive experience building and deploying large scale machine learning solutions. Our systems are constantly sourcing, evaluating, and integrating new alpha sources into our investment decisions, from trading universe selection, to strategy development, to live trading portfolio management and execution. It’s our mission to continuously improve our processes and outcomes for our clients.

Bias Traps in AI: A panel discussing how we understand bias in AI systems, highlighting the latest research insights and why issues of bias matter in concrete ways to real people.

Solon Barocas, Assistant Professor of Information Science, Cornell University
Arvind Narayanan, Assistant Professor of Computer Science, Princeton University
Cathy O’Neil, Founder, ORCAA
Deirdre Mulligan, Associate Professor, School of Information and Berkeley Center for Law & Technology, UC Berkeley
John Wilbanks, Chief Commons Officer, Sage Bionetworks

AI Now 2017 Public Symposium – July 10, 2017

Follow AI Now on Twitter: https://twitter.com/AINowInitiative
Subscribe to our channel: https://www.youtube.com/c/ainowinitiative
Visit our website: https://artificialintelligencenow.com

MIT Introduction to Deep Learning 6.S191: Lecture 8
Algorithmic Bias and Fairness
Lecturer: Ava Soleimany
January 2021

For all lectures, slides, and lab materials: http://introtodeeplearning.com

Lecture Outline
0:00 – Introduction and motivation
1:40 – What does “bias” mean?
4:22 – Bias in machine learning
8:32 – Bias at all stages in the AI life cycle
9:25 – Outline of the lecture
10:00 – Taxonomy (types) of common biases
11:29 – Interpretation driven biases
16:04 – Data driven biases – class imbalance
24:02 – Bias within the features
27:09 – Mitigate biases in the model/dataset
33:20 – Automated debiasing from learned latent structure
37:11 – Adaptive latent space debiasing
39:39 – Evaluation towards decreased racial and gender bias
41:00 – Summary and future considerations for AI fairness

Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully-connected!!

Artificial intelligence might be a technological revolution unlike any other, transforming our homes, our work, our lives; but for many – the poor, minority groups, the people deemed to be expendable – their picture remains the same.

“The way these technologies are being developed is not empowering people, it’s empowering corporations,” says Zeynep Tufekci, from the University of North Carolina. “They are in the hands of the people who hold the data. And that data is being fed into algorithms that we don’t really get to see or understand that are opaque even to the people who wrote the programme. And they’re being used against us, rather than for us.”

In episode two of The Big Picture: The World According to AI, we examine practices such as predictive policing and predictive sentencing, as well as the power structures and in-built prejudices that could lead to more harm than the good their champions suggest.

In the United States, we travel to one of the country’s poorest neighbourhoods, Skid Row in Los Angeles, to see first-hand how the Los Angeles Police Department is using algorithmic software to police a majority black community.

And in China, we examine the implications of a social credit scoring system that deploys machine learning technologies – new innovations in surveillance and social control that are claimed to be used against ethnic Uighur communities.

As AI is used to make more and more decisions for and about us, from targeting, to policing, to social welfare, it raises huge questions. What will AI be used for in the future? And who will stand to benefit?

Watch Episode 1 here: https://youtu.be/134huBl7MAA

– Subscribe to our channel: http://aje.io/AJSubscribe
– Follow us on Twitter: https://twitter.com/AJEnglish
– Find us on Facebook: https://www.facebook.com/aljazeera
– Check our website: https://www.aljazeera.com/

Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social implications of data systems, machine learning and artificial intelligence. She is a Distinguished Research Professor at New York University, a Principal Researcher at Microsoft Research New York, and a Visiting Professor at the MIT Media Lab.

December 5th, 2017

While it’s important to consider the diversity of your dataset and the performance of your model across different demographic groups, this is just a narrow slice of the issues we need to consider related to bias and fairness. Using machine learning for medicine as a case study, I’ll illustrate some of the broader considerations related to bias, power, and participation that all data scientists need to take into account.
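As one concrete illustration of that narrow slice, here is a minimal sketch of a disaggregated evaluation with scikit-learn; the column names, groups, and toy predictions are made up for the example and are not from the talk.

```python
# Sketch: compute the same metrics separately for each demographic group.
# Large gaps between groups are one (narrow) signal of potential bias.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def evaluate_by_group(df: pd.DataFrame, group_col: str = "group") -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(sub),
            "accuracy": accuracy_score(sub["y_true"], sub["y_pred"]),
            "recall": recall_score(sub["y_true"], sub["y_pred"]),
        })
    return pd.DataFrame(rows)

# Toy predictions (hypothetical):
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 0, 0, 1],
    "group":  ["a", "a", "a", "a", "b", "b", "b", "b"],
})
print(evaluate_by_group(df))
```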

This talk was delivered at the Stanford AI in Medicine & Imaging Symposium on August 5, 2020, as part of a session on Fairness in Clinical Machine Learning.

For more on Practical Data Ethics, please check out my free online course at http://ethics.fast.ai/

This talk was presented at PyBay2018 – the Bay Area local Python conference. See pybay.com for more details about PyBay and click SHOW MORE for more information about this talk.

Abstract:

Algorithms are increasingly used to make life-changing decisions about health care benefits, who goes to jail, and more, and play a crucial role in pushing people towards extremism. Through a series of case studies, I want to debunk several misconceptions about bias and ethics in AI, and propose some healthier principles.

Slides: https://goo.gl/ThXJQm

Presenter:

Rachel Thomas is co-founder of fast.ai, which is making deep learning more accessible, and a researcher-in-residence at University of San Francisco Data Institute. Rachel has a mathematics PhD from Duke and has previously worked as a quant, a data scientist + backend engineer at Uber, and a full-stack software instructor at Hackbright.

Rachel was selected by Forbes as one of 20 “Incredible Women Advancing A.I. Research.” She co-created the course “Practical Deep Learning for Coders,” which is available for free at course.fast.ai and more than 50,000 students have started it. Her writing has made the front page of Hacker News 4x, the top 5 list on Medium, and been translated into Chinese, Spanish, & Portuguese. She is on twitter @math_rachel

This and other PyBay2018 videos are brought to you by our Gold Sponsor Cisco!

Mar.26 — Microsoft Post Doctoral Researcher Timnit Gebru discusses the effects of bias in artificial intelligence. She speaks with Emily Chang on “Bloomberg Technology.”

Algorithms encode data, and that data can be affected by human bias. Industry luminaries explore what this means for artificial intelligence (AI) in the enterprise – and how we can work together to minimize bias and maximize accuracy.

Subscribe: http://www.youtube.com/user/adobe

LET’S CONNECT
Facebook: http://facebook.com/adobe
Twitter: http://twitter.com/adobe
Instagram: http://www.instagram.com/adobe

Check out my collab with “Above the Noise” about Deepfakes: https://www.youtube.com/watch?v=Ro8b69VeL9U
Today, we’re going to talk about five common types of algorithmic bias we should pay attention to: data that reflects existing biases, unbalanced classes in training data, data that doesn’t capture the right value, data that is amplified by feedback loops, and malicious data. Bias itself isn’t necessarily a terrible thing; our brains often use it to take shortcuts by finding patterns. But bias becomes a problem if we don’t acknowledge exceptions to patterns or if we allow it to discriminate.
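As a rough illustration of the “unbalanced classes” failure mode mentioned above (this sketch is mine, not from the video), a classifier trained on a 95/5 class split can report high overall accuracy while largely missing the rare class; reweighting the classes is one common mitigation.

```python
# Sketch: class imbalance can hide poor minority-class performance behind
# high overall accuracy; class_weight="balanced" is one simple mitigation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
reweighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

for name, model in [("plain", plain), ("balanced", reweighted)]:
    pred = model.predict(X_te)
    print(name,
          "accuracy:", round(accuracy_score(y_te, pred), 3),
          "minority-class recall:", round(recall_score(y_te, pred), 3))
```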

Crash Course is produced in association with PBS Digital Studios:
https://www.youtube.com/pbsdigitalstudios

Crash Course is on Patreon! You can support us directly by signing up at http://www.patreon.com/crashcourse

Thanks to the following patrons for their generous monthly contributions that help keep Crash Course free for everyone forever:

Eric Prestemon, Sam Buck, Mark Brouwer, Efrain R. Pedroza, Matthew Curls, Indika Siriwardena, Avi Yashchin, Timothy J Kwist, Brian Thomas Gossett, Haixiang N/A Liu, Jonathan Zbikowski, Siobhan Sabino, Jennifer Killen, Nathan Catchings, Brandon Westmoreland, dorsey, Kenneth F Penttinen, Trevin Beattie, Erika & Alexa Saur, Justin Zingsheim, Jessica Wode, Tom Trval, Jason Saslow, Nathan Taylor, Khaled El Shalakany, SR Foxley, Yasenia Cruz, Eric Koslow, Caleb Weeks, Tim Curwick, DAVID NOE, Shawn Arnold, William McGraw, Andrei Krishkevich, Rachel Bright, Jirat, Ian Dundore

Want to find Crash Course elsewhere on the internet?
Facebook – http://www.facebook.com/YouTubeCrashCourse
Twitter – http://www.twitter.com/TheCrashCourse
Tumblr – http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse

CC Kids: http://www.youtube.com/crashcoursekids

#CrashCourse #ArtificialIntelligence #MachineLearning

MIT grad student Joy Buolamwini was working with facial analysis software when she noticed a problem: the software didn’t detect her face — because the people who coded the algorithm hadn’t taught it to identify a broad range of skin tones and facial structures. Now she’s on a mission to fight bias in machine learning, a phenomenon she calls the “coded gaze.” It’s an eye-opening talk about the need for accountability in coding … as algorithms take over more and more aspects of our lives.

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and much more.
Find closed captions and translated subtitles in many languages at http://www.ted.com/translate

Follow TED news on Twitter: http://www.twitter.com/tednews
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector

Ethics of AI Lab
Centre for Ethics, University of Toronto, March 20, 2018
http://ethics.utoronto.ca

Kathryn Hume
integrate.ai

#YouTube #HumanBias #AI

How do we know what is real and what is honest in this world of super information and connectivity? Human bias, computer algorithms and social media influencers are becoming an ever larger part of our existence, and the ability to critically evaluate and understand how they work and what they mean transcends the buying of gold and silver that my channel has traditionally focused on; it is, in my opinion, relevant to nearly every decision we make in our modern lives.

Almost everything we do in the physical, modern, 2020 pandemic world we now live in is connected to the online universe – I choose the word universe because it is almost too vast to comprehend how big this online space is. We get our news, socialise, learn, interact, work and carry out many more day-to-day activities online, and collectively we are being disconnected from the physical world more and more every day. Most importantly (to the big corporations around the world at least), our money is managed and spent mostly online, and the ability to influence or manipulate our purchasing decisions is worth trillions and trillions of dollars.

When you see something on YouTube that makes a radical and outlandish claim, just ask yourself: is that right? What are the qualifications of the person making these claims? What ulterior motives are at play here, and what does this machine want me to think?
These are critical questions, and they are critical for a reason. The hows, whats, whys, whos and wheres are the most important questions to ask yourself and others when making decisions, and they will help you separate the true from the false.
To conclude, this modern world of algorithms and machine learning, combined with the unfathomable amounts of information we are constantly bombarded with, has influenced us and will continue to do so. It’s unavoidable. How we interact with this information can save us time and money, and collectively make the internet and this global social network we find ourselves in more beneficial for all concerned.

What do you think? Let me know down in the comments!

Join the channel and show your support by becoming a BYB Rambling society member today!
https://www.youtube.com/backyardbullion/join

If you would like to support our channel please consider purchasing our T-shirts please visit this link:
https://teespring.com/en-GB/new-byb-hallmarked-t-shirt

or have a look at our website:
https://backyardbullion.com/product-category/all-items/

Stay safe, stay healthy all.

Thanks also to the channel sponsor The Silver Forum!
http://thesilverforum.com/

A 4k Camera & close ups of coins! What more can you want!?

What do you think? Comment below!

Comments welcome below or email me at byb@backyardbullion.com

Follow me on Instagram: @ BackyardBullion
www.instagram.com/backyardbullion

Thanks for watching and I will see you next time!


Original post: https://www.gcppodcast.com/post/episode-114-machine-learning-bias-and-fairness-with-timnit-gebru-and-margaret-mitchell/

This week, we dive into machine learning bias and fairness from a social and technical perspective with machine learning research scientists Timnit Gebru from Microsoft and Margaret Mitchell (aka Meg, aka M.) from Google. They talk with Melanie and Mark about ongoing efforts and resources to address bias and fairness, including diversifying datasets, applying algorithmic techniques, and expanding research team expertise and perspectives. There is no simple solution to the challenge, and they give insights on what work is in progress in the broader community and where it is going.

“Man is to computer programmer as woman is to____”

Common sense says that the missing term should be “computer programmer”, because the term is not intrinsically gendered, unlike “king” and “queen”. But a computer with a standard word embedding system would likely complete it as “Man is to computer programmer as woman is to homemaker.”
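For readers who want to reproduce this kind of analogy completion, here is a minimal sketch using gensim and publicly available pretrained vectors; the exact completions depend on the embedding, and “homemaker” was the completion reported for word2vec vectors trained on Google News.

```python
# Sketch: analogy completion with pretrained word vectors (gensim).
# The specific result depends on which embedding you load; this assumes the
# publicly available Google News word2vec vectors from gensim's downloader.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # ~1.6 GB download on first use

# Vector arithmetic: computer_programmer - man + woman ≈ ?
# (If the phrase token "computer_programmer" is missing from the vocabulary,
# fall back to the single word "programmer".)
query = "computer_programmer" if "computer_programmer" in vectors else "programmer"
print(vectors.most_similar(positive=[query, "woman"], negative=["man"], topn=5))
```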

In this episode, we explain how our unconscious biases can be passed down to machine learning algorithms.

Read more at https://go.unbabel.com/blog/gender-bias-artificial-intelligence/

Illustration, Animation and Sound Design: Favo Studio
https://vimeo.com/favostudio

►►

Every day it seems like machines learn more and more and the content we consume says less and less. That’s why we’re building Understanding with Unbabel — a deeply human take on language, artificial intelligence, and the way they’re transforming customer experience.

About Unbabel
At Unbabel, we believe language shouldn’t stand in the way of relationships. By combining human expertise and artificial intelligence, we give businesses and their customers the ability to understand each other, make smarter choices, and have richer experiences.

Follow us on:
Facebook: https://www.facebook.com/unbabel/
Twitter: https://twitter.com/Unbabel
Linkedin: https://www.linkedin.com/company/unbabel/
Instagram: https://instagram.com/unbabel/

Professor Christopher Manning, Stanford University & Margaret Mitchell, Google AI
http://onlinehub.stanford.edu/

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)

To follow along with the course schedule and syllabus, visit: http://web.stanford.edu/class/cs224n/index.html#schedule

To get the latest news on Stanford’s upcoming professional programs in Artificial Intelligence, visit: http://learn.stanford.edu/AI.html

To view all online courses and programs offered by Stanford, visit: http://online.stanford.edu

The programmes behind artificial intelligence are in almost every part of our lives, but there’s an emerging problem with them: algorithmic bias.

Subscribe: http://trt.world/subscribe
Livestream: http://trt.world/ytlive
Facebook: http://trt.world/facebook
Twitter: http://trt.world/twitter
Instagram: http://trt.world/instagram
Visit our website: http://trt.world

How do you avoid bias in AI hiring and recruiting?
Four ways to avoid discriminatory AI hiring
AI recruiting trends in 2019
How algorithmic hiring causes AI hiring bias

Artificial intelligence for Everyone.

Everything about applied Artificial Intelligence and Machine Learning in the real world.

Mind Data Intelligence is Brian Ka Chan – Applied AI Strategist, Technology/Data/Analytics Executive, ex-Oracle Architect, ex-SAP Specialist. “Artificial intelligence for Everyone” is my vision for the channel, which also covers fintech, smart cities, and the latest cutting-edge technologies.

The goal of the channel is to share AI & Machine Learning knowledge, expand common sense, and demystify AI myths. We want everyone, from all walks of life, to understand Artificial Intelligence.
http://TrustifyAI.com

Best Artificial Intelligence Videos

Twitter: https://twitter.com/MindDataAI
Linkedin: https://www.linkedin.com/in/briankachan
Linkedin Page: https://www.linkedin.com/company/mind-data-artificial-intelligence/

AI Strategy Leader Brian Ka Chan is also the author of “Taming Artificial Intelligence: Mind-as-a-Service: The Actionable Human-Centric AI Evolution Blueprint for Individuals, Business”

Discover how CSP enables you to modernize your applications no matter where they are. Plus, learn how to undo human bias at scale with KubeFlow.

Watch more:
Next ‘19 All Sessions playlist → https://bit.ly/Next19AllSessions
Subscribe to the GCP Channel → https://bit.ly/GCloudPlatform

With increasing regularity we see stories in the news about machine learning algorithms causing real-world harm. People’s lives and livelihoods are affected by the decisions made by machines. Learn how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors who are working to build a digital future that is inclusive and fair. Now you can become part of the solution. Learn how, by watching this video!
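To give a taste of what such bias-detection tools compute, here is a minimal sketch of one widely used check, the disparate impact ratio; open-source toolkits such as IBM’s AIF360 implement this and many other metrics, but the arithmetic itself is simple. The loan-approval data below is made up for the example.

```python
# Sketch: disparate impact ratio = favorable-outcome rate for the unprivileged
# group divided by the rate for the privileged group. Values well below 1.0
# (commonly below 0.8, the "four-fifths rule") flag potential adverse impact.
import numpy as np

def disparate_impact(y_pred, group, unprivileged, privileged):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unpriv = y_pred[group == unprivileged].mean()
    rate_priv = y_pred[group == privileged].mean()
    return rate_unpriv / rate_priv

# Hypothetical loan-approval predictions (1 = approved):
y_pred = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
group  = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
print(disparate_impact(y_pred, group, unprivileged="f", privileged="m"))  # ≈ 0.33
```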

#MachineLearning #ODSC #DataScience #AI

Do You Like This Video? Share Your Thoughts in Comments Below
Also, You can visit our website and choose the nearest ODSC Event to attend and experience all our Trainings and Workshops:
odsc.com/california
odsc.com/london

At Google, researchers and engineers have a goal to make machine learning technology work for everyone.

We might think that machines can be objective because they don’t worry about human emotion. Even so, AI (artificial intelligence) systems may show bias because of the data that is used to train them. We have to be aware of this and correct for it.

Filmmaker Robin Hauser is a proven storyteller of complex topics. In her award-winning documentary, “Code: Debugging the Gender Gap”, she examined the dearth of women in computer coding.

Now, in her latest film, “Bias”, Robin posits compelling questions: how have primal human survival instincts made racial and gender bias an innate part of ourselves, and, with the rise of machine learning and our increasing reliance on AI, can we protect Artificial Intelligence from our inherent biases? Her film is an engrossing exploration and clarion call that will frighten and also enlighten.

Today’s Guest: Robin Hauser
@biasfilm
https://www.biasfilm.com/

Interviewer: Jim Kamp
http://polychromemedia.com/jameskamp/
@kampjames

This presentation was held during the Oct 17, 2018 Meetup: “Biased? – About the story you didn’t know you’re telling”

https://www.meetup.com/de-DE/Artificial-Intelligence-Suisse/events/gbnvppyxlbmc/

This talk was presented at PyBay2019 – 4th annual Bay Area Regional Python conference. See pybay.com for more details about PyBay and click SHOW MORE for more information about this talk.

Description
Through a series of case studies, I will illustrate different types of algorithmic bias, debunk common misconceptions, and share steps towards addressing the problem.

Original slides: https://t.ly/9gO5k

About the speaker
Rachel Thomas is a professor at the University of San Francisco Data Institute and co-founder of fast.ai, which created the “Practical Deep Learning for Coders” course that over 200,000 students have taken and which has been featured in The Economist, MIT Tech Review, and Forbes. She was selected by Forbes as one of 20 Incredible Women in AI, earned her math PhD at Duke, and was an early engineer at Uber. Rachel is a popular writer and keynote speaker. In her TEDx talk, she shares what scares her about AI and why we need people from all backgrounds involved with AI.

Sponsor Acknowledgement
This and other PyBay2019 videos are via the help of our media partner AlphaVoice (https://www.alphavoice.io/)!

#pybay #pybay2019 #python #python3 #gdb

Machine learning is notorious for reinforcing existing bias, but it can also be used to counteract it. Primer developed an ML system called Quicksilver that identifies and describes notable women of science who are missing from Wikipedia. It takes millions of news documents as input and generates first-draft biographical articles.

Kubeflow made it far easier to scale and maintain this system. Come learn how Primer partnered with Google to migrate to GCP in order to:

* Deploy the same code to multiple physical environments
* Affordably scale an existing ML app using Kubeflow with auto-provisioning
* Continuously train hundreds of thousands of models using Kubeflow pipelines
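To make the last bullet concrete, here is a minimal, hypothetical sketch of how a batch of training runs can be expressed as a Kubeflow pipeline using the v1-era `kfp` SDK; the image name, arguments, and shard count are placeholders, not Primer’s actual setup.

```python
# Sketch: fan out many training runs as steps of a Kubeflow pipeline.
# Everything here (image, arguments, shard count) is hypothetical.
import kfp
from kfp import dsl

NUM_SHARDS = 10  # real deployments would use far more shards or parallel-for constructs

@dsl.pipeline(
    name="batch-biography-training",
    description="Hypothetical pipeline that launches one training step per data shard.",
)
def batch_train_pipeline():
    for shard in range(NUM_SHARDS):  # unrolled at pipeline-compile time
        dsl.ContainerOp(
            name=f"train-shard-{shard}",
            image="gcr.io/example-project/trainer:latest",  # placeholder image
            arguments=["--shard", str(shard)],
        )

if __name__ == "__main__":
    # Compile to a spec that can be uploaded to a Kubeflow Pipelines cluster.
    kfp.compiler.Compiler().compile(batch_train_pipeline, "batch_train_pipeline.yaml")
```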

Watch more:
Next ’19 ML & AI Sessions here → https://bit.ly/Next19MLandAI
Next ‘19 All Sessions playlist → https://bit.ly/Next19AllSessions

Subscribe to the GCP Channel → https://bit.ly/GCloudPlatform

Speaker(s): John Bohannon, Michelle Casbon

Session ID: MLAI206
Product: Compute Engine

This keynote introduces the first AI based on Unsupervised Learning. Automatically discover insights in your data, avoid human bias, and empower your team with the power to discover previously hidden insights that they can use to revolutionize your business.

What will a machine-dominated future look like? Fresh? Maybe funky? Colorful and fun? When we get to a point where AI and robots are conscious, we can only wonder if their dreams and imaginations will operate anything like our own.

We often get lost in our own thought processes, oftentimes biasing ourselves with past experiences. But when we take our imperfect selves and build AIs and intelligences beyond neurons, it’s hard to think of a logic flow without any type of bias. It’s impossible to separate ourselves from our biases, so if a computer could do it with the flip of a switch, think of all the implications it could have: on criminal justice, on decision making, on investing, and, perhaps most importantly, on shaping public policy for the world.

Video original sources:
+ Motion Graphic COOL Design (Cinema 4D) 2018 (https://youtu.be/Bpge5OmKrS8?t=21)
+ Lusine – Just A Cloud (https://youtu.be/10Jg_25ytU0?t=130)
+ Machine Learning and Human Bias (https://www.youtube.com/watch?v=59bMh59JQDo)
+ THE COLOR OF LOVE (https://vimeo.com/298592171)

The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived at many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance – all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real world impact of these systems due to inherent biases they carry. For instance, criminal risk scoring to determine bail for defendants in US district courts has been found to be biased against black people, and analysis of word embeddings has been shown to reaffirm gender stereotypes due to biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches trying to determine and combat bias in machine learning training data and the resulting algorithms.
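The risk-scoring finding mentioned above came from comparing error rates across groups; a minimal sketch of that kind of check, using made-up data rather than the actual court records, looks like this:

```python
# Sketch: compare false positive rates across groups, i.e. how often people who
# did NOT reoffend were nevertheless flagged as high risk. The data is toy data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])   # 1 = reoffended
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 1, 0, 1])   # 1 = flagged high risk
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in ("a", "b"):
    mask = group == g
    print(g, "FPR:", round(false_positive_rate(y_true[mask], y_pred[mask]), 2))
```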

Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society’s divides, such as the digital divide or income inequality?

This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields:

Keynotes:

– Prof. Moshe Vardi: “Deep Learning and the Crisis of Trust in Computing”
– Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists: Ethics and Bias in AI

– Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
– Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
– Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
– Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
– Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios

The evening will be complemented by networking & discussions over snacks and drinks.

More details: http://www.aiethics.cisvienna.com

review.chicagobooth.edu | As artificial intelligence is used more and more to aid—or sometimes replace—human decision-making, how worried should we be that these applications will automate bias against particular groups? Chicago Booth’s Sendhil Mullainathan says we should be alert to this concern, but also aware of one of AI’s great advantages: while it is subject to biases in data, it doesn’t apply its own layer of bias, as humans tend to. Although completely unbiased data is almost impossible to generate, Mullainathan says that designers of AI systems can at least be alert to such biases and create with those concerns in mind. And while it’s reasonable to be concerned about AI perpetuating biases in contexts such as hiring or criminal justice, it’s just these sorts of places, Mullainathan says—the places we’re most concerned about bias skewing decision-making—that AI has the most potential to improve equity.

Dee Smith of Strategic Insight Group sits down with Raoul Pal to discuss the confluence of behavioral economics and technology. The principles of behavioral economics combined with machine learning and algorithms can lead to amazing results, but what happens when human bias bleeds into the very algorithms we believe protect us from it? This video is excerpted from a piece published on Real Vision on September 7, 2018 entitled “Modern Manipulation: Behavioral Economics in a Technological World.”

Watch more Real Vision™ videos: http://po.st/RealVisionVideos
Subscribe to Real Vision™ on YouTube: http://po.st/RealVisionSubscribe
Watch more by starting your 14-day free trial here: https://rvtv.io/2wcQFLN

About Future Fears:
What’s coming that we should all be worried about? What keeps the world’s greatest investors up at night? Household names of finance discuss the terrifying potential risks posed by artificial intelligence, the rise of social media, autonomous vehicles and more.

About Real Vision™:
Real Vision™ is the destination for the world’s most successful investors to share their thoughts about what’s happening in today’s markets. Think: TED Talks for Finance. On Real Vision™ you get exclusive access to watch the most successful investors, hedge fund managers and traders who share their frank and in-depth investment insights with no agenda, hype or bias. Make smart investment decisions and grow your portfolio with original content brought to you by the biggest names in finance, who get to say what they really think on Real Vision™.

Connect with Real Vision™ Online:
Twitter: https://rvtv.io/2p5PrhJ
Instagram: https://rvtv.io/2J7Ddlw
Facebook: https://rvtv.io/2NNOlmu
Linkedin: https://rvtv.io/2xbskqx

Technology, Incentives & Cognitive Bias (w/ Dee Smith & Raoul Pal)
https://www.youtube.com/c/RealVisionTelevision

Transcript:
For the full transcript: https://rvtv.io/2wcQFLN
There is an emerging narrative claiming that computer programs do have biases, and that those biases come from the people who write the programs.
One of the interesting topics in that, of course, is confirmation bias, one of the most deadly of the behavioral economic biases: we look for information that confirms what we already believe, instead of looking for information that could falsify it, which is what the scientific method is based on. You try to falsify; you don’t try to verify.
The whole American adventure in Iraq was based on the intelligence finding that there were weapons of mass destruction, which was in large part based on ignoring evidence that there weren’t. It was simply selective use of intelligence, which is confirmation bias. It can be incredibly problematic. There’s some kind of a switch that flips and you decide: aha, I see it, I got it, I understand it now. Let me find all these things that tell me I’m right.
RAOUL PAL: Because humans are so delusional. I mean, I fall into that bias all the time, as everybody does. And this is why the machine is so powerful and why we have to be actually truly concerned. Not flippantly concerned, but truly concerned, because there is no bias. And it’s the massive ability to process data in ways that the human brain can’t.
We can process incredible data. Everything we’re seeing now and all the colors. Machines are nowhere near that, nowhere near our cognitive abilities in certain ways. But we cannot process a fixed type of data in the quantity that machines can without a bias, because we need patterns to fill in the blanks.
