“Overcoming Cognitive Bias [EuroPython 2017 – Talk – 2017-07-14 – Anfiteatro 2] [Rimini, Italy] Starting with a brief description of how built-in mechanisms in our brains lead to cognitive bias, the talk will address how a variety of cognitive biases manifest in the Python and tech communities, and how to overcome them. License: This video is licensed under the CC BY-NC-SA 3.0 license: https://creativecommons.org/licenses/by-nc-sa/3.0/ Please see our speaker release agreement for details: https://ep2017.europython.eu/en/speaker-release-agreement/
Speaker(s): Somaieh Nikpoor Host: Serena McDonnell Find the recording, slides, and more info at https://ai.science/e/bias-and-fairness-in-ai-overview-of-bias-and-fairness-in-ai–5mYsxRyBkVwApJhhJhH4 Motivation / Abstract Many companies are looking to deploy AI systems across their operations. AI can help identify and reduce the impact of human biases, but it can also make the problem worse by deploying biases at scale. The first part of this presentation covers the different types of bias and the definitions of fairness for algorithms. The second part reviews the paper “Model Cards for Model Reporting” by Margaret Mitchell, Simone Wu, et al. (2018). About the speaker: Somaieh Nikpoor is a Research Advisor in AI/ML for the Government of Canada. She received her PhD in Economics from the University of Ottawa. —— #AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details
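For reference, the “Model Cards for Model Reporting” paper proposes a fixed set of reporting sections: model details, intended use, factors, metrics, evaluation data, training data, quantitative analyses, ethical considerations, and caveats and recommendations. Below is a minimal sketch of that structure in Python; the class and the example values are hypothetical illustrations, not code from the paper or the talk.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card; field names follow the sections in Mitchell et al. (2018)."""
    model_details: str        # developer, version, model type, training date
    intended_use: str         # primary use cases and users; out-of-scope uses
    factors: list             # demographic or environmental groups to report on
    metrics: list             # performance measures, incl. disaggregated ones
    evaluation_data: str      # datasets used, motivation, preprocessing
    training_data: str        # same details for training data, where shareable
    quantitative_analyses: dict = field(default_factory=dict)  # metric values per factor
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""

# Hypothetical example: a toy sentiment classifier reported per gender group.
card = ModelCard(
    model_details="Logistic regression sentiment classifier, v0.1",
    intended_use="Research demos only; not for employment or credit decisions",
    factors=["gender"],
    metrics=["accuracy", "false positive rate"],
    evaluation_data="Held-out 20% split of a public review corpus",
    training_data="Remaining 80% of the same corpus",
    quantitative_analyses={"accuracy": {"female": 0.91, "male": 0.89}},
)
print(card.intended_use)
```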
With increasing regularity, we see stories in the news about machine learning algorithms causing real-world harm. People’s lives and livelihoods are affected by the decisions machines make. Learn how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors working to build a digital future that is inclusive and fair. Now you can become part of the solution.
I know, the title of this talk is like saying the only way to stop a bad Terminator is to use a good Terminator, but hear me out. Human biases influence the outputs of an AI model. AI amplifies bias, and the resulting socio-technical harms impact fairness, adoption, safety, and well-being. These harms disproportionately affect legally protected classes of individuals and groups in the United States. It’s fitting that this year’s theme for International Women’s Day was #BreakTheBias, so join Noble as he returns to Strange Loop to expand on the topic of bias and deconstruct techniques for de-biasing datasets, by example, to build intelligent systems that are fair and equitable while increasing trust and adoption. Noble Ackerson Former Google Developers Expert, Responsible AI @nobleackerson Mr. Ackerson is a Director of Product at Ventera Corporation focused on AI/ML and Data Science, enabling responsible AI practices across commercial and federal clients. He also serves as President of Cyber XR, where he focuses on the intersections of Safety, Privacy, and Diversity in XR. Noble is a Certified AI Product Manager, a Google Certified Design Sprint Master, and formerly a Google Developers Expert for Product Strategy. His professional career centers on the intersection of data ethics and emergent tech. From implementing practical data governance and privacy principles and frameworks to empowering enterprises with the tools to eliminate bias and promote fairness in machine learning, Noble has pushed the limits of mobile, web, wearable, and spatial computing applications the human-centered way. ——– Sponsored by: ——– Stream is [More]
More at https://www.philosophytalk.org/shows/cognitive-bias. Aristotle thought that rationality was the faculty that distinguished humans from other animals. However, psychological research shows that our judgments are plagued by systematic, irrational, unconscious errors known as ‘cognitive biases.’ In light of this research, can we really be confident in the superiority of human rationality? How much should we trust our own judgments when we are aware of our susceptibility to bias and error? And does our awareness of these biases obligate us to counter them? John and Ken shed their biases with Brian Nosek from the University of Virginia, co-founder and Executive Director of the Center for Open Science.
Today we are joined by Wesley Gray, CEO of Alpha Architect, a US firm that specializes in concentrated factor strategies. Having completed his MBA and PhD at the University of Chicago – the Harvard of the finance world – Wes is an authoritative voice when it comes to quantitative research and factor investing. Incredibly, he took a 4-year break during his PhD, joined the Marines, and went to Iraq, and he has also written several books. He went from value investor and stock-picker to having a strong quant focus, realizing that it was possible to eliminate human biases while still capturing the factor premiums. Our talk with Wes illuminates the nuanced nature of factor investing, behavior-based versus risk-based factor premiums, and active management versus passive investing and indexing. He discusses the process of collecting data for his PhD, the rules according to which his firm structures portfolios, how their boutique firm differs from larger advisory companies, and who their ideal client is. Wes also shares his views on selecting the best quant model, hedge funds, value premiums, and market-cap indexing. Join us for another insightful episode! Key Points From This Episode: 3:31 Wesley’s experience as a stock picker and riding the wave of small-cap value; the Value Investors Club as a data source to test stock-picking skills for his PhD. 9:38 From stock picker to quant and realizing the need to eliminate biases. 14:26 The rules that govern how they build portfolios in his firm [More]
Cognitive Bias – Psychology (Hindi) – IGNOU MAPC lecture. 12 Cognitive Biases That Can Impact Search Committee Decisions:
1. Anchoring Bias – Over-relying on the first piece of information obtained and using it as the baseline for comparison. For example, if the first applicant has an unusually high test score, it might set the bar so high that applicants with more typical scores seem less qualified than they otherwise would. (PsychCentral: The Anchoring Effect and How it Impacts Your Everyday Life)
2. Availability Bias – Making decisions based on immediate information or examples that come to mind. If search committee members hear about a candidate from Georgia who accepted a job and then quit because of the cold weather, they might be more likely to assume that all candidates from the southern U.S. would dislike living in Minnesota. (VerywellMind: Availability Heuristic and Making Decisions)
3. Bandwagon Effect – A person is more likely to go along with a belief if many others hold it; also called “herd mentality” or “groupthink.” In a search, it may be difficult for minority opinions to be heard if the majority of the group holds a strong contrary view. (WiseGEEK: What is a Bandwagon Effect?; Psychology Today: The Bandwagon Effect)
4. Choice-supportive Bias – Once a decision is made, people tend to over-focus on its benefits and minimize its flaws. Search committee members may emphasize rationale that supports decisions they have made in the [More]
In this episode, Gary discussed the recent HIMSS conference with John Glaser, Ph.D., Executive-in-Residence, Harvard Medical School; Kaveh Safavi, M.D., J.D., Senior Managing Director, Accenture; and Suchi Saria, Ph.D., Founder and CEO, Bayesian Health. They discussed the utility of in-person meetings, as well as the many cutting-edge technologies that will influence healthcare systems in the near future. For more info about The Gary Bisbee Show, visit: https://www.thinkmedium.com/programs/the-gary-bisbee-show Suchi Saria, Ph.D., is the Founder and CEO of Bayesian Health, the John C. Malone Associate Professor of computer science, statistics, and health policy, and the Director of the Machine Learning and Healthcare Lab at Johns Hopkins University. She has published over 50 peer-reviewed articles with over 3,000 citations and was recently described as “the future of 21st century medicine” by The Sloan Foundation. Her research has pioneered the development of next-generation diagnostic and treatment planning tools that use statistical machine learning methods to individualize care. At Bayesian Health, Dr. Saria is leading the charge to unleash the full power of data to improve healthcare, unburdening caregivers and empowering them to save lives. Backed by 21 patents and peer-reviewed publications in leading technical and clinical journals, Bayesian leverages best-in-class machine learning and behavior-change expertise to help health organizations unlock improved patient care outcomes at scale by providing real-time, precise, patient-specific, and actionable insights in the EMR. Dr. Saria earned her M.Sc. and Ph.D. from Stanford University working with Professor Daphne Koller. She visited Harvard University as an NSF Computing [More]
Can you figure out the rule? Did you see the exponents pattern? http://youtu.be/AVB8vRC6HIY Why do you make people look stupid? http://bit.ly/12Fmlpl How do you investigate hypotheses? Do you seek to confirm your theory – looking for white swans? Or do you try to find black swans? I was startled at how hard it was for people to investigate number sets that didn’t follow their hypotheses, even when their method wasn’t getting them anywhere. In the video I say “when people came to Australia…” by which I meant, “when Europeans who believed all swans were white came to Australia…” I did not mean any offence to Indigenous Australians who were already in Australia at that time. Please accept my apologies for the poor phrasing if you were offended by it. This video was inspired by The Black Swan by Nassim Taleb and filmed by my mum. Thanks mum! Partly my motivation came from responses to my Facebook videos – social media marketers saying ‘Facebook ads have worked for me so there can’t be fake likes.’ Just because you have only seen white swans, doesn’t mean there are no black ones. And in fact marketers are only looking for white swans. They think it was invalid of me to make the fake Virtual Cat page: ‘well of course if it’s a low quality page you’re going to get low quality likes.’ But my point is this is black swan bait, something they would never make because their theory is confident in the [More]
AI applications are ubiquitous – and so is their potential to exhibit unintended bias. Algorithmic and automation biases and algorithm aversion all plague the human-AI partnership, eroding trust between people and machines that learn. But can bias be eradicated from AI? AI systems learn to make decisions based on training data, which can include biased human decisions and reflect historical or social inequities, resulting in algorithmic bias. The situation is exacerbated when employees uncritically accept the decisions made by their artificial partners. Equally problematic is when workers categorically mistrust these decisions. Join our panel of industry and academic leaders, who will share their technological, legal, organizational, and social expertise to answer the questions raised by emerging artificial intelligence capabilities. Moderator: Dr. Fay Cobb Payton – Professor of Information Systems & Technology at NC State’s Poole College of Management and a Program Director at the National Science Foundation in the Division of Computer and Network Systems. Panelists: Timnit Gebru – Research scientist and co-lead of the Ethical AI team at Google, and co-founder of Black in AI, a place for fostering collaborations to increase the presence of Black people in the field of Artificial Intelligence; Brenda Leong – Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum; Professor Mohammad Jarrahi – Associate Professor at UNC’s School of Information and Library Science, focused on the intersection of technology and society; Chris Wicher – Rethinc. Labs AI Research Fellow, former Director of AI Research at KPMG’s AI Center [More]
Welcome to SIBGRAPI 2021 – the 34th Conference on Graphics, Patterns and Images! See our full program at https://www.inf.ufrgs.br/sibgrapi2021/program.php#content . This year we have an entirely virtual event, occurring from Monday October 18th to Friday October 22nd, 2021 (a total of 5 days), in a week filled with exciting presentations in Image Processing, Computer Graphics, Pattern Recognition, and Computer Vision.
Algorithms risk magnifying human bias and error on an unprecedented scale. Rachel Statham explains how they work and why we have to ensure they don’t perpetuate historic forms of discrimination Read IPPR’s automation report: http://bit.ly/2XJ2idN
For full forum click here: https://youtu.be/ucic8cuEd6A INSTAGRAM: https://www.instagram.com/veritasforum FACEBOOK: https://www.facebook.com/veritasforum SUBSCRIBE: https://www.youtube.com/subscription_center?add_user=VeritasForum Find this and many other talks at http://www.veritas.org/engage Over the past two decades, The Veritas Forum has been hosting vibrant discussions on life’s hardest questions and engaging the world’s leading colleges and universities with Christian perspectives and the relevance of Jesus. Learn more at http://www.veritas.org, with upcoming events and over 600 pieces of media on topics including science, philosophy, music, business, medicine, and more!
How seriously is the data science industry taking the issue of bias in machine learning? In this clip, taken from a recent CareerFoundry live event, senior Data Scientist Tom Gadsby shares some thoughts on the matter! Want more content like this? Check out CareerFoundry’s events page for deep-dive, data-based content, and much more: https://careerfoundry.com/en/events/ — Looking to start a career in Data? Take your first steps with CareerFoundry’s free data analytics short course: https://bit.ly/CareerFoundryFreeDataAnalyticsShortCourse_023 Want a deeper dive on some key UX topics? Check out CareerFoundry’s blog here: https://careerfoundry.com/en/blog/ Thanks for watching! #DataAnalytics #DataScience #Shorts Want more from CareerFoundry? Check out our other social media channels and blog here: 🔍 https://linktr.ee/CareerFoundry​ For more information on our programs, visit us at: 🖥 https://careerfoundry.com/ Data Science – Bias In Machine Learning Algorithms https://youtu.be/oIEFa1XuDJk
Extensive evidence has shown that AI can embed human and societal biases and deploy them at scale, and many algorithms are now being reexamined due to illegal bias. So how do you remove bias and discrimination from the machine learning pipeline? In this webinar you’ll learn the debiasing techniques that can be implemented using the open source toolkit AI Fairness 360. AI Fairness 360 (AIF360, https://aif360.mybluemix.net/) is an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution to bring together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from top AI fairness researchers across industry and academia. Trisha Mahoney is an AI Tech Evangelist for IBM with a focus on fairness and bias. Trisha has spent the last 10 years working on Artificial Intelligence and Cloud solutions at several Bay Area tech firms, including Salesforce, IBM, and Cisco. Prior to that, Trisha spent 8 years working as a data scientist in the chemical detection space. She holds an Electrical Engineering degree and an MBA in Technology Management. https://aif360.mybluemix.net/ https://aif360.slack.com/ http://ibm.biz/Bdqbd2
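As a sketch of the workflow the webinar describes, the snippet below uses AIF360’s Adult/Census income dataset, measures disparate impact, applies the Reweighing pre-processing algorithm, and measures again. The group definitions follow AIF360’s conventions for this dataset; running it assumes `aif360` is installed along with the raw Adult data files it expects.

```python
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Protected attribute: 'sex' (1 = male is the privileged group in this dataset).
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

data = AdultDataset()  # requires the raw Adult/Census files AIF360 points to

# Disparate impact: ratio of favorable-outcome rates between groups; 1.0 means parity.
before = BinaryLabelDatasetMetric(
    data, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact before:", before.disparate_impact())

# Reweighing assigns instance weights so the weighted dataset satisfies group fairness.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
data_transf = rw.fit_transform(data)

after = BinaryLabelDatasetMetric(
    data_transf, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact after :", after.disparate_impact())
```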
Authors: Yan Huang, Param Vir Singh, and Runshan Fu, Carnegie Mellon University. Artificial intelligence (AI) and machine learning (ML) algorithms are widely used throughout our economy in making decisions that have far-reaching impacts on employment, education, access to credit, and other areas. Initially considered neutral and fair, ML algorithms have recently been found to be increasingly biased, creating and perpetuating structural inequalities in society. With rising concerns about algorithmic bias, a growing body of literature attempts to understand and resolve the issue. In this tutorial, we discuss five important aspects of algorithmic bias. We start with its definition and the notions of fairness that policy makers, practitioners, and academic researchers have used and proposed. Next, we note the challenges in identifying and detecting algorithmic bias given the observed decision outcomes, and we describe methods for bias detection. We then explain the potential sources of algorithmic bias and review several bias-correction methods. Finally, we discuss how agents’ strategic behavior may lead to biased societal outcomes even when the algorithm itself is unbiased. We conclude by discussing open questions and future research directions.
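To make the bias-detection step concrete, here is a minimal, self-contained sketch (plain NumPy, synthetic data; not code from the tutorial) computing two fairness notions common in this literature: statistical parity difference and equal opportunity difference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic decisions: true labels, model decisions, and group membership (0/1).
n = 10_000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)
# Inject bias: group 1 receives extra positive decisions on top of the true label.
y_pred = np.where(rng.random(n) < 0.1 + 0.1 * group, 1, y_true)

def statistical_parity_difference(y_pred, group):
    """P(decision=1 | group=1) - P(decision=1 | group=0); 0 means parity."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true positive rates between groups; 0 means parity."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

print("Statistical parity difference:", statistical_parity_difference(y_pred, group))
print("Equal opportunity difference :", equal_opportunity_difference(y_true, y_pred, group))
```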
To watch the Q&A, click here: https://www.youtube.com/watch?v=rRUGSlVOW1k Iris Bohnet, Professor of Public Policy, Harvard Kennedy School; Director of the Women and Public Policy Program, HKS; Co-chair, Behavioral Insights Group, HKS. Cynthia Dwork, Gordon McKay Professor of Computer Science, Harvard Paulson School of Engineering; Radcliffe Alumnae Professor, Radcliffe Institute for Advanced Study. Alex “Sandy” Pentland, Professor of Media Arts and Sciences, Massachusetts Institute of Technology; Toshiba Professor, MIT Media Lab; Entrepreneurship Program Director, MIT. Sheila Jasanoff (moderator), Pforzheimer Professor of Science and Technology Studies, Harvard Kennedy School; Director, Program on Science, Technology and Society, HKS. The Future Society is launching a 12-month civic consultation on the governance of Artificial Intelligence. Run on the collective intelligence platform Assembl, this debate is open to everyone and will involve citizens, experts, and practitioners to better understand the dynamics of the rise of AI, its consequences, and how to govern this revolution. The debate can be found here: https://agora2.bluenove.com/ai-consultation and will soon be available at www.ai-initiative.org/AI-consultation.
The meeting will focus on considerations for reducing bias and increasing fairness in artificial intelligence used to make health care decisions. Speakers will highlight how bias manifests in health care; methods to prevent, mitigate, and test for bias; and promising examples of efforts to address bias. The meeting will end with a discussion of how different federal agencies are thinking about bias and fairness in health care.
Kriti Sharma explores how the lack of diversity in tech is creeping into our artificial intelligence, offering three ways we can start making more ethical algorithms. This talk was filmed at TEDxWarwick. Watch the full talk here: https://youtu.be/oo5KXFTIAwE Start each day with short, eye-opening ideas from some of the world’s greatest TEDx speakers. Hosted by Atossa Leoni, TEDx SHORTS will give you the chance to immerse yourself in surprising knowledge, fresh perspectives, and moving stories from some of our most compelling talks. Less than 10 minutes a day, every day. All TEDx events are organized independently by volunteers in the spirit of TED’s mission of ideas worth spreading. To learn more about the TEDx SHORTS podcast, the TEDx program, or to give feedback on this episode, please visit http://go.ted.com/tedxshorts.
I am Shridhar Mankar – an Engineer | YouTuber | Educational Blogger | Educator | Podcaster. My aim: to make engineering students’ lives EASY. Website – https://5minutesengineering.com 5 Minutes Engineering English YouTube Channel – https://m.youtube.com/channel/UChTsiSbpTuSrdOHpXkKlq6Q Instagram – https://www.instagram.com/5minutesengineering/?hl=en A small donation would mean the world to me and will help me make AWESOME videos for you. • UPI ID : 5minutesengineering@apl Playlists : • 5 Minutes Engineering Podcast : https://youtube.com/playlist?list=PLYwpaL_SFmcCTAu8NRuCaD3aTEgHLeF0X • Aptitude : https://youtube.com/playlist?list=PLYwpaL_SFmcBpa1jwpCbEDespCRF3UPE5 • Machine Learning : https://youtube.com/playlist?list=PLYwpaL_SFmcBhOEPwf5cFwqo5B-cP9G4P • Computer Graphics : https://youtube.com/playlist?list=PLYwpaL_SFmcAtxMe7ahYC4ZYjQHun_b-T • C Language Tutorial for Beginners : https://youtube.com/playlist?list=PLYwpaL_SFmcBqvw6QTRsA8gvZL3ao2ON- • R Tutorial for Beginners : https://youtube.com/playlist?list=PLYwpaL_SFmcCRFzBkZ-b92Hdg-qCUfx48 • Python Tutorial for Beginners : https://youtube.com/playlist?list=PLYwpaL_SFmcCJu4i6UGMkMx1p3yYZJsbC • Embedded and Real Time Operating Systems (ERTOS) : https://youtube.com/playlist?list=PLYwpaL_SFmcBpuYagx0JiSaM-Bi4dm0hG • Shridhar Live Talks : https://youtube.com/playlist?list=PLYwpaL_SFmcD21x33RkmGvcZtrnWlTDdI • Welcome to 5 Minutes Engineering : https://youtube.com/playlist?list=PLYwpaL_SFmcCwG02L6fm0G5zmzpyw3eyc • Human Computer Interaction (HCI) : https://youtube.com/playlist?list=PLYwpaL_SFmcDz_8-pygbcNvNF0DEwKoIL • Computer Organization and Architecture : https://youtube.com/playlist?list=PLYwpaL_SFmcCaiXeUEjcTzHwIfJqH1qCN • Deep Learning : https://youtube.com/playlist?list=PLYwpaL_SFmcD-6P8cuX2bZAHSThF6AYvq • Genetic Algorithm : https://youtube.com/playlist?list=PLYwpaL_SFmcDHUTN26NXKfjg6wFJKDO9R • Cloud Computing : https://youtube.com/playlist?list=PLYwpaL_SFmcCyQH0n9GHfwviu6KeJ46BV • Information and Cyber Security : https://youtube.com/playlist?list=PLYwpaL_SFmcArHtWmbs_vXX6soTK3WEJw • Soft Computing and Optimization Algorithms : https://youtube.com/playlist?list=PLYwpaL_SFmcCPUl8mAnb4g1oExKd0n4Gw • Compiler Design : https://youtube.com/playlist?list=PLYwpaL_SFmcC6FupM--SachxUTOiQ7XHw • Operating System : https://youtube.com/playlist?list=PLYwpaL_SFmcD0LLrv7CXxSiO2gNJsoxpi • Hadoop : https://youtube.com/playlist?list=PLYwpaL_SFmcAhiP6C1qVorA7HZRejRE6M • CUDA : https://youtube.com/playlist?list=PLYwpaL_SFmcB73J5yO6uSFUycHJSA45O0 • Discrete Mathematics : https://youtube.com/playlist?list=PLYwpaL_SFmcDKuvj-wIgDnHA5JTfUwrHv • Theory of Computation (TOC) : https://youtube.com/playlist?list=PLYwpaL_SFmcDXLUrW3JEq2cv8efNF6UeQ • Data Analytics : https://youtube.com/playlist?list=PLYwpaL_SFmcD_agAK_MpCDJdDXFuJqS9X • Software Modeling and Design : https://youtube.com/playlist?list=PLYwpaL_SFmcD1pjNSpEm2pje3zPrSiflZ • Internet Of Things (IOT) : https://youtube.com/playlist?list=PLYwpaL_SFmcB8fDd64B8SkJiPpEIzpCzC • Database Management Systems (DBMS) : https://youtube.com/playlist?list=PLYwpaL_SFmcBU4HS74xGTK1cAFbY0rdVY • Computer Network (CN) : https://youtube.com/playlist?list=PLYwpaL_SFmcAXkWn2IR-l_WXOrr0n851a • Software Engineering and Project Management : https://youtube.com/playlist?list=PLYwpaL_SFmcCB7zUM0YSDR-1mM4KoiyLM • Design and Analysis of Algorithm : [More]
Director Robin Hauser and the team of women behind “bias” talk to us about their hopes for the film, the importance of confronting our own implicit biases, and their work on bias in Artificial Intelligence (AI). “Bias” confronts unconscious and implicit bias in all walks of life: from CEOs and police enforcement to professional soccer players like Abby Wambach. With the toxic effect of bias making headlines every day, the time for this film is now. Watch the trailer: https://www.imdb.com/title/tt7137804/?ref_=ttpl_pl_tt To read more about the Implicit Association Test and unconscious bias: https://implicit.harvard.edu/implicit/faqs.html Watch Robin Hauser’s TED talk, “Can we protect AI from our biases?”: https://www.ted.com/talks/robin_hauser_can_we_protect_ai_from_our_biases/up-next
Over the past year, discourse about the ethical risks of machine learning has largely shifted from speculative fear about rogue superintelligent systems to critical examination of machine learning’s propensity to exacerbate patterns of discrimination in society. This talk explains how and why bias creeps into supervised machine learning systems and proposes a framework businesses can apply to hold algorithmic systems accountable in a way that is meaningful to the people impacted by those systems. You’ll learn why it’s important to consider bias throughout the entire machine learning product lifecycle (not just algorithms), how to assess tradeoffs between accuracy and explainability, and what technical solutions are available to reduce bias and promote fairness.
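As one illustration of the accuracy/explainability tradeoff and of disaggregated evaluation across the lifecycle, the sketch below (scikit-learn on synthetic data; not code from the talk) compares an interpretable logistic regression against an opaque boosted-tree model, reporting accuracy both overall and per group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic tabular data with a binary group attribute in column 0.
n = 5_000
X = rng.normal(size=(n, 5))
X[:, 0] = rng.integers(0, 2, size=n)  # group membership (0 or 1)
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def report(name, model):
    """Fit a model and print overall plus per-group (disaggregated) accuracy."""
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    g = X_te[:, 0]
    acc = (pred == y_te).mean()
    acc0 = (pred[g == 0] == y_te[g == 0]).mean()
    acc1 = (pred[g == 1] == y_te[g == 1]).mean()
    print(f"{name}: overall={acc:.3f} group0={acc0:.3f} group1={acc1:.3f}")

report("logistic regression (interpretable)", LogisticRegression())
report("boosted trees (opaque)             ", GradientBoostingClassifier())
```

A gap between the two models' overall accuracy is only half the story; comparing the per-group numbers is what reveals whether extra accuracy comes at the cost of uneven performance across groups.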
A common objection to concerns about bias in machine learning models is to point out that humans are really biased too. This is correct, yet machine learning bias differs from human bias in several key ways that we need to understand.