AI applications are ubiquitous – and so is their potential to exhibit unintended bias. Algorithmic bias, automation bias, and algorithm aversion all plague the human-AI partnership, eroding trust between people and machines that learn. But can bias be eradicated from AI? AI systems learn to make decisions from training data, which can include biased human decisions and reflect historical or social inequities, resulting in algorithmic bias. The situation is exacerbated when employees uncritically accept the decisions made by their artificial partners. Equally problematic is when workers categorically mistrust these decisions. Join our panel of industry and academic leaders, who will share their technological, legal, organizational and social expertise to answer the questions raised by emerging artificial intelligence capabilities.

Moderator:
Dr. Fay Cobb Payton, Professor of Information Systems & Technology at NC State’s Poole College of Management and Program Director at the National Science Foundation, Division of Computer and Network Systems

Panelists:
Timnit Gebru, research scientist and co-lead of the Ethical AI team at Google, and co-founder of Black in AI, a place for fostering collaborations to increase the presence of Black people in the field of artificial intelligence
Brenda Leong, Senior Counsel and Director of Artificial Intelligence and Ethics at the Future of Privacy Forum
Mohammad Jarrahi, Associate Professor at UNC’s School of Information and Library Science, focused on the intersection of technology and society
Chris Wicher, Rethinc. Labs AI Research Fellow and former Director of AI Research at KPMG’s AI Center
[More]
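As a toy illustration of the mechanism the panel describes (biased historical decisions in the training data reappearing in the model's decisions), here is a minimal sketch. The groups, approval rates, and the trivial "model" are all invented for the example and are not drawn from the panel or any real dataset.

```python
# Hypothetical illustration: a model that learns from historically biased
# decisions reproduces that bias for equally qualified applicants.
import random

random.seed(1)

def historical_decision(group):
    # Assumed historical bias: reviewers approved group "A" more often than "B"
    approve_rate = 0.7 if group == "A" else 0.4
    return random.random() < approve_rate

# Training data: (group, past decision) pairs for equally qualified applicants
train = [(g, historical_decision(g)) for g in random.choices("AB", k=10_000)]

# "Training": the simplest possible model memorises the approval rate it saw
# for each group, then approves whenever that learned rate exceeds 0.5.
rates = {}
for g in "AB":
    outcomes = [y for grp, y in train if grp == g]
    rates[g] = sum(outcomes) / len(outcomes)

def model(group):
    return rates[group] > 0.5

print(rates)                   # learned approval rates mirror the biased history
print(model("A"), model("B"))  # True, False: the historical bias is reproduced
```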
Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts
Ben Green, Yiling Chen
CSCW’21: ACM Conference on Computer-Supported Cooperative Work and Social Computing
Session: Algorithms and Decision Making

Abstract: Governments are increasingly turning to algorithmic risk assessments when making important decisions (such as whether to release criminal defendants before trial). Policymakers assert that providing public servants with algorithms will improve human risk predictions and thereby lead to better (e.g., fairer) decisions. Yet because many policy decisions require balancing risk reduction with competing goals, improving the accuracy of predictions may not necessarily improve the quality of decisions. Through an experiment with 2,140 lay participants simulating two high-stakes government contexts, we interrogate the assumption that improving human prediction accuracy with risk assessments will improve human decisions. We provide the first direct evidence that risk assessments can systematically alter how people factor risk into their decisions. These shifts counteract the potential benefits of improved prediction accuracy. In the pretrial setting of our experiment, the risk assessment made participants more sensitive to increases in perceived risk when making decisions; this shift increased the racial disparity in pretrial detention by 1.9%. In the government home improvement loans setting of our experiment, the risk assessment made participants more risk-averse when making decisions; this shift reduced government aid by 8.3%. These results demonstrate the potential limits and harms of efforts to improve public policy by incorporating predictive algorithms into multifaceted policy decisions. If these observed behaviors occurred in practice, presenting algorithms to public servants would [More]
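The abstract's core mechanism can be seen in a small simulation: if a risk assessment makes predictions more accurate but also makes decision-makers more sensitive to the same perceived risk, the detention rate can rise despite the accuracy gain. This sketch is not the authors' experiment or data; every number (noise levels, sensitivity, threshold) is a hypothetical assumption chosen only to make the mechanism visible.

```python
# Hypothetical simulation of the accuracy-vs-sensitivity trade-off described above.
import random

random.seed(0)

def decide(perceived_risk, sensitivity, threshold=0.5):
    """Detain when weighted perceived risk crosses a fixed threshold."""
    return perceived_risk * sensitivity > threshold

def simulate(noise, sensitivity, n=100_000):
    detained = 0
    for _ in range(n):
        true_risk = random.random()                              # actual risk, uniform for simplicity
        perceived = min(1.0, max(0.0, true_risk + random.gauss(0, noise)))
        detained += decide(perceived, sensitivity)
    return detained / n

# Baseline: noisier human prediction, lower sensitivity to perceived risk.
baseline = simulate(noise=0.25, sensitivity=1.0)
# With a risk assessment: more accurate perception, but decision-makers weight
# the same perceived risk more heavily (the shift the paper reports).
with_tool = simulate(noise=0.10, sensitivity=1.2)

print(f"detention rate without tool: {baseline:.3f}")
print(f"detention rate with tool:    {with_tool:.3f}")
```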
Stuart Russell is a British computer scientist known for his contributions to artificial intelligence. He warns about the risks involved in the creation of AI systems. Artificial intelligence has become a key behind-the-scenes component of many aspects of our day-to-day lives. The promise of AI has lured many into attempting to harness it for societal benefit, but there are also concerns about its potential misuse. Dr. Stuart Russell is one of AI’s true pioneers and has been at the forefront of the field for decades. He proposes a novel solution that brings us to a better understanding of what it will take to create beneficial machine intelligence. In view of recent warnings from researchers and entrepreneurs that artificial intelligence may become too smart, major players in the technology field are considering different methods for mitigating these concerns and preserving human control. Russell argues that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control. Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans at solving the practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would be smart enough to outwit its human [More]
Discuss this talk on the Effective Altruism Forum: https://forum.effectivealtruism.org/posts/6uiXHHJQEtMaYQrti/max-tegmark-effective-altruism-existential-risk-and
Elon Musk thinks the advent of digital superintelligence is by far a more dangerous threat to humanity than nuclear weapons, and that the field of AI research must have government regulation. The dangers of advanced artificial intelligence were popularized in the late 2010s by Stephen Hawking, Bill Gates and Elon Musk, but Musk is probably the most famous public figure to express concern about artificial superintelligence. Existential risk from advanced AI is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe. One of many concerns regarding AI is that controlling a superintelligent machine, or instilling it with human-compatible values, may prove to be a much harder problem than previously thought. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals. An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, endanger or even destroy modern civilization. Such risks come in the form of natural disasters, like supervolcanoes or asteroid impacts, but an existential risk can also be self-induced or man-made, like weapons of mass destruction, which most experts agree are by far the most dangerous threat to humanity. But Elon Musk thinks otherwise: he thinks superintelligent AI is a far greater threat to humanity than nukes. Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated [More]
Human bias is a notorious hindrance to effective risk management, and humans have long relied on machines to help them compensate for these errors. In particular, algorithmic, rules-based decision-making has done much to limit human bias in risk analysis. Machine learning is now lifting this work to the next level by uncovering trends and correlations heretofore unknown, and by enabling real-time access to high-frequency data. CEOs from innovative big data companies will reveal some of the most exciting new discoveries and applications of this sophisticated technology.

Moderator:
Staci Warden, Executive Director, Center for Financial Markets, Milken Institute

Speakers:
Eduardo Cabrera, Chief Cybersecurity Officer, Trend Micro
Stuart Jones, Jr., CEO, Sigma Ratings Inc.
Mark Rosenberg, CEO and Co-Founder, GeoQuant Inc.
Stephen Scott, Founder and CEO, Starling Trust Sciences

#MIGlobal http://www.milkeninstitute.org/events/conferences/global-conference/2018/