Jean-François Bonnefon of the Toulouse School of Economics delivered the keynote "The Moral Machine Experiment" at IJCAI-ECAI 2018, the joint meeting of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, the premier international gathering of AI researchers.

The success of any human-crewed interstellar mission depends on effective human-machine relationships. We anticipate that machines on such a mission will not simply play a supporting, background role, like an autopilot. Instead, the demands of such a mission require machines to be equal ethical partners with humans, making decisions under conditions of irreducible uncertainty, in scenarios with potentially grave consequences.

The objective of our work is to identify the salient factors that either encourage or discourage effective partnerships between humans and machines in mission-critical scenarios. Our hypothesis is that there must be ethical congruence between human and machine: machines must not only understand the concept of moral responsibility; they must also be able to convey to humans that they will make decisions accordingly.

Recorded November 11, 2019

ACHLR ‘The Ethics of Artificial Intelligence: Moral Machines’ Public Lecture

Learn more: https://www.qut.edu.au/law/research

Artificial Intelligence and Experience Series (AIEX):
“Do People Perceive Machines as Moral Agents?”
Bert F. Malle
Department of Cognitive, Linguistic, and Psychological Sciences, Brown University

October 25, 2018

Denise Howell, J. Michael Keyes and Amanda Levendowski discuss the Moral Machine, an MIT Media Lab platform “for gathering a human perspective on moral decisions made by machine intelligence.”
For the full episode, visit https://twit.tv/twil/356

Prof. Edmond Awad
(Institute for Data Science and Artificial Intelligence at the University of Exeter)

Abstract: 
I describe the Moral Machine, an internet-based serious game exploring the multidimensional ethical dilemmas faced by autonomous vehicles. The game enabled us to gather 40 million decisions from 3 million people in 200 countries/territories. I report the various preferences estimated from these data and document interpersonal differences in the strength of those preferences. I also report cross-cultural ethical variation and identify major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. I discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics. Finally, I describe follow-up work that builds on this project.
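As a rough illustration of the country-clustering step mentioned in the abstract, here is a minimal, hypothetical Python sketch: hierarchical clustering of per-country moral-preference vectors. All country names and preference values below are invented placeholders, not figures from the Moral Machine data, and the published analysis used its own estimation pipeline.

```python
# Hypothetical sketch: cluster countries by estimated moral-preference vectors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows: countries; columns: strength of a preference, e.g.
# (spare the young, spare more lives, spare pedestrians over passengers).
countries = ["A", "B", "C", "D"]
prefs = np.array([
    [0.60, 0.80, 0.55],
    [0.58, 0.79, 0.50],
    [0.20, 0.70, 0.30],
    [0.22, 0.68, 0.33],
])

# Agglomerative clustering with Ward linkage; cut the tree into two clusters.
Z = linkage(prefs, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
for country, label in zip(countries, labels):
    print(country, "-> cluster", label)
```

With placeholder vectors like these, countries A and B fall into one cluster and C and D into another, mirroring in miniature the kind of cross-cultural grouping the talk describes.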

Bio: Edmond Awad is a Lecturer (Assistant Professor) in the Department of Economics and the Institute for Data Science and Artificial Intelligence at the University of Exeter. He is also an Associate Research Scientist at the Max Planck Institute for Human Development and a founding editorial board member of the AI and Ethics journal, published by Springer. Before joining the University of Exeter, Edmond was a Postdoctoral Associate at the MIT Media Lab (2017-2019). In 2016, Edmond led the design and development of Moral Machine, a website that gathers human decisions on moral dilemmas faced by driverless cars. The website has been visited by over 4 million users, who have contributed their judgements on 70 million dilemmas. Another website that Edmond co-created, called MyGoodness, has collected judgements on over 2 million charity dilemmas. Edmond's work has appeared in major academic journals, including Nature, PNAS, and Nature Human Behaviour, and has been covered by major media outlets including The Associated Press, The New York Times, The Washington Post, Der Spiegel, Le Monde and El País. Edmond holds a bachelor's degree (2007) in Informatics Engineering from Tishreen University (Syria), a master's degree (2011) in Computing and Information Science and a PhD (2015) in Argumentation and Multi-agent Systems from the Masdar Institute (now Khalifa University; UAE), and a master's degree (2017) in Media Arts and Sciences from MIT. Edmond's research interests are in AI, ethics, computational social science and multi-agent systems.

Speaker: Toby Walsh

The AI revolution will transform our political, social and economic systems. It will impact not just the workplace but many other areas of our society, such as politics and education. There are many ethical challenges ahead: ensuring that machines are fair, transparent, trustworthy, protective of our privacy, and respectful of many other fundamental rights. Education is likely to be one of the main tools available to prepare for this future. Toby Walsh, Scientia Professor of Artificial Intelligence at Data61, University of New South Wales, will argue that a successful society will be one that embraces the opportunities these technologies promise while preparing and helping its citizens through this time of immense change. Join him in this session, which aims to stimulate debate and discussion about AI, education and 21st-century skill needs.

www.oeb.global

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it "learn" to choose in situations that would strain human decision making? Can morality be programmed? We will tackle these questions and more as leading AI researchers, roboticists, neuroscientists, and legal experts debate the ethics and morality of thinking machines.
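To make the panel's question concrete, here is a deliberately simplistic Python sketch of the utilitarian-style calculus such debates often center on: pick the action with the lowest expected harm. The actions, probabilities, and harm weights are invented for illustration; this is a caricature of one contested approach, not a proposal for how real systems should decide.

```python
# Toy expected-harm minimization for a split-second dilemma.
# All numbers are hypothetical; real systems face far murkier inputs.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm) pairs for one action."""
    return sum(p * harm for p, harm in outcomes)

actions = {
    # Invented options and weights for the self-driving-car scenario above.
    "swerve_into_traffic": [(0.7, 5.0), (0.3, 0.0)],
    "brake_straight":      [(0.5, 8.0), (0.5, 0.0)],
}

best = min(actions, key=lambda a: expected_harm(actions[a]))
print(best)  # the action with the lowest expected harm
```

The panel's disagreements begin exactly where this sketch ends: who sets the harm weights, whether harms are commensurable at all, and whether minimizing an expectation is a defensible moral rule.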

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.

Subscribe to our YouTube Channel for all the latest from WSF.
Visit our Website: http://www.worldsciencefestival.com/
Like us on Facebook: https://www.facebook.com/worldsciencefestival
Follow us on Twitter: https://twitter.com/WorldSciFest

Original Program Date: June 4, 2016
MODERATOR: Bill Blakemore
PARTICIPANTS: Fernando Diaz, Colonel Linell Letendre, Gary Marcus, Matthias Scheutz, Wendell Wallach

Can Life and Death Decisions Be Coded? 00:00

Siri… What is the meaning of life? 1:49

Participant introductions 4:01

Asimov’s Three Laws of Robotics 6:22

In 1966, ELIZA was one of the first artificial intelligence systems. 10:20

What is AlphaGo? 15:43

Tay, Microsoft's AI Twitter chatbot 19:25

Can you test learning systems? 26:31

Robots and automatic reasoning demonstration 30:31

How do driverless cars work? 39:32

What is the trolley problem? 49:00

What is autonomy in military terms? 56:40

Are landmines the first automated weapons? 1:10:30

Defining how artificial intelligence learns 1:16:03

Using Minecraft to teach AI about humans and their interactions 1:22:27

Should we be afraid that AI will take over the world? 1:25:08

The William G. McGowan Charitable Fund is a philanthropic family foundation established in 1993 to perpetuate William McGowan’s tradition of compassionate philanthropy and ethical leadership. Today, the Fund preserves the legacy of William McGowan while embodying his tireless spirit and determined optimism.

To this end, the Fund promotes, nurtures, and supports initiatives in three program areas: Education, Human Services, and Healthcare & Medical Research. Through the McGowan Fellows Program, the Fund supports and inspires emerging business leaders. In partnership with the nation’s leading graduate business programs, the Fund aims to imbue these future leaders with a framework for ethical decision-making and establish an ongoing dialogue on the importance of ethical practices.

Chair: Ned Block

Links to panelists talks:
S. Matthew Liao (NYU, Bioethics)
https://www.youtube.com/watch?v=qPIqZ1rs-j8

Eric Schwitzgebel (UC Riverside, Philosophy) and Mara Garza (UC Riverside, Philosophy)
https://www.youtube.com/watch?v=54-FI4qpwa8

John Basl (Northeastern, Philosophy) and Ronald Sandler (Northeastern, Philosophy)
https://www.youtube.com/watch?v=m4OUitBEoiw

Machine ethics is an emerging discipline that refines ethical problems into computational form, something machines and humans can both reason about. New technologies can make ethical decisions calculable and transactional for the first time. Furthermore, Artificial Moral Advisors can help inform human beings of the potential trade-offs and repercussions of their decisions, and help people live more ethically. Nell Watson believes these new capabilities reinforce one another and have the potential to reshape the moral fabric of our society within a generation.
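To make the "computational" framing concrete, here is a toy Python sketch of what an Artificial Moral Advisor might look like in spirit: a proposed action is scored against explicitly declared principles, and the advisor reports which principles it would trade off. The principle names, scoring functions, and threshold are all invented for illustration and are not drawn from Watson's talk.

```python
# Hypothetical Artificial Moral Advisor: flag which declared principles
# a proposed action would compromise. Scores in [0, 1]; higher is worse.

PRINCIPLES = {
    "honesty":  lambda action: action.get("deception", 0.0),
    "non_harm": lambda action: action.get("expected_harm", 0.0),
    "privacy":  lambda action: action.get("data_exposure", 0.0),
}

def advise(action, tolerance=0.2):
    """Return the principles this action would compromise beyond tolerance."""
    return {name: score(action)
            for name, score in PRINCIPLES.items()
            if score(action) > tolerance}

proposal = {"deception": 0.0, "expected_harm": 0.35, "data_exposure": 0.1}
print(advise(proposal))  # -> {'non_harm': 0.35}
```

Even this toy version surfaces the talk's central point: once principles are declared explicitly, trade-offs between them become visible and discussable rather than implicit.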
– This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

How do different cultures value human life? To find out, researchers created a viral online experiment to gather data from millions of participants across the world. Some values generalised across cultures, but others came as a surprise. Find out more in this Nature Video.

Take part in the experiment here: http://moralmachine.mit.edu

Find the original research paper here: https://www.nature.com/articles/s41586-018-0637-6

*CORRECTION* The spelling of Massachusetts at 00:38 is incorrect. We regret the error.

SixArticles: Issue #8, AI and the Future of Work

By pairing the power of AI systems and human wisdom, scientists at Duke University hope to offer a tool for strengthening our moral capacities.

Learn more at http://diverseintelligences.com/.

VR robots like a human. Really???
Designing a Moral Machine

WATCH FULL EPISODE: https://youtu.be/NYNN87txLWQ

.@SamHarrisOrg on how @WestworldHBO crosses the uncanny valley of robotics & raises moral issues & questions about humanity - w/ @jason - THX @wistia

Today’s guest is Sam Harris, philosopher, neuroscientist and best-selling author of books including “Waking Up,” “The End of Faith,” “Letter to a Christian Nation,” and “The Moral Landscape.” Jason and Sam explore a wide range of topics, including the ethics of robots, the value of meditation, Trump’s lies, and Sam’s most recent obsession, AI, which stemmed from an initial conversation with Elon Musk. Sam argues that the threat of uncontrolled AI is one of the most pressing issues of our time and poses the question: can we build AI without losing control over it? The two then discuss why meditation is so important for entrepreneurs and business people. Sam has built his brand and fan base around radical honesty and authenticity, so the conversation naturally segues to Trump and his lies. This is only the first of two parts, so stay tuned for much more.

For full show notes, subscribe to http://thisweekinstartups.com/about/#allsubscribe
