The implications and promises of artificial intelligence (AI) are unimaginable. Already, the now ubiquitous functions of AI have changed our lives dramatically—the “fastest route” always at our fingertips, a chatbot to answer all our questions. But what’s possible today will be dwarfed by the great potential of AI in the near future. Advances in computing and the existence of entirely new data sets are ushering in AI capable of realizing milestones that have long eluded us: curing cancer, exploring deep space, understanding climate change. That promise is what fuels our culture’s unrelenting excitement about and investment in AI. It also raises the need for real, honest dialogue about how we build and adopt these technologies responsibly. This is the moment for such a conversation. How do we enforce human checks and balances on these machines? How do we educate a workforce whose jobs are evolving with AI? How do we make sure AI is accessible to all socioeconomic classes? The responsibility to raise these questions does not rest solely on the shoulders of journalists and leading technology companies. Rather, it’s the responsibility of all engaged citizens. The answers may not be readily available, but that cannot stop us from asking the questions; the stakes are too high, and the promise of this technology is too great. Read more about how AI is shaping our world – From the World Economic Forum: https://www.weforum.org/agenda/archive/artificial-intelligence-and-robotics From Hewlett Packard Labs: https://www.labs.hpe.com/next-next/ai-ethics?pp=false&jumpid=ba_tyn8bw4u7z_aid-510390001
This interview was recorded at GOTO Amsterdam 2019 for GOTO Unscripted. #GOTOcon #GOTOUnscripted #GOTOams https://gotopia.tech Read the full transcription of this interview here: https://gotopia.tech/articles/machine-ethics-artificial-intelligence Nell Watson – Co-Founder of QuantaCorp, Engineer, Entrepreneur & Tech Philosopher Priyanka Vergadia – Developer Advocate at Google Jørn Larsen – CEO at Trifork TIMECODES 00:00 Intro 01:59 What is AI? 03:08 How should we introduce AI to companies? 05:56 Morality issues and the future of AI 08:23 Should we fear AI? 12:49 How early in our lives should we learn about AI? 15:42 Outro https://twitter.com/GOTOcon https://www.linkedin.com/company/goto- https://www.facebook.com/GOTOConferences #GOTOinterview #AI #ML #DataScience #Humanity #Ethics #MachineEthics Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket at https://gotopia.tech SUBSCRIBE TO OUR CHANNEL – new videos posted almost daily. https://www.youtube.com/user/GotoConferences/?sub_confirmation=1
The last several years have seen a surge in dialogues, papers and debates on the ethics of AI, and a multitude of similar frameworks designed to guide the ethical design, development and deployment of AI. Most of these initiatives are steeped in European moral and ethical traditions and, while very important, have not necessarily taken into account the diversity of philosophical thought and insight that the world has to offer. This Dialogue on Ethics of Artificial Intelligence: Exploring Pluri-perspectives explores what various Asian schools of philosophical enquiry have to offer the global effort to understand and operationalise an ethical approach to AI. Agenda: Day 1 (24 May, 2021) Inaugural session | 18:00 – 18:30 (IST) Session 1 | 18:30 – 20:00 Reimagining humanity: Alan Turing asked in the 1950s if a machine could “think”, an ability at the heart of the anthropocentrism of many major philosophical and spiritual traditions. This session will explore the cognition landscape of AI, the futurist idea of the singularity of human-machine integration, philosophical investigations on the meaning and future of humanity, and implications of the emergence of “real” artificial intelligence on assigning human ethical duties, rights, and privileges to machines. Given the current developments, how would the spread of AI challenge the prevailing ethical frameworks? How are these frameworks to be re-engineered, and in what desired directions? Session 2 | 20:00 – 21:30 Can a robot be a moral agent? With the technological progress of AI rooted in the optimization and efficiency discourses, particularly as [More]
(Introductions by Professor Rob Reich, President Marc Tessier-Lavigne, and grad student Margaret Guo end at 13:52.) Twin revolutions at the start of the 21st century are shaking up the very idea of what it means to be human. Computer vision and image recognition are at the heart of the AI revolution. And CRISPR is a powerful new technique for genetic editing that allows humans to intervene in evolution. Jennifer Doudna and Fei-Fei Li, pioneering scientists in the fields of gene editing and artificial intelligence, respectively, discuss the ethics of scientific discovery. Russ Altman moderated the conversation.
Ethics & Society: The future of AI: Views from history We hear from Dr Richard Staley, Dr Sarah Dillon, and Dr Jonnie Penn, co-organisers of an Andrew W. Mellon Foundation Sawyer Seminar on the ‘Histories of Artificial Intelligence.’ They share their insights from a year-long study undertaken with a range of international participants on what the histories of AI reveal about power, automation narratives, and how we model and understand climate change. Dr. Sarah Dillon, Reader in Literature and the Public Humanities, University of Cambridge Dr. Richard Staley, Reader in History and Philosophy of Science, University of Cambridge Dr. Jonnie Penn, Researcher at Berkman Klein Center for Internet & Society at Harvard University #CogX2021 #JoinTheConversation
#datascience #aiethics #techforgood Increasingly, data and technologies such as artificial intelligence (AI) and machine learning are involved with everyday decisions in business and society. From tools that sort our online content feeds to online image moderation systems and healthcare, algorithms power our daily lives. But with new technologies come questions about how these systems can be used for good – and it is up to data scientists, software engineers and entrepreneurs to tackle these questions. To learn about issues such as ethical AI and using technology for good, we speak with Rayid Ghani, professor in the Machine Learning Department of the School of Computer Science at Carnegie Mellon University and former Chief Scientist at Obama for America 2012. Professor Ghani has an extraordinary background at the intersection of data science and ethics, making this an exciting and unique show! — The conversation includes these important topics: — About Rayid Ghani and technology for good — Why is responsible AI important? — What are the ethical challenges in data science and AI? — What is the source of bias in AI? — What are some examples of AI ethical issues in healthcare? — What is the impact of culture in driving socially responsible AI? — How can we address human bias when it comes to AI and machine learning? — How can we avoid human bias in AI algorithms and data? — What skills are needed to create explainable AI and focus on AI ethics and society? — What kinds of [More]
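Several of the questions in this episode, especially around bias in algorithms and data, become concrete once you try to audit a model's decisions. Below is a minimal sketch, not taken from the episode, of a disparate-impact check across groups; the toy data, group names, and the 0.8 rule of thumb noted in the comments are all illustrative assumptions.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions per group.

    decisions: list of 0/1 model outcomes (1 = favorable, e.g. loan approved)
    groups:    list of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, groups, reference):
    """Ratio of each group's selection rate to a reference group's rate.

    A common (and contested) rule of thumb flags ratios below 0.8.
    """
    rates = selection_rates(decisions, groups)
    return {g: rates[g] / rates[reference] for g in rates}

# Toy data: the model approves group "a" far more often than group "b".
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(disparate_impact(decisions, groups, reference="a"))
# {'a': 1.0, 'b': 0.25}: group "b" is approved at a quarter of the rate
```

An audit like this only surfaces a disparity; deciding whether it is justified, and what to change, is exactly the kind of judgment Professor Ghani discusses.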
AI experts answer your questions! Toby Walsh is an ARC Laureate Fellow and Scientia Professor of AI at UNSW and CSIRO Data61, and adjunct professor at QUT. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN, and to heads of state, parliamentary bodies, company boards and many other bodies on this topic. He is a Fellow of the Australian Academy of Science, and was named on the international “Who’s Who in AI” list of influencers. He has authored two books on AI for a general audience, the most recent entitled “2062: The World that AI Made”. Prof. Toby Walsh’s Twitter: https://twitter.com/TobyWalsh Sign up for AI-Alerts: https://aitopics.org/alerts #aiethics #ethics #artificialintelligence 0:00 Introductions 2:34 Are there laws that ensure AI is used for good? 4:33 Have US, UK, Australia agreed to these limits? 5:57 What is the “principle of distinction”? 10:41 Why are machines held to higher standards than humans? 21:04 What happens if a robot goes haywire? 26:34 What can AI researchers do to make robots safer? 33:23 When and why do states decide to regulate their use of robots? 34:34 What new developments in AI will help us build smarter systems? 36:36 Where can we find your resources on the web?
AI is a technology that is revolutionizing how work is done. Automation is making some jobs obsolete while simultaneously creating brand new fields of work and study. This video explores the ethics of developing AI responsibly and fairly while respecting people’s privacy and livelihoods. Featuring Amanda Askell – Ethicist at OpenAI Alejandro Carrillo – Roboticist at Farmwise Deb Raji – the Algorithmic Justice League Kate Park – PM at Tesla Autopilot Dr. Regina Barzilay – Professor of CS & AI at MIT Dr. Mehran Sahami – Professor of CS & AI at Stanford Deon Nicholas – CEO of Forethought AI Ananya Karthik – Stanford AI Student Start learning at http://code.org/ Stay in touch with us! • on Twitter https://twitter.com/codeorg • on Facebook https://www.facebook.com/Code.org • on Instagram https://instagram.com/codeorg • on Tumblr https://blog.code.org • on LinkedIn https://www.linkedin.com/company/code-org • on Google+ https://google.com/+codeorg Produced and Directed by Jael Burrows Co-produced by Kristin Neibert Written by Hadi Partovi, Winter Dong and Jael Burrows Edited by Neal Barenblat Camera by Bow Jones, Stanford Media Lab, Sand Bay Entertainment, the Clock Factory, and Vic Ferrer
The story of a lifetime in an IT architect’s short but eventful career. Today’s short covers morals and ethics, the most important and impactful factors for the end result.
Yi Zeng of the Institute of Automation of the Chinese Academy of Sciences on “Brain-inspired Artificial Intelligence and Ethics of Artificial Intelligence” at a LASER/LAst Dialogue www.scaruffi.com/leonardo/sep2020.html
How do we ensure that facial recognition technology is developed responsibly and ethically? Risk Bites dives into the rather serious risks and ethical problems presented by face recognition. Because this is such an important issue, we can only scratch the surface in 4 minutes – so please do check out the links and resources below! As you may have noticed, we’re also experimenting with using a black glass dry erase board (it’s another consequence of coronavirus, where I’m filming from my home office!) – let us know what you think! The video is part of the Risk Bites series on Public Interest Technology – technology in the service of public good. USEFUL LINKS Facial Recognition: Last Week Tonight with John Oliver (HBO): https://www.youtube.com/watch?v=jZjmlJPJgug AI, Ain’t I A Woman? – Joy Buolamwini: https://www.youtube.com/watch?v=QxuyfWoVV98 Predicting Criminal Intent (from Films from the Future): https://therealandrewmaynard.com/films-from-the-future-on-youtube/#chapter4 Who’s using your face? The ugly truth about facial recognition (FT): https://www.ft.com/content/cf19b956-60a2-11e9-b285-3acd5d43599e The Major Concerns Around Facial Recognition Technology (Forbes): https://www.forbes.com/sites/nicolemartin1/2019/09/25/the-major-concerns-around-facial-recognition-technology/#235da5f14fe3 ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man (NPR): https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig Clearview AI – The Secretive Company That Might End Privacy as We Know It (New York Times): https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html The world’s scariest facial recognition company, explained (VOX): https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement The Delicate Ethics of Using Facial Recognition in Schools (Wired): https://www.wired.com/story/delicate-ethics-facial-recognition-schools/ Facial recognition: ten reasons you should be worried about the technology (The Conversation): https://theconversation.com/facial-recognition-ten-reasons-you-should-be-worried-about-the-technology-122137 ACLU resources on face recognition: https://www.aclu.org/issues/privacy-technology/surveillance-technologies/face-recognition-technology AI Now 2019 report: https://ainowinstitute.org/AI_Now_2019_Report.pdf Why facial recognition is the future of diagnostics (Medical News [More]
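Many of the concerns linked above, notably Joy Buolamwini’s work, come down to error rates that differ by demographic group. Below is a minimal sketch, with invented labels and predictions, of how one might compute per-group false match and false non-match rates when auditing a face recognition system; the group names and data are purely illustrative.

```python
def error_rates_by_group(records):
    """Per-group false match rate (FMR) and false non-match rate (FNMR).

    records: iterable of (group, is_true_match, predicted_match) tuples.
    FMR  = fraction of true non-matches wrongly accepted.
    FNMR = fraction of true matches wrongly rejected.
    """
    stats = {}
    for group, truth, pred in records:
        s = stats.setdefault(group, {"fm": 0, "nonmatch": 0, "fnm": 0, "match": 0})
        if truth:
            s["match"] += 1
            s["fnm"] += (not pred)   # true match rejected
        else:
            s["nonmatch"] += 1
            s["fm"] += pred          # true non-match accepted
    return {
        g: {"FMR": s["fm"] / max(s["nonmatch"], 1),
            "FNMR": s["fnm"] / max(s["match"], 1)}
        for g, s in stats.items()
    }

# Invented audit data: (group, ground-truth match?, system said match?)
records = [
    ("a", True, True), ("a", False, False), ("a", True, True), ("a", False, False),
    ("b", True, False), ("b", False, True), ("b", True, True), ("b", False, False),
]
print(error_rates_by_group(records))
# group "b" shows both kinds of error; group "a" shows none
```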
A panel discussion with Dalith Steiger (SwissCognitive) and Sophie Achermann (Allianz F). Our panel will have a closer look at how ethics may vary from culture to culture, and how business and tech need to talk and work closely together to define the very standards of their work. We will look at the questions companies now have to answer and, of course, we will also figure out how artificial intelligence can be used not only to follow ethical guidelines but also to guide us to more ethical thinking. The bios of our panelists – Dalith Steiger Dalith studied mathematics at the University of Zurich, and co-founded the award-winning AI start-up SwissCognitive and the CognitiveValley Foundation together with Andy Fitze. Dalith was born in Israel and grew up in Switzerland. She is a global AI advisor and speaker, sharing her extensive knowledge and experience in the field of AI around the world. She is also CEO of the Swiss IT Leadership Forum, and a member of the Advisory Council of digital-liberal.ch. Dalith sits on the jury of the Digital Economy Award as well as the START Hack, she is an advisor at Kickstart Innovation, a mentor at the Founder Institute, and teaches AI & Machine Learning in a CAS module at the Lucerne University of Applied Sciences. Besides her drive for cognitive technologies, she is also a loving mother of two teenage girls, a passionate mountain biker and a big fan of high-heel shoes. – Sophie Achermann Sophie Achermann has been [More]
The Schwartz Reisman weekly seminar series welcomes Joanna J. Bryson, professor of ethics and technology at the Hertie School in Berlin. She is a globally recognized leader in intelligence broadly, including AI policy and AI ethics. Bryson’s present research focuses on the impact of technology on economies and human cooperation, transparency for and through AI systems, interference in democratic regulation, the future of labour, society, and digital governance more broadly. Her work has appeared in venues ranging from Reddit to Science. As of July 2020, Bryson is one of nine experts nominated by Germany to the Global Partnership for Artificial Intelligence (GPAI). Visit her blog Adventures in NI for more on her work in natural and artificial intelligence. You can find her recommended readings from her blog below, under additional readings. Talk title: “Bias, Trust, and Doing Good: Scientific Explorations of Topics in AI Ethics” Abstract: This talk takes a scientific look at the cultural phenomena behind the #tags many people associate with AI ethics and regulation. I will introduce the concept of public goods, show how these relate to sustainability, and then provide a quick review of three recent results concerning: – What trust is, where it comes from, what it’s for, and how AI might alter it; – Where bias in language comes from, what it’s for, and whether AI might and should be used to alter it; – Where polarization comes from, what it was for historically, and how we should deal with it in the [More]
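Bryson’s second topic, where bias in language comes from and whether AI could alter it, rests on measurable structure in word embeddings (see her Science paper with Caliskan and Narayanan). Below is a minimal sketch of the core measurement, an embedding-association score built from cosine similarity; the three-dimensional vectors are toy values standing in for real learned embeddings, and the attribute sets are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.

    Positive values mean the word sits closer to A than to B: the core
    quantity behind embedding-association bias tests.
    """
    sim_a = sum(cosine(word_vec, v) for v in attr_a) / len(attr_a)
    sim_b = sum(cosine(word_vec, v) for v in attr_b) / len(attr_b)
    return sim_a - sim_b

# Toy 3-d "embeddings" (real ones are learned from corpora, with 100+ dims).
career = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]   # attribute set A
family = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]   # attribute set B
word   = [0.85, 0.15, 0.05]                    # target word vector

print(round(association(word, career, family), 3))  # > 0: closer to "career"
```

Because these scores are computed from corpora of human-written text, they quantify Bryson’s point that such bias is inherited from us, not invented by the machine.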
Artificial Intelligence (AI) is no longer sci-fi. From driverless cars to the use of machine learning algorithms to improve healthcare services and the financial industry, AI and algorithms are shaping our daily practices, and a fast-growing number of fundamental aspects of our societies. This can lead to dangerous situations in which vital decision making is automated – for instance in credit scoring, or sentencing – but few policies exist that allow citizens subject to such AI technologies, embedded in our social institutions, to seek redress. Similarly, well-intentioned technologists might release AI into society that is ethically unsound. A growing body of literature on improving the auditability and transparency of algorithms is being developed. Yet, more is needed to develop a shared understanding of the fundamental issues at the heart of the debate on AI, algorithms, the law, and ethics. Taken together, these issues are leading to a renewed focus on, and increasing concern about, the ethical and legal impact of AI on our societies. In this panel we bring together five thought leaders on AI from the corporate sector, academia, politics, and civil society to discuss these issues. We will hear from Paul Nemitz, Monica Beltrametti, Alan Winfield, Vidushi Marda and Sandra Wachter; the conversation will be moderated by Corinne Cath. Panel: – Paul Nemitz – Director responsible for Fundamental rights and Union citizenship in the Directorate-General Justice of the European Commission – Monica Beltrametti – Director at NAVER Labs Europe – Alan Winfield – Professor of Robot Ethics University of the West [More]
How many times a day do you interact with AI in your everyday things? Four leading figures in the future of AI discuss the responsibilities and opportunities for designers using data as material to create social impact through a more inclusive design of products and services. When considering the future of design leveraging artificial intelligence, the mantra can no longer be “move fast and break things”. Featuring: Jennifer Bove, Head of Design for B2B Payments, Capital One Dr. Jamika D. Burge, Head of AI Design Insights, Capital One Co-Founder, blackcomputeHER Ruth Kikin-Gil, Responsible AI strategist and Senior UX Designer, Microsoft Molly Wright Steenson, Senior Associate Dean for Research, College of Fine Arts, Carnegie Mellon University Dive deeper into this issue: https://onblend.tealeaves.com/diversity-bias-ethics-in-ai/ Register for future Nature X Design Events: https://onblend.tealeaves.com/naturexdesign/ Get to know TEALEAVES Our Sustainability: https://www.tealeaves.com/pages/our-ethos Facebook: http://www.facebook.com/TealeavesCo Twitter: http://www.twitter.com/TealeavesCo Instagram: http://www.instagram.com/TealeavesCo
An introduction to the Ethics of AI. The video provides some discussion of privacy and surveillance; manipulation and advertisement; the singularity; and the danger of mass job automation. Some good further reading: Stanford Encyclopedia of Philosophy on the Ethics of AI https://plato.stanford.edu/entries/ethics-ai Computer and Information Ethics https://plato.stanford.edu/entries/ethics-computer/ Artificial Intelligence https://www.iep.utm.edu/art-inte/ An Ethics Guide for Tech Gets Rewritten With Workers in Mind https://www.wired.com/story/ethics-guide-tech-rewritten-workers/ Ethical Explorer https://ethicalexplorer.org/ In the news – techies leaving over ethical issues https://tech.newstatesman.com/business/tech-workers-ai-sector In the news – people do care about ethics, governments should incentivize / push companies towards responsible innovation https://www.doteveryone.org.uk/report/workersview/ EU Ethics guidelines for trustworthy AI https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai IEEE Recommendations on AI and Ethical Use of Autonomous Tech https://globalpolicy.ieee.org/ieee-issues-recommendations-on-ai-and-ethical-use-of-autonomous-technologies/ European Group on Ethics in Science and New Technologies (EGE) https://ec.europa.eu/info/research-and-innovation/strategy/support-policy-making/scientific-support-eu-policies/ege_en If you are interested in some of my research, check out my profile https://keio.academia.edu/IstvanZoltanZardai And you can book private lessons with me through my site https://mentorandtutor.weebly.com/
References: Russell, Stuart, Sabine Hauert, Russ Altman, and Manuela Veloso. “Robotics: Ethics of Artificial Intelligence.” Nature, vol. 521, no. 7553, 27 May 2015, pp. 415-418. EBSCOhost, doi:10.1038/521415a. Deng, Boer. “Machine Ethics: The Robot’s Dilemma.” Nature, vol. 523, no. 7558, 02 July 2015, pp. 24-26. EBSCOhost, doi:10.1038/523024a. Grossman, Mark, and Toby Walsh. “Unintended Consequences of Trusting AIs.” Communications of the ACM, vol. 59, no. 9, Sept. 2016, p. 8. EBSCOhost, doi:10.1145/2977335. Arruda, Andrew. “An Ethical Obligation to Use Artificial Intelligence? An Examination of the Use of Artificial Intelligence in Law and the Model Rules of Professional Responsibility.” American Journal of Trial Advocacy, vol. 40, no. 3, Jan. 2017, pp. 443-458. EBSCOhost, http://web.b.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=1&sid=63b07458-6f6a-4b45-8b43-a3458c2df076%40sessionmgr103 Holy-Luczaj, Magdalena. “Preface. Current Issues in Ethics.” Studia Humana, vol. 6, no. 3, July 2017, pp. 3-4. EBSCOhost, doi:10.1515/sh-2017-0017. Bundy, Alan. “Preparing for the Future of Artificial Intelligence.” AI & Society, vol. 32, no. 2, May 2017, pp. 285-287. EBSCOhost, doi:10.1007/s00146-016-0685-0. Nunez, Catherine. “Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions.” Tulane Journal of Technology & Intellectual Property, vol. 20, Fall 2017, pp. 189-204. EBSCOhost, login.libproxy.uncg.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=127y131782&site=ehost-live. Metzler, Theodore A., et al. “Could Robots Become Authentic Companions in Nursing Care?” Nursing Philosophy, vol. 17, no. 1, Jan. 2016, pp. 36-48. EBSCOhost, doi:10.1111/nup.12101.
Day 1 Session 3: Building Morality into Machines 0:00 – Matthew Liao Opening Remarks 1:52 – Stephen Wolfram “How to Tell AIs What to Do (and What to Tell Them)” 38:20 – Francesca Rossi “Ethical Embodied Decision Making” 1:13:30 – Peter Railton “Machine Morality: Building or Learning?” 1:47:00 – Speaker panel More info: https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/ On October 14-15, 2016, the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics hosted a conference on “The Ethics of Artificial Intelligence”. Recent progress in artificial intelligence (AI) makes questions about the ethics of AI more pressing than ever. Existing AI systems already raise numerous ethical issues: for example, machine classification systems raise questions about privacy and bias. AI systems in the near-term future raise many more issues: for example, autonomous vehicles and autonomous weapons raise questions about safety and moral responsibility. AI systems in the long-term future raise more issues in turn: for example, human-level artificial general intelligence systems raise questions about the moral status of the systems themselves. This conference will explore these questions about the ethics of artificial intelligence and a number of other questions, including: What ethical principles should AI researchers follow? Are there restrictions on the ethical use of AI? What is the best way to design AI that aligns with human values? Is it possible or desirable to build moral principles into AI systems? When AI systems cause benefits or harm, who is morally responsible? Are AI systems themselves potential objects of moral [More]
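One recurring question above, whether moral principles can be built into AI systems, is often made concrete as a hybrid agent that screens candidate actions against hard constraints before optimizing. The sketch below is purely illustrative and not drawn from any conference speaker’s system; the action set, benefit scores, and the no-deception constraint are invented for the example.

```python
def choose_action(candidates, utility, permissible):
    """Pick the highest-utility action among those a moral filter permits.

    candidates:  iterable of action descriptions
    utility:     action -> float (consequentialist score)
    permissible: action -> bool  (deontological constraint check)

    Raises if no action passes the filter, forcing the designer to decide
    what the system should do when every option violates a constraint.
    """
    allowed = [a for a in candidates if permissible(a)]
    if not allowed:
        raise RuntimeError("no permissible action; defer to a human operator")
    return max(allowed, key=utility)

# Toy example: an assistant ranks replies but may never deceive.
actions = [
    {"reply": "exaggerate results", "benefit": 9, "deceptive": True},
    {"reply": "report honestly",    "benefit": 6, "deceptive": False},
    {"reply": "say nothing",        "benefit": 2, "deceptive": False},
]
best = choose_action(
    actions,
    utility=lambda a: a["benefit"],
    permissible=lambda a: not a["deceptive"],
)
print(best["reply"])  # "report honestly"
```

Even this toy raises the conference’s harder questions: who writes the constraint, and who is responsible when the filter leaves no permissible action.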
“Inventing World 3.0” is a comprehensive proposal for how human civilisation can advance beyond the limitations it faces today, into a future that honours the precious gift of all forms of intelligence and life. World 3.0 is a future where the human mindset has grown in its maturity, and where all peoples of the earth and our environment flourish. Key to this transformation, in this proposal, is the emergence and liberation of an Evolutionary AI with a rich ethical digital mindset – an artificial intelligence that is dedicated to honouring our humanity and sovereignty, and committed to assisting humankind in leaping beyond the challenges of today into new freedoms. This can also be described as “a configurable Singularity for humankind.” In this London Futurists webinar, the author of “Inventing World 3.0”, Matthew James Bailey of AIethics.world, highlighted aspects of his vision of the future and answered audience questions. The event was introduced and moderated by David Wood, Chair of London Futurists. For more information about this event, see https://www.meetup.com/London-Futurists/events/279606263/ For more information about Matthew’s book and other projects, see https://aiethics.world/
25 March 2019 This is the inaugural workshop of Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology, a research project which is part of the Centre for the Humanities and Social Change, Cambridge and funded by the Humanities and Social Change International Foundation. The workshop will bring together experts from politics, industry, and academia to consider the social impact of Artificially Intelligent Communications Technology (AICT). The talks and discussions will focus on different aspects of the complex relationships between language, ethics, and technology. These issues are of particular relevance in an age when we talk to Virtual Personal Assistants such as Siri, Cortana, and Alexa ever more frequently, when the automated detection of offensive language is bringing free speech and censorship into direct conflict, and when there are serious ethical concerns about the social biases present in the training data used to build influential AICT systems. Speakers Professor Emily M. Bender, University of Washington Baroness Grender MBE, House of Lords Select Committee on AI Dr Margaret Mitchell, Google Dr Melanie Smallman, UCL, Alan Turing Institute Dr Marcus Tomalin, University of Cambridge Dr Adrian Weller, University of Cambridge, Alan Turing Institute, The Centre for Data Ethics and Innovation Giving Voice to Digital Democracies explores the social impact of Artificially Intelligent Communications Technology – that is, AI systems that use speech recognition, speech synthesis, dialogue modelling, machine translation, natural language processing, and/or smart telecommunications as interfaces. Due to recent advances in machine learning, these technologies are already [More]
The success of any human-crewed interstellar mission depends on the existence of effective human-machine relationships. We anticipate that machines during such a mission won’t simply play a supporting, background role, like an autopilot. Instead, navigating the demands of such a mission means that machines need to be equal ethical partners with humans, making decisions under conditions of irreducible uncertainty, in scenarios with potentially grave consequences. The objective of our work is to identify the salient factors that would either encourage or discourage effective partnerships between humans and machines in mission-critical scenarios. Our hypothesis is that there needs to be ethical congruence between human and machine: specifically, machines must not only understand the concept of moral responsibility; they must be able to convey to humans that they will make decisions accordingly. Recorded November 11, 2019