How do we ensure that facial recognition technology is developed responsibly and ethically? Risk Bites dives into the rather serious risks and ethical problems presented by face recognition. Because this is such an important issue, we can only scratch the surface in 4 minutes – so please do check out the links and resources below! As you may have noticed, we’re also experimenting with a black glass dry-erase board (it’s another consequence of coronavirus, as I’m filming from my home office!) – let us know what you think! The video is part of the Risk Bites series on Public Interest Technology – technology in the service of public good. USEFUL LINKS Facial Recognition: Last Week Tonight with John Oliver (HBO): AI, Ain’t I A Woman? – Joy Buolamwini: Predicting Criminal Intent (from Films from the Future): Who’s using your face? The ugly truth about facial recognition (FT): The Major Concerns Around Facial Recognition Technology (Forbes): ‘The Computer Got It Wrong’: How Facial Recognition Led To False Arrest Of Black Man (NPR): Clearview AI – The Secretive Company That Might End Privacy as We Know It (New York Times): The world’s scariest facial recognition company, explained (VOX): The Delicate Ethics of Using Facial Recognition in Schools (Wired): Facial recognition: ten reasons you should be worried about the technology (The Conversation): ACLU resources on face recognition: AI Now 2019 report: Why facial recognition is the future of diagnostics (Medical News [More]
Ahmad-Reza Sadeghi, Computer Science, Technische Universität Darmstadt and Head of System Security Lab Claudia Diaz, Associate Professor, Computer Security and Industrial Cryptography group, KU Leuven Mark Girolami, Statistics, Imperial College London and Programme Director of The Alan Turing Institute-Lloyd’s Register Foundation Programme in Data-Centric Engineering Moderator: David Sands, Chalmers University of Technology Chalmers Initiative Seminar: Digitalisation – Security & Privacy | Machine Intelligence Thursday 15 March 2018 The initiative seminar is a non-commercial arrangement by Chalmers University of Technology, Area of Advance Information and Communication Technology.
The Delivering the Human Future Conference was a worldwide conference held in March 2021 on the existential threats that humanity faces as a species and the solutions we need to overcome them. View the Plenary Letter and sign the Petition calling for action today! The Delivering the Human Future Conference was hosted by: The Council for the Human Future (CHF) The Millennium Alliance for Humanity and the Biosphere (MAHB) The Common Home of Humanity (CHH)
How dangerous could artificial intelligence turn out to be, and how do we develop ethical AI? Risk Bites dives into AI risk and AI ethics, with ten potential risks of AI we should probably be paying attention to now if we want to develop the technology safely, ethically, and beneficially, while avoiding the dangers. With author of Films from the Future and ASU professor Andrew Maynard. Although the video doesn’t include the jargon usually associated with AI risk and responsible innovation, the ten risks listed address: 0:00 Introduction 1:07 Technological dependency 1:25 Job replacement and redistribution 1:43 Algorithmic bias 2:03 Non-transparent decision making 2:27 Value misalignment 2:44 Lethal autonomous weapons 2:59 Re-writable goals 3:11 Unintended consequences of goals and decisions 3:31 Existential risk from superintelligence 3:51 Heuristic manipulation There are many other potential risks associated with AI, but as always with risk, the more important questions concern the nature, context, type, and magnitude of the risks’ impacts, together with the relevant benefits and tradeoffs. The video is part of the Risk Bites series on Public Interest Technology – technology in the service of public good. #AI #risk #safety #ethics #aiethics USEFUL LINKS AI Asilomar Principles Future of Life Institute Stuart Russell: Yes, We Are Worried About the Existential Risk of Artificial Intelligence (MIT Technology Review) We Might Be Able to 3-D-Print an Artificial Mind One Day (Slate Future Tense) The Fourth Industrial Revolution: what it means, how to respond. Klaus Schwab (2016) ASU [More]
Affective Computing: Opportunities and risks of emotional AI. Presentation given at CogX 2019, on the Ethics Stage. Lucia Komljen; Head of Insight & Strategy for Product Innovation, Telefonica Noel Sharkey; Co-Director, Professor, Chairman, The University of Sheffield Ross Harper; CEO & Founder, Limbic Minter Dial; Author, Various Maja Pantic; Professor of Affective and Behavioural Computing, Imperial College London
New videos DAILY: Join Big Think Edge for exclusive video lessons from top thinkers and doers: ———————————————————————————- We have no guarantee that a superintelligent A.I. is going to do what we want. Once we create something many times more intelligent than we are, it may be “insane” to think we can control what it does. What’s the best bet to ensure superintelligent A.I. remains compliant with humans and does good works, such as advancing medicine? To raise it in a way that’s imbued with compassion and understanding, says Goertzel. One way to limit “people doing bad things out of frustration,” he suggests, is for the entire world to be plugged into the A.I. economy so that developers, from whatever country, can monetize their code. ———————————————————————————- BEN GOERTZEL Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; Chairman of AI software company Novamente LLC; Chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence. ———————————————————————————- TRANSCRIPT: BEN GOERTZEL: We can have no guarantee that a super intelligent AI is going to do what we want. Once we’re creating something ten, a hundred, a thousand, a million times more intelligent than we are it would be insane to think that we could really like rigorously control what it [More]
Winter Intelligence 2012, Oxford University. Video thanks to Adam Ford. Extended Abstract: The gradually increasing sophistication of semi-autonomous and autonomous robots and virtual agents has led some scholars to propose constraining these systems’ behaviors with programmed ethical principles (“machine ethics”).
While impressive machine ethics theories and prototypes have been developed for narrow domains, several factors will likely prevent machine ethics from ensuring positive outcomes from advanced, cross-domain autonomous systems. This paper critically reviews existing approaches to machine ethics in general and Friendly AI in particular (an approach to constraining the actions of future self-improving AI systems favored by the Singularity Institute for Artificial Intelligence), finding that while such approaches may be useful for guiding the behavior of some semi-autonomous and autonomous systems in some contexts, these projects cannot succeed in guaranteeing ethical behavior and may introduce new risks inadvertently. Moreover, while some incarnation of machine ethics may be necessary for ensuring positive social outcomes from artificial intelligence and robotics, it will not be sufficient, since other social and technical measures will also be critically important for realizing positive outcomes from these emerging technologies. Building an ethical autonomous machine requires a decision on the part of the system designer as to which ethical framework to implement. Unfortunately, there are currently no fully-articulated moral theories that can plausibly be realized in an autonomous system, in part because the moral intuitions that ethicists attempt to systematize are not, in fact, consistent across all domains. Unified ethical theories are all either too [More]
Mark Zuckerberg’s presentation of his Jarvis A.I. is more robotic than the house itself. Xiaoice chatbot has millions of Chinese men falling in love with it. Amazon will teach your kids to say “please” and “thank you.” TomoNews covers the funniest, craziest and most talked-about stories on the internet, animated so you see news like you’ve never seen before.