Visit http://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Artificial intelligence is all around us … and the future will only bring more of it. How can we ensure the AI systems we build are responsible, safe and sustainable? Ethical AI expert Genevieve Bell shares six framing questions to broaden our understanding of future technology — and create the next generation of critical thinkers and doers.

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. You’re welcome to link to or embed these videos, forward them to others and share these ideas with people you know.

Follow TED on Twitter: http://twitter.com/TEDTalks
Like TED on Facebook: http://facebook.com/TED
Subscribe to our channel: http://youtube.com/TED

TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy (https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy). For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com
Olivia Gambelin is an AI Ethicist and the Founder of Ethical Intelligence, working to bring ethical analysis into tech development to create human-centric innovation. As Chief Executive Officer of Ethical Intelligence, she leads a remote team of over thirty experts in the #TechEthics field. Olivia is the latest guest in the Dinis Guarda citiesabc openbusinesscouncil YouTube Series. Hosted by Dinis Guarda.

Olivia Gambelin Interview Questions
1. An introduction from you – background, overview, education…
2. Education background in academia and industry?
3. MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars.
4. Can you tell us about your company Ethical Intelligence?
5. When we look at the evolution of AI technology and its multiple challenges, how do you look at it from a philosophical perspective?
6. How do you look at the grey areas of ethics around technology and AI?

Olivia Gambelin Biography
Besides her role as the founder of Ethical Intelligence and #AIEthicist, she is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates and is on the Founding Editorial Board of Springer Nature’s AI and Ethics Journal. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University.

Olivia Gambelin Links & Resources
https://www.linkedin.com/in/oliviagambelin/
www.oliviagambelin.com
https://www.icgn.org/speakers/olivia-gambelin-founder-ceo-ethical-intelligence#:~:text=Olivia%20is%20an%20AI%20Ethicist,technological%20solutions%20we%20can%20trust.
https://www.machine-ethics.net/podcast/probability-moral-responsibility-with-olivia-gambelin/
https://dma.org.uk/article/covid-19-contact-tracing-app-series-olivia-gambelin
https://technation.io/news/ethical-intelligence-what-does-an-ai-ethicist-do-anyway/
https://essentials.news/ai/ethics/article/42-probability-moral-responsibility-olivia-gambelin-1e576e6670
https://twitter.com/oliviagambelin?lang=en
https://www.instagram.com/oliveyou316/?hl=en
Over the past year, discourse about the ethical risks of machine learning has largely shifted from speculative fear about rogue superintelligent systems to critical examination of machine learning’s propensity to exacerbate patterns of discrimination in society. This talk explains how and why bias creeps into supervised machine learning systems and proposes a framework businesses can apply to hold algorithmic systems accountable in a way that is meaningful to the people those systems impact. You’ll learn why it’s important to consider bias throughout the entire machine learning product lifecycle (not just algorithms), how to assess tradeoffs between accuracy and explainability, and what technical solutions are available to reduce bias and promote fairness.
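As a concrete illustration of the kind of technical solution the talk alludes to, here is a minimal sketch (our example, not from the talk) of one widely used group-fairness measure, the demographic parity gap: the difference in positive-prediction rates between two groups. The function name and the synthetic predictions are our own assumptions for illustration.

```python
# Toy illustration of one group-fairness metric: the demographic parity gap,
# i.e. the absolute difference in positive-prediction rates between two groups.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between exactly two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)  # fraction of positive predictions
    a, b = rates.values()
    return abs(a - b)

# A hypothetical model approves (1) or rejects (0) applicants from groups A and B.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)  # |3/4 - 1/4| = 0.5
```

A gap near zero means both groups receive positive outcomes at similar rates; auditing a metric like this across the product lifecycle, not just at training time, is one way to make accountability measurable.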
Artificial intelligence provides a range of new opportunities in day-to-day life, but what are the downfalls? Frank Rudzicz is an artificial intelligence researcher at the University of Toronto and at the Vector Institute. The Vector Institute is an independent, not-for-profit corporation dedicated to research in the field of AI, with a special focus on machine learning and deep learning. ABOUT MaRS DISCOVERY DISTRICT MaRS is the world’s largest innovation hub, located in Toronto. We support impact-driven startups in health, cleantech, fintech and enterprise. ————————————————————— ▶ SUBSCRIBE TO OUR NEWSLETTER ▶ https://marsdd.it/2BBqDoC ————————————————————— FOLLOW MaRS ▹ INSTAGRAM ‣ http://instagram.com/marsdiscoverydistrict TWITTER ‣ http://twitter.com/marsdd LINKEDIN ‣ https://www.linkedin.com/company/mars-discovery-district FACEBOOK ‣ http://facebook.com/marsdiscoverydistrict
CAREER PATHWAY – A MALAYALAM VLOG FOR CAREER, PLACEMENT & JOBS

To find job opportunities, please visit my job portal: https://jobsbrij.com/
To learn more about Cyber Security / Ethical Hacking etc., please contact School of Cyber Defense – 8410393333, http://schoolofcyberdefense.com/, or fill in this form: https://forms.gle/ibxQgkuocBkkqxEBA

To attend free classes by Dr Brijesh on 1) Online money making (earn a great income online), 2) Success mantras for your dream job, and many other courses conducted on weekends, register at http://brijeshacademy.com/

To join the Telegram jobs group “Jobs by Prof Brijesh” for free, click this link: https://t.me/jobsbyproffbrijesh
To join the WhatsApp jobs group “Jobs by Prof Brijesh” for free, click this link: https://chat.whatsapp.com/IEGQVpB0PGC6sRu5i5y0Z2 or https://chat.whatsapp.com/EwTKboPaSx7EWhzFmFRy8d

For business enquiries: Mail: firstname.lastname@example.org Linkedin: https://www.linkedin.com/in/brijesh-john-20010615/ Facebook: https://www.facebook.com/brijeshisonline

In this video Dr Brijesh explains Cyber Security and Ethical Hacking, and mentions the CEH – Certified Ethical Hacker certification.

To win the giveaway from Dr Brijesh (Redmi 8A smartphone, Bombay Dyeing bedsheets, Srinz college bags), like, share and comment on the videos published on this channel until Diwali, and subscribe to this channel. To further improve your chances of winning, register for free on our job site https://jobsbrij.com/, and like, follow, share and comment on our social media accounts on Facebook, Instagram and LinkedIn. You can also improve your chances by subscribing to and commenting on our other channels, Jobs Pathway and See The World. To subscribe to our travel vlog (See The World) of Dr. Brijesh, pl
Speaker: Toby Walsh The AI Revolution will transform our political, social and economic systems. It will impact not just the workplace, but many other areas of our society, such as politics and education. There are many ethical challenges ahead in ensuring that machines are fair, transparent and trustworthy, protect our privacy, and respect many other fundamental rights. Education is likely to be one of the main tools available to prepare for this future. Toby Walsh, Scientia Professor of Artificial Intelligence at Data61, University of New South Wales, will argue that a successful society will be one that embraces the opportunity these technologies promise, but at the same time prepares and helps its citizens through this time of immense change. Join him in this session, which aims to stimulate debate and discussion about AI, education and 21st century skill needs. www.oeb.global
How dangerous could artificial intelligence turn out to be, and how do we develop ethical AI? Risk Bites dives into AI risk and AI ethics, with ten potential risks of AI we should probably be paying attention to now, if we want to develop the technology safely, ethically, and beneficially, while avoiding the dangers. With author of Films from the Future and ASU professor Andrew Maynard. Although the video doesn’t include the jargon usually associated with AI risk and responsible innovation, the ten risks listed address:

0:00 Introduction
1:07 Technological dependency
1:25 Job replacement and redistribution
1:43 Algorithmic bias
2:03 Non-transparent decision making
2:27 Value misalignment
2:44 Lethal autonomous weapons
2:59 Re-writable goals
3:11 Unintended consequences of goals and decisions
3:31 Existential risk from superintelligence
3:51 Heuristic manipulation

There are many other potential risks associated with AI, but as always with risk, the more important questions are associated with the nature, context, type of impact, and magnitude of impact of the risks, together with relevant benefits and tradeoffs. The video is part of the Risk Bites series on Public Interest Technology – technology in the service of public good. #AI #risk #safety #ethics #aiethics

USEFUL LINKS
AI Asilomar Principles: https://futureoflife.org/ai-principles/
Future of Life Institute: https://futureoflife.org/
Stuart Russell: Yes, We Are Worried About the Existential Risk of Artificial Intelligence (MIT Technology Review): https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/
We Might Be Able to 3-D-Print an Artificial Mind One Day (Slate Future Tense): http://www.slate.com/blogs/future_tense/2014/12/11/_3d_printing_an_artificial_mind_might_be_possible_one_day.html
The Fourth Industrial Revolution: what it means, how to respond. Klaus Schwab (2016): https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond
ASU
Today, we’re chatting with AI expert, activist, and author Toby Walsh. Toby is a leading researcher in Artificial Intelligence. Dubbed a “rock star” of Australia’s digital revolution by The Australian newspaper, Toby certainly has the backing of a long list of credentials to warrant the title. He is Scientia Professor of Artificial Intelligence at UNSW, leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research, and is Guest Professor at TU Berlin. He has been elected a fellow of the Australian Academy of Science and has won the prestigious Humboldt research award as well as the NSW Premier’s Prize for Excellence in Engineering and ICT. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden. During the discussion, Toby opens with his thoughts on the future of AI, when and how it could surpass humans, and how little we know about how it could impact our jobs. He also discusses with us other ways AI can impact society for better or for worse. How can our data privacy impact AI and how can microtargeting using this data change the course of history? What ethics should we stand by? Who is responsible for decisions made by autonomous machines? And could government regulation actually help—rather than hinder—innovation?

– Insight into Toby’s most recent book, “2062: The World That AI Made”, touted as the book to read, bar none, on AI and society.
– Will AI have consciousness? Is consciousness a biological construct?
View full lesson: http://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin Self-driving cars are already cruising the streets today. And while these cars will ultimately be safer and cleaner than their manual counterparts, they can’t avoid accidents altogether. How should the car be programmed if it encounters an unavoidable accident? Patrick Lin navigates the murky ethics of self-driving cars. Lesson by Patrick Lin, animation by Yukai Du.
Artificial Intelligence (AI) technology poses serious ethical risks to individuals and society. Cansu Canca, Philosopher and AI Ethics Lab’s Founder & Director, explains how we can deal with these risks more effectively if we approach them as puzzles and solve them using tools from applied philosophy. Learn more at http://www.tedxcambridge.com Cansu Canca is a philosopher and the founder and director of the AI Ethics Lab. She leads teams of computer scientists, philosophers, and legal scholars to provide ethics analysis and guidance to researchers and practitioners. She holds a Ph.D. in philosophy from the National University of Singapore specializing in applied ethics. Her area of work is in the ethics of technology and population-level bioethics with an interest in policy questions. Prior to the AI Ethics Lab, she was a lecturer at the University of Hong Kong, and a researcher at the Harvard Law School, Harvard School of Public Health, Harvard Medical School, Osaka University, and the World Health Organization. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
Ethics of AI Lab Centre for Ethics, University of Toronto, March 20, 2018 http://ethics.utoronto.ca Kathryn Hume, integrate.ai
Self-driving cars are already cruising the streets. Fully autonomous vehicles have the potential to benefit our world by increasing traffic efficiency, reducing pollution, and, above all, eliminating up to 90% of traffic accidents. Not all crashes will be avoided, though, and some crashes will require AVs to make difficult ethical decisions in cases that involve unavoidable harm. For example, the AV may avoid harming several pedestrians by swerving and sacrificing a passerby, or the AV may be faced with the choice of sacrificing its own passenger to save one or more pedestrians. What would you do in a situation like that? How can a person decide between two really bad options? We will explore this disturbing ethical dilemma through Bentham and Kant’s philosophies and we will seek some insights on our true inner ethics by examining some of the scientific research on the topic.

In this video:
1. The Trolley problem 0:00
2. Autonomous vehicles – potential benefits and problems 0:53
3. Iyad Rahwan’s research: Bentham and Kant’s philosophies 2:27
4. Immersive virtual reality study 3:56
5. Problems with the “value of life” approach 4:54
6. The big questions ahead of us 5:39

Interested in futuristically oriented content? Check our previous video exploring the question of whether we should expect mind-blowing changes in the near future (http://y2u.be/HfM5HXpfnJQ). If you want to see more from us, SUBSCRIBE to our channel: http://bit.ly/2oBn4bL and find us on: FACEBOOK: https://www.facebook.com/luscid/ TWITTER: https://twitter.com/LuscidChannel Research and script: Irina Georgieva Art, editing and narration: Daniel Stamenov
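The contrast the video draws between Bentham and Kant can be sketched in code. The following toy example (our illustration, not from the video; all names and numbers are hypothetical) contrasts a utilitarian rule that minimizes expected casualties with a deontological constraint that refuses any option actively sacrificing a person as a means:

```python
# Toy contrast of two ethical decision rules for an AV's unavoidable-crash choice.

def utilitarian_choice(options):
    """Bentham-style rule: pick the option with the fewest expected casualties."""
    return min(options, key=lambda o: o["casualties"])

def deontological_choice(options):
    """Kant-style constraint: never pick an option that actively sacrifices
    someone as a means; among the rest, keep the default (first) option."""
    permitted = [o for o in options if not o["actively_sacrifices"]]
    return permitted[0] if permitted else None

# Hypothetical dilemma: stay on course and hit three pedestrians,
# or swerve and sacrifice one uninvolved passerby.
options = [
    {"name": "stay course", "casualties": 3, "actively_sacrifices": False},
    {"name": "swerve into passerby", "casualties": 1, "actively_sacrifices": True},
]

utilitarian_choice(options)["name"]    # "swerve into passerby"
deontological_choice(options)["name"]  # "stay course"
```

The two rules disagree on the same inputs, which is exactly the dilemma: a real AV policy would have to commit to one trade-off (or some blend) in advance.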
Learn from Googlers who are working to ensure that a robust framework for ethical AI principles is in place, and that Google’s products do not amplify or propagate unfair bias, stereotyping, or prejudice. Hear about the research they are doing to evolve artificial intelligence towards positive goals: from accountability in the ethical deployment of AI, to the tools needed to actually build them, and advocating for the inclusion of concepts such as race, gender, and justice as part of the process. Watch more #io19 here: Inspiration at Google I/O 2019 Playlist → https://goo.gle/2LkBwCF TensorFlow at Google I/O 2019 Playlist → http://bit.ly/2GW7ZJM Google I/O 2019 All Sessions Playlist → https://goo.gle/io19allsessions Learn more on the I/O Website → https://google.com/io Subscribe to the TensorFlow Channel → https://bit.ly/TensorFlow1 Get started at → https://www.tensorflow.org/ Speaker(s): Jen Gennai, Margaret Mitchell, Jamila Smith-Loud TC3A01
Fascinating discussion on ethical progress and AI.

Points:
– Future directions in ethical progress
– Concern for things we cannot regulate
– The ultimate utility function: maximizing neg-entropy?

Many thanks for watching! Consider supporting SciFuture by: a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture b) Donating – Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22 – Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b – Patreon: https://www.patreon.com/scifuture c) Sharing the media SciFuture creates: http://scifuture.org Kind regards, Adam Ford – Science, Technology & the Future
The inspiration for Kelly McGillis’ character in Top Gun, Christine Fox is the Assistant Director for Policy and Analysis of the Johns Hopkins University Applied Physics Laboratory. Prior to joining APL, she served as Acting Deputy Secretary of Defense from December 2013 to May 2014, making her the highest-ranking female official in history to serve in the Department of Defense. Ms. Fox is a three-time recipient of the Department of Defense Distinguished Service Medal. She has also been awarded the Department of the Army’s Decoration for Distinguished Civilian Service. Ms. Fox currently serves on the Board of Trustees for the Woods Hole Oceanographic Institution, the Board on Mathematical Sciences and their Applications (BMSA) at the National Research Council, and is a member of the Council on Foreign Relations. With nearly 6,000 staff at what is the nation’s largest University Affiliated Research Center, Johns Hopkins APL makes critical contributions to a wide variety of national and global technical and scientific challenges. As the Director of Policy and Analysis, Ms. Fox leads efforts to increase APL’s engagement on technical policy issues and directs research and analysis projects on behalf of the Department of Defense, the intelligence community, the National Aeronautics and Space Administration, and other federal agencies. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx
Ramana Polavarapu is a Vice-President at Goldman Sachs, India, and the team lead of Artificial Intelligence. He is also a Machine Learning scientist and holds a PhD in Economics from The University of California. With the speculation of morally flawed artificial intelligence pervading mainstream discussion today, Ramana Polavarapu uses plenty of familiar examples to pick apart the fears of a technological singularity, how far away we are from one, and, most importantly, whether we’re really even going there. Exploring the present day of intelligent technology disrupting ordinary life, he discusses possible preliminary steps to test an AI for ethical conformance, and how we could all benefit from being vigilant, regardless of an imminent AI takeover of the world. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx