Inspired by a talk Joscha Bach gave at the Human-Level AI Conference 2018, GoodAI decided to conduct an informal interview with him to explore some of his ideas further. We teamed up with AI philosopher Jan Romportl, who conducted the “interview”, and the result was a fascinating discussion.

They discussed topics such as the singularity, morality and ethics, and the development of intelligent life, viewing the universe as a search process directed by physics and optimized first by evolution and then by intelligence. Some of the questions they tackle include:

– How can we convince AI to share a purpose with us?

– How did Marvin Minsky cause such a rift in the AI field?

– Will global warming wipe us out before we create human-level AI?

– What can we learn about the nature of emotions from the science of AI?

We hope you enjoy the discussion as much as we did!

Joscha Bach is a Cognitive Scientist at the Harvard Program for Evolutionary Dynamics. He is interested in finding out how the mind works, and is intensely curious about many domains, including physics, technology, politics and macroeconomics.

Jan Romportl is Chief Data Scientist at O2 Czech Republic as well as Chief Science Officer at the AI Startup Incubator. His current research interests are in the fields of the Anthropocene and the safety of artificial intelligence.

Read the full article here – https://clickdotme.com/click-life/the-ethics-of-artificial-intelligence/

Music: Cause and Effect (no lead) 03

Winter Intelligence 2012, Oxford University – http://www.winterintelligence.org/

Video thanks to Adam Ford, http://www.youtube.com/user/TheRationalFuture

Extended Abstract: The gradually increasing sophistication of semi-autonomous and autonomous robots and virtual agents has led some scholars to propose constraining these systems’ behaviors with programmed ethical principles (“machine ethics”). While impressive machine ethics theories and prototypes have been developed for narrow domains, several factors will likely prevent machine ethics from ensuring positive outcomes from advanced, cross-domain autonomous systems. This paper critically reviews existing approaches to machine ethics in general and Friendly AI in particular (an approach to constraining the actions of future self-improving AI systems favored by the Singularity Institute for Artificial Intelligence), finding that while such approaches may be useful for guiding the behavior of some semi-autonomous and autonomous systems in some contexts, these projects cannot succeed in guaranteeing ethical behavior and may introduce new risks inadvertently. Moreover, while some incarnation of machine ethics may be necessary for ensuring positive social outcomes from artificial intelligence and robotics, it will not be sufficient, since other social and technical measures will also be critically important for realizing positive outcomes from these emerging technologies.
Building an ethical autonomous machine requires a decision on the part of the system designer as to which ethical framework to implement. Unfortunately, there are currently no fully-articulated moral theories that can plausibly be realized in an autonomous system, in part because the moral intuitions that ethicists attempt to systematize are not, in fact, consistent across all domains. Unified ethical theories are all either too vague to be computationally tractable or vulnerable to compelling counter-examples, or both. [1,2] Recent neuroscience research suggests that, depending on the context of a given decision, we rely to varying extents on an intuitive, roughly deontological (means-based) moral system and on a more reflective, roughly consequentialist (ends-based) moral system, which in part explains the aforementioned tensions in moral philosophy. [3] While the normative significance of conflicting moral intuitions can be disputed, these findings at least have implications for the viability of building a machine whose moral system would be acceptable to most humans across all domains, particularly given the need for ensuring the internal consistency of a system’s programming. Should an unanticipated situation arise, or should the system be used outside its prescribed domain, negative consequences would likely result due to the inherent fragility of rule-based systems.
Moreover, the complex and uncertain relationship between actions and consequences in the world means that an autonomous system (or, indeed, a human) with an ethical framework that is (at least partially) consequentialist cannot be relied upon with full confidence in any non-trivial task domain, suggesting the practical need for context-appropriate heuristics and great caution in ensuring that moral decision-making in society does not become overly centralized.[4] The intrinsic complexity and uncertainty of the world, along with other constraints such as the inability to gather the necessary data, also doom approaches (such as Friendly AI) that seek to derive a system’s utility function from extrapolation of humans’ preferences. There is also a risk that the logical implications derived from premises in a given ethical system may not be what humans working on machine ethics principles believe them to be (this is one of the categories of machine ethics risks highlighted in Isaac Asimov’s work[5]). In other words, machine ethicists are caught in a double-bind: they must either depend on rigid principles for addressing particular ethical issues, and thus risk catastrophic outcomes when those rules should in fact be broken[6], or allow an autonomous system to reason from first principles or derive its utility function in an evolutionary fashion, and thereby risk the possibility that it will arrive at conclusions that the designer would not have initially consented to. Lastly, even breakthroughs in normative ethics would not ensure positive outcomes from the deployment of explicitly ethical autonomous systems. Several factors besides machine ethics proper — such as ensuring that autonomous systems are robust against hacking, developing appropriate social norms and policies for ensuring ethical behavior by those involved in developing and using autonomous systems, and the systemic risks that could arise from dependence on ubiquitous intelligent machines — are briefly described and suggested as areas for further research in light of the intrinsic limitations of machine ethics.
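As a rough illustration of the rule-fragility arm of this double-bind, here is a toy sketch (not from the paper; the scenarios, rule, and harm values are all invented) comparing a rigid rule-following agent with a proxy-utility agent inside and outside the rule's anticipated domain:

```python
# Toy illustration of rule fragility vs. a proxy utility (all values invented).
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    breaks_rule: bool       # does the candidate action violate the hard-coded rule?
    harm_if_act: float      # harm caused if the agent takes the action
    harm_if_refuse: float   # harm caused if the agent refuses to act

scenarios = [
    Scenario("anticipated case: refusing the rule-breaking action is harmless", True, 1.0, 0.0),
    Scenario("emergency: the rule-breaking action prevents much greater harm", True, 1.0, 50.0),
]

def rule_based_agent(s: Scenario) -> str:
    # Rigid deontological constraint: never take an action that breaks the rule.
    return "refuse" if s.breaks_rule else "act"

def proxy_utility_agent(s: Scenario) -> str:
    # Consequentialist agent minimising harm according to its (possibly wrong) estimates.
    return "act" if s.harm_if_act < s.harm_if_refuse else "refuse"

for s in scenarios:
    for label, agent in (("rule-based", rule_based_agent), ("proxy-utility", proxy_utility_agent)):
        choice = agent(s)
        harm = s.harm_if_act if choice == "act" else s.harm_if_refuse
        print(f"{s.name} | {label}: {choice} (resulting harm {harm})")
```

The rigid agent behaves acceptably in the anticipated case but refuses to act in the emergency where breaking the rule would prevent far greater harm; the proxy-utility agent avoids that failure only to the extent that its harm estimates happen to be correct, which is the other arm of the bind.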

Panelists: Ben Goertzel, Marcus Hutter, Joscha Bach, Peter Cheeseman
This panel was held at the ‘Artificial Intelligence / Human Possibilities’ event, an adjunct to the AGI17 conference in Melbourne in 2017.

Assessing emerging risks and opportunities in machine cognition

With AI Experts Ben Goertzel, Marcus Hutter, Peter Cheeseman and Joscha Bach.

Event Focus:
Given significant developments in Artificial Intelligence, it’s worth asking: What aspects of ideal AI have not been achieved yet?
There is good reason for the growing media storm around AI – many experts agree on the big picture that with the development of superintelligent AI (including Artificial General Intelligence) humanity will face great challenges (some polls suggest that AGI is not far off). In order to best manage both the opportunities and the risks, we need a clearer picture – this requires sensitivity to ambiguity, precision of expression, and attention to theoretical detail in understanding the implications of AI, communicating and discussing AI, and ultimately engineering beneficial AI.

Meetup details: https://www.meetup.com/Science-Technology-and-the-Future/events/242163071/

Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
– Science, Technology & the Future

Ethics for artificial intelligence – not so straightforward.
Click to subscribe! ► http://bit.ly/Scopes_Sub

Full agenda with time tags below. Now available on your favorite podcast platform and at:

https://EEsTalkTech.com (an electrical engineering podcast)

Hosted by Mike Hoffman and Daniel Bogdanoff (@Keysight_Daniel), EEs Talk Tech is a twice-monthly engineering podcast dedicated to discussing technology and tech news from an engineer’s perspective. This episode’s guest is Brig Asay.

New episodes available on the 2nd and 4th week of every month.

EEs Talk Tech Podcast Blog:
http://bit.ly/EEsTalkTech

Check out our blog:
http://bit.ly/ScopesBlog

Like our Facebook page:
https://www.facebook.com/keysightbench/

Learn more about using oscilloscopes:
http://oscilloscopelearningcenter.com

More about Keysight oscilloscopes:
http://bit.ly/SCOPES

The 2-Minute Guru Season 2 playlist:
https://www.youtube.com/playlist?list=PLzHyxysSubUlqBguuVZCeNn47GSK8rcso

The 2-Minute Guru Season 1 playlist:
https://www.youtube.com/playlist?list=PLzHyxysSubUkc5nurngzgkd2ZxJsHdJAb

Discussion Overview:

AI Ethics 01:25
Restaurant Reviews by AI 01:41
Self-driving (autonomous) cars 02:07
AI ethical dilemma 02:31
AI decision liability 03:10
Consumer liability 06:00
AI decision-making without human interaction 07:25

Three stages of AI 07:48
Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), Artificial Super Intelligence (ASI) 07:48
AI consciousness 08:10
AI standards 08:30
Self-replicating AI 09:10
Humanoid robots 09:45
Should AI be able to replicate itself? 10:16
Task-based AI (computers) 11:25

Programming AI to have morals and ethics 11:40
Prisoner’s dilemma 13:01

Marketing self-driving cars 13:36

Autonomous buses emulating human behavior 14:35
Should AI be bound to local laws and regulations? 17:50
Telemetry tracking autonomous vehicles for speed monitoring 20:08

Self-programmable FPGAs and neural networks 23:33

Can a computer be evil? 26:30

The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived in many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance – all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real-world impact of these systems due to the inherent biases they carry. For instance, criminal risk scoring used to determine bail for defendants in US district courts has been found to be biased against black people, and word embeddings have been shown to reaffirm gender stereotypes because of biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches that try to determine and combat bias in machine learning training data and the resulting algorithms.
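To make the word-embedding example concrete, here is a minimal sketch of the kind of bias analysis referred to above. The 4-dimensional vectors are invented for illustration; real studies use embeddings such as word2vec or GloVe trained on large corpora, and project occupation words onto a gender direction in the same way:

```python
# Toy word-embedding bias check with invented vectors (not real embeddings).
import numpy as np

# Hypothetical embeddings; in practice these come from a model trained on a large corpus.
emb = {
    "he":        np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":       np.array([-0.9, 0.1, 0.3, 0.0]),
    "engineer":  np.array([ 0.4, 0.8, 0.1, 0.2]),
    "nurse":     np.array([-0.5, 0.7, 0.2, 0.1]),
    "scientist": np.array([ 0.1, 0.9, 0.0, 0.3]),
}

# Direction separating "he" from "she" in the embedding space.
gender_direction = emb["he"] - emb["she"]
gender_direction /= np.linalg.norm(gender_direction)

for word in ("engineer", "nurse", "scientist"):
    v = emb[word] / np.linalg.norm(emb[word])
    score = float(v @ gender_direction)  # > 0 leans towards "he", < 0 towards "she"
    print(f"{word:10} gender projection: {score:+.2f}")
```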

Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society’s divides, such as the digital divide or income inequality?

This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields:

Keynotes:

– Prof. Moshe Vardi: “Deep Learning and the Crisis of Trust in Computing”
– Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists: Ethics and Bias in AI

– Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
– Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
– Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
– Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
– Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios

The evening will be complemented by networking & discussions over snacks and drinks.

More details: http://www.aiethics.cisvienna.com

Exclusive interview with Stuart Russell. He discusses the importance of achieving friendly AI – Strong AI that is provably (probably approximately) beneficial.

Points of discussion:
A clash of intuitions about the beneficiality of Strong Artificial Intelligence
– A clash of intuitions: Alan Turing raised the concern that if we were to build an AI smarter than we are, we might not be happy about the results, while there is a general notion amongst AI developers and others that building smarter-than-human AI would be good.
– But it is not clear that the objectives of superintelligent AI will be compatible with our values – so we need to solve what some people call the value alignment problem.
– we as humans learn values in conjunction with learning about the world

The Value Alignment problem

Basic AI Drives: Any objective generates sub-goals

– Designing an AI that does not want to disable its off switch
– Two principles:
– 1) its only objective is to maximise your reward function (this is not an objective programmed into the machine, but a kind of unobserved latent variable)
– 2) the machine has to be explicitly uncertain about what that objective is
– if the robot thinks it knows what your objective function is, then it won’t believe that it will make you unhappy and therefore has an incentive to disable the off switch
– the robot will only want to be switched off if it thinks it will make you unhappy

– How will the machines do what humans want if they can’t see the humans’ objective functions?
– one answer is to allow the machines to observe human behaviour, and interpret that behaviour as providing evidence of an underlying preference structure – inverse reinforcement learning (see the numerical sketch after this list)
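Below is a minimal numerical sketch of the off-switch argument summarised above (a toy model for illustration; the utility distribution and numbers are invented, not taken from the interview). A robot that is uncertain about the human's utility gains expected value by deferring to the human, while a robot that believes it already knows the objective sees no benefit in keeping its off switch enabled:

```python
# Toy off-switch comparison: acting regardless vs. deferring to the human.
import numpy as np

rng = np.random.default_rng(0)

def expected_values(utility_samples):
    """Compare 'act now / disable switch' against 'defer to the human'."""
    act_now = utility_samples.mean()               # robot acts regardless of the human
    defer = np.maximum(utility_samples, 0).mean()  # human switches the robot off when U < 0
    return act_now, defer

# Case 1: the robot is uncertain about the human's utility for its plan.
uncertain = rng.normal(loc=0.5, scale=2.0, size=100_000)
# Case 2: the robot is (over)confident it knows the human's utility exactly.
certain = np.full(100_000, 0.5)

for label, samples in [("uncertain about objective", uncertain),
                       ("certain about objective  ", certain)]:
    act, defer = expected_values(samples)
    print(f"{label}: E[act/disable switch] = {act:+.3f}, E[defer to human] = {defer:+.3f}")

# With uncertainty, E[defer] > E[act]: letting the human switch it off has positive value.
# With assumed certainty, deferring adds nothing, so the robot sees no reason
# to preserve the off switch, matching the argument in the interview notes.
```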

Aggregated Volition: How does an AI optimise for many people’s values?
– Has the benefit of symmetry
– difficulties in the commensurability of different human preferences
– Problem: If someone feels more strongly about a value X, should they get more of a share of value X? (see the toy sketch after this list)
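Here is a toy illustration of that commensurability problem (invented numbers, not from the interview): with a simple utilitarian sum, rescaling how strongly one person reports their preferences flips which option the aggregate picks, even though nobody's ranking has changed.

```python
# Toy preference aggregation: rescaling one person's utilities flips the result.
options = ["A", "B"]

alice = {"A": 1.0, "B": 0.0}   # Alice prefers A
bob   = {"A": 0.0, "B": 0.6}   # Bob prefers B

def aggregate(weight_bob=1.0):
    # Utilitarian sum, with Bob's reported intensities scaled by weight_bob.
    totals = {o: alice[o] + weight_bob * bob[o] for o in options}
    return max(totals, key=totals.get), totals

print(aggregate(weight_bob=1.0))  # ('A', ...): the plain sum picks A
print(aggregate(weight_bob=2.0))  # ('B', ...): if Bob "feels more strongly", the sum picks B
```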

How to deal with people whose preferences include the suffering of others?

Should a robot be more obligated to its owner than to the rest of the world?
– should this have something to do with how much you pay for the robot?

Moral philosophy will be a key industry sector

Issues of near term Narrow AI vs future Strong AI
– Very easy to confuse the near term killer robot question with the existential risk question

Differences in the issues with the risk of the misuse of Narrow AI and the risk of Strong AI
– Weaponised Narrow AI

Should we replace the gainful employment of humans with AI?

A future where humans lose a sense of meaning & dignity

Hostility to the idea of Superintelligence and AI Friendliness
– there seems to be something else going on when otherwise rational AI experts make arguments as simple-minded as ‘if the AI goes bad, just turn the AI off’
– beating AlphaGo is no problem – we just need to play better moves
– it’s theoretically possible that AI could pose existential risk – but it’s also possible that a black hole could appear in near-Earth orbit – we don’t spend any time worrying about that, so why should we spend time worrying about the existential risk of AI?

Defensive psychological reactions to feeling one’s research is under attack
– People proposing AI safety are not anti-AI any more than people wanting to contain a nuclear reaction are anti-physics

Provably beneficial AI
– where the AI system’s responsibility is to figure out what you want
– though the data to train the AI may sometimes be unrepresentative – leading to a small possibility of deviation from true beneficiality – probably approximately beneficial AI

Convincing the AI community that AI friendliness is important

Will there be a hard takeoff to superintelligence?

What are the benefits of building Strong AI?

Center for Human-Compatible AI – UC Berkeley
http://humancompatible.ai/

Stuart Jonathan Russell is a computer scientist known for his contributions to artificial intelligence. He is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco.
https://en.wikipedia.org/wiki/Stuart_J._Russell

Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
– Science, Technology & the Future

AI has been with us for hundreds of years; there’s no “singularity” step change. Joanna Bryson explains that the main threat of AI is not that it will do anything to us but what we are already doing to each other with it—predicting and manipulating our own and others’ behavior.

Subscribe to O’Reilly on YouTube: http://goo.gl/n3QSYi

Follow O’Reilly on:
Twitter: http://twitter.com/oreillymedia
Facebook: http://facebook.com/OReilly
Instagram: https://www.instagram.com/oreillymedia
LinkedIn: https://www.linkedin.com/company-beta/8459/

Toby discusses his early dreams of building thinking machines inspired by science fiction – and covers AI ethics and the current-to-near-term applicability of intelligent systems.

Toby Walsh is a leading researcher in Artificial Intelligence. He was recently named in the inaugural Knowledge Nation 100, the one hundred “rock stars” of Australia’s digital revolution. He is Guest Professor at TU Berlin, Scientia Professor of Artificial Intelligence at UNSW and leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research. He has been elected a fellow of the Australian Academy of Science, and has won the prestigious Humboldt research award as well as the 2016 NSW Premier’s Prize for Excellence in Engineering and ICT. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden.

He regularly appears in the media talking about the impact of AI and robotics. In the last year, he has appeared on TV and radio on the ABC, BBC, Channel 7, Channel 9, Channel 10, CCTV, DW, NPR, RT, SBS, and VOA, as well as on numerous local radio stations. He also writes frequently for print and online media. His work has appeared in the New Scientist, American Scientist, Le Scienze, Cosmos and The Best Writing in Mathematics (Princeton University Press). His Twitter account has been voted one of the top ten to follow to keep abreast of developments in AI. He often gives talks at public and trade events like CeBIT, the World Knowledge Forum, TEDx, The Next Big Thing Summit, and PauseFest. He has played a key role at the UN and elsewhere on the campaign to ban lethal autonomous weapons (aka “killer robots”).

Guest Professor, TU Berlin
Scientia Professor of AI, UNSW Sydney
Group Leader, Data61

Fellow of Australian Academy of Science
Fellow of Association for Advancement of AI
Fellow of European Association for AI

https://www.cse.unsw.edu.au/~tw/

Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
b) Donating via Patreon: https://www.patreon.com/scifuture and/or
c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
– Science, Technology & the Future

Professor Walsh is a “rock star” of Australia’s digital revolution and a leading researcher in Artificial Intelligence.

‘Bots behaving badly: In his TEDxBlighStreet talk, Toby speaks about ‘good old-fashioned bad behaviour’ in a thoroughly modern context, and the ethical implications of Artificial Intelligence.

He is Scientia Professor of Artificial Intelligence at UNSW, leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research, and is Guest Professor at TU Berlin. He has been elected a fellow of the Australian Academy of Science, and has won the prestigious Humboldt research award as well as the NSW Premier’s Prize for Excellence in Engineering and ICT. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Andra Keay is the Managing Director of Silicon Valley Robotics, an industry group supporting the innovation and commercialization of robotics technologies. She is also founder of Robot Launchpad for startups and co-founder of Robot Garden, a new robotics hackerspace.

On Wednesday December 6th, two teams of UTS academics and industry partners gathered at UTS for the hotly anticipated “Humans, Data, AI & Ethics – Great Debate”. The rhetorical battle raised the provocative proposition that:

“Humans have blown it: it’s time to turn the planet over to the machines”
The debate was preceded by our daytime Conversation which featured engaging panel discussions and Lightning Talks from UTS academics and partners in government and industry.

The debate took place in front of a large audience of colleagues and members of the public on the UTS Broadway campus. The Affirmative team (The Machines) argued that a productive relationship between humans and machines will help us to build a fairer, more efficient and more ecologically sustainable global society. Numerous examples of humanity’s gross dysfunction in governance and management were raised, from human-induced climate change to widening inequality and the recent election of unpredictable populist leaders. The team argued that finely (and ethically) tuned machines will help humans to solve these immense social and environmental challenges and maintain standards of equality, fairness and sustainability.

The Negative team (The Humans) cautioned against the rapid adoption of these hypothetical “ethical machines”, raising concerns about existing human prejudices and biases being built into AI. The team envisaged a dystopian world in which machines deny the possibility of human creativity, error or “happy accidents”, which have led to so many important moments of discovery throughout history. According to the Negative, there are also numerous social services which as yet cannot be performed by AI. Healthcare provision, for example, strongly depends on complex emotional intelligence, human tact and an ability to empathise and build rapport.

Ultimately, the Negative were adjudicated as the winner of the debate, to the relief of humanists and ethicists in attendance. The theatrical and good-humoured event was a rousing success, giving leading thinkers in the data science field an opportunity to flesh out challenging ideas surrounding data, AI, society and ethics in a responsive public forum.

Humans, Data, AI & Ethics – The Great Debate

Day 2 Session 1: Artificial Intelligence & Human Values

0:00 – David Chalmers: Opening Remarks
3:30 – Stuart Russell “Provably Beneficial AI”
37:00 – Eliezer Yudkowsky “Difficulties of AGI Alignment”
1:07:03 – Meia Chita-Tegmark and Max Tegmark “What We Should Want: Physics and Psychology Perspectives”
1:39:30 – Wendell Wallach “Moral Machines: From Machine Ethics to Value Alignment”
2:11:35 – Steve Petersen “Superintelligence as Superethical”
2:39:00 – Speaker panel

More info: https://wp.nyu.edu/consciousness/ethics-of-artificial-intelligence/

On October 14-15, 2016, the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics hosted a conference on “The Ethics of Artificial Intelligence”.

Recent progress in artificial intelligence (AI) makes questions about the ethics of AI more pressing than ever. Existing AI systems already raise numerous ethical issues: for example, machine classification systems raise questions about privacy and bias. AI systems in the near-term future raise many more issues: for example, autonomous vehicles and autonomous weapons raise questions about safety and moral responsibility. AI systems in the long-term future raise more issues in turn: for example, human-level artificial general intelligence systems raise questions about the moral status of the systems themselves.

This conference will explore these questions about the ethics of artificial intelligence and a number of other questions, including:

What ethical principles should AI researchers follow?
Are there restrictions on the ethical use of AI?
What is the best way to design AI that aligns with human values?
Is it possible or desirable to build moral principles into AI systems?
When AI systems cause benefits or harm, who is morally responsible?
Are AI systems themselves potential objects of moral concern?
What moral framework and value system is best used to assess the impact of AI?

In today’s ever-changing and growing world, artificial intelligence is quickly becoming more integrated within our everyday lives. What happens when we give total control to sentient machines? Leah Avakian discusses the possibilities of such a scenario, and how apps as simple as Siri could have dangerous real-world consequences. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Artificial intelligence: how can a computer learn to perceive our reality? What are the technological challenges of the future? This technological revolution will undoubtedly bring benefits and advantages, but it will also bring moral and social problems for which we still have to find solutions. Marco Cotrufo is originally from Puglia and has lived in Turin since 2015. He currently works in personnel training and recruitment, in developing software solutions for companies across Europe, and on everything related to new technologies in the software world. From 2009 to 2013 he worked on software platforms for public education, in collaboration with MIUR, in particular on tools for distance learning and on systems for school accessibility for pupils with difficulties. More recently he has taken an interest in artificial intelligence and the moral and social issues connected to it. He contributes to several Italian open-source projects and takes part in informal groups connected to the world of technology. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx