The “AI Forest” is an artificial indoor forest and interactive game environment that was first presented at the Ars Electronica Festival 2021 in Linz, Austria. Visitors were tasked with finding ten haptic mushroom models in the bushes and scanning them with a tablet, filling a virtual basket with as many different edible species as possible. An AI-based mushroom identification app, trained on a large number of photos of forest mushrooms, assisted the players with advice: its purpose was to identify mushroom species and classify them as edible or poisonous. Beyond its playful character, the AI Forest served as an innovative research environment for investigating questions about human decision-making in teamwork with AI and about methods of explainable artificial intelligence. This video gives an overview of the installation and our scientific user study on human decision-making with AI assistance, along with impressions of the festival. The idea for the AI Forest originated in the research project “HOXAI – Hands-on Explainable AI”, a collaboration between the LIT Robopsychology Lab and the Visual Data Science Lab at JKU Linz. For more details on our study, read our preprint. Citation: Leichtmann, B., Hinterreiter, A., Humer, C., Streit, M., & Mara, M. (2022, September 21). Explainable Artificial Intelligence improves human decision-making: Results from a mushroom picking experiment at a public art festival. Retrieved from If you talk about our research in your work, please make sure to give appropriate credit.
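To illustrate the kind of decision support described above, here is a minimal sketch of how an identification app might turn a classifier's raw scores into edibility advice. All species names, the confidence threshold, and the `advise` function are hypothetical and purely illustrative; they are not part of the actual HOXAI app.

```python
import math

# Illustrative species list and edibility lookup (not the real app's data).
SPECIES = ["Boletus edulis", "Amanita phalloides", "Cantharellus cibarius"]
EDIBLE = {
    "Boletus edulis": True,        # porcini, edible
    "Amanita phalloides": False,   # death cap, deadly poisonous
    "Cantharellus cibarius": True, # chanterelle, edible
}

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def advise(logits, threshold=0.8):
    """Map model outputs to advice; below the (assumed) confidence
    threshold, refuse to recommend picking at all."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    species = SPECIES[best]
    if probs[best] < threshold:
        return species, "uncertain - do not pick"
    return species, "edible" if EDIBLE[species] else "poisonous"
```

A cautious refusal below the confidence threshold reflects the safety-critical nature of the task: a wrong "edible" label is far more costly than a wrong "poisonous" one.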
Jean-François Bonnefon, Toulouse School of Economics, delivered a keynote, “The Moral Machine Experiment”, at IJCAI-ECAI 2018, the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, the premier international gathering of researchers in AI.
Prof. Edmond Awad (Institute for Data Science and Artificial Intelligence at the University of Exeter) Abstract: I describe the Moral Machine, an internet-based serious game exploring the many-dimensional ethical dilemmas faced by autonomous vehicles. The game enabled us to gather 40 million decisions from 3 million people in 200 countries/territories. I report the various preferences estimated from this data and document interpersonal differences in the strength of these preferences. I also report cross-cultural ethical variation and uncover major clusters of countries exhibiting substantial differences along key moral preferences. These differences correlate with modern institutions, but also with deep cultural traits. I discuss how these three layers of preferences can help progress toward global, harmonious, and socially acceptable principles for machine ethics. Finally, I describe other follow-up work that builds on this project. Bio: Edmond Awad is a Lecturer (Assistant Professor) in the Department of Economics and the Institute for Data Science and Artificial Intelligence at the University of Exeter. He is also an Associate Research Scientist at the Max Planck Institute for Human Development and a Founding Editorial Board member of the AI and Ethics journal, published by Springer. Before joining the University of Exeter, Edmond was a Postdoctoral Associate at the MIT Media Lab (2017–2019). In 2016, Edmond led the design and development of Moral Machine, a website that gathers human decisions on moral dilemmas faced by driverless cars. The website has been visited by over 4 million users, who have contributed their judgements on 70 million dilemmas.