Cognitive Agents – Episode 1: AI vs. AGI vs. ASI – From Smart Tools to Superintelligence
In this episode, we explore the world of AI—its current applications, practical shortcomings, and real-world impact. Then we dive into the differences between AI and AGI, discuss how AGI could be applied, and examine the potential risks it poses. Finally, we take a look at ASI and consider whether superintelligent systems could one day become reality.
00:00 - Introduction
02:10 - Are humans intelligent?
03:54 - Is AI intelligent?
06:06 - What is AI?
07:34 - What are the different types of AI?
09:34 - What is autoregression?
11:45 - Are autopilot car systems safe?
13:43 - Predictive AI systems
14:49 - Are chess and Go AI systems predictive?
17:06 - Why AI systems fail at emotional responses
18:10 - Large Language Models (LLMs) – ChatGPT and its counterparts
22:39 - How does ChatGPT or any other LLM answer the question "Who is Einstein?"
28:00 - Why LLMs like ChatGPT give clichéd answers
30:27 - Can Large Language Models (LLMs) truly understand the world in any meaningful way?
32:29 - What is the training phase of LLMs?
34:45 - On a scale of 1 to 10, how intelligent is AI?
35:59 - What is Artificial General Intelligence (AGI)?
38:21 - Would AGI think like a human?
42:39 - What are the purposes of AI and AGI?
44:32 - How does the training phase of AGI differ from that of AI?
47:13 - What are the potential applications of AGI?
50:03 - The misuse of AI
53:07 - What is ASI (Artificial Superintelligence)?
57:30 - Could AGI prevent humans from shutting it down?
58:54 - Could AGI come to regret the invention of Artificial Superintelligence (ASI)?
Hosts:
Mudar Adas is a PhD researcher in the Neuro-Cognitive Modeling Lab at the University of Tübingen. His research focuses on understanding how intelligent agents—both consciously and unconsciously—protect themselves in daily interactions, and on developing cognitive models to predict and simulate these behaviors more effectively. Beyond his scientific work, Mudar is also an accomplished novelist, having written and published three books. His dual passion for science and storytelling allows him to bring a unique and engaging perspective to the podcast.
Prof. Martin V. Butz is the head of the Neuro-Cognitive Modeling Lab at the University of Tübingen. A renowned cognitive scientist, Martin’s research centers on predictive brain mechanisms, embodied intelligence, and neuro-cognitive modeling. He is widely recognized for his work on how brains learn to anticipate, plan, and interact with the environment. Martin has authored numerous publications in the field and has a deep interest in bridging the gap between theoretical understanding and practical applications of intelligence. His expertise in AI, combined with his fascination for human cognition, makes him an ideal co-host for this intellectually stimulating podcast.
Produced by:
University of Tübingen
Faculty of Science
Neuro-Cognitive Modeling Lab
© 2025
#ArtificialIntelligence #AGI #ASI #MachineLearning #Superintelligence #FutureOfAI #AIApplications #AIEthics #AIResearch #AIvsAGI #CognitiveScience #Neuroscience #MindAndMachine #Cognition #HumanMind #CognitiveNeuroscience #PhilosophyOfMind #ConsciousnessStudies #UniversityOfTuebingen #TuebingenUniversity #UniTuebingen #Tuebingen #AIResearchTuebingen #TuebingenScience #GermanUniversities #ResearchInGermany #AI