
It’s been 7 years since my first interview with Gary Marcus, and I felt it was time to catch up with him. Gary is the youngest Professor Emeritus at NYU, and I wanted to get his contrarian views on the major things that have happened in AI, as well as those that haven’t. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic about the current approaches and progress toward Artificial General Intelligence, yet an optimist that we will eventually figure it all out.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary’s interest in the human mind, natural and artificial intelligence; DeepMind’s victory in Go and what it does and doesn’t mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov’s Laws and Bostrom’s paper-clip-maximizing AI; the Turing Test and Ray Kurzweil’s singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.


Comments

Green Department says:

But AGI is impossible. A system cannot understand itself (see W. Edwards Deming and Gödel's incompleteness theorems). Humans cannot understand humans, let alone build an equivalent AGI.

fermigas says:

"nativist ideas are more his (Chomsky's) ideas than anyone's save Plato or Kant." A great way to sum it up. It would have surprised me in college (in the 80s) that ideas I learned in linguistics class had more staying power than many I learned in my upper division astronomy courses, never mind most of what I learned about CSE over the next decade or two.

Chandra Shekhar says:

1:22:00: Thanks for asking the India question, and thanks for asking about timelines. India is a varied country with 29 states, and every state is very different. If it can drive in India, it will be able to drive anywhere in the world. Reinforcement learning can help, or recursive learning in India will help.

Chandra Shekhar says:

The guy was good. 1.5 hours was hardly enough for him.

Chandra Shekhar says:

1:07:00: The timelines, 30-50 years. Thanks, Nikola.

Chandra Shekhar says:

53:00: A lot of things are coming together in terms of the Internet. It is now organic.

Chandra Shekhar says:

43:15: Intelligence is a multidimensional variable. Beautiful. Punchline. But my take is that we are slowly conquering each variable, and soon each variable will be getting exponential.

Chandra Shekhar says:

42:21: Timelines question. Thanks again.

Chandra Shekhar says:

29:23: Thanks for asking about the timelines of the Singularity.

Chandra Shekhar says:

22:54: Not able to understand exponentials.

Chandra Shekhar says:

9:00: Overhyped is only a function of time. As time passes, the over-hype becomes reality.

Chandra Shekhar says:

5:00: In 2012 he thought it would happen in 2025, but it happened in 2015. The world is exponential.

Skeptoptimist says:

It's great that we have people like Gary who will play devil's advocate and ground the many exaggerated views of AI out there. However, with regard to neural network alternatives: the brain consists of large vectors of neural activity; there are no symbols or symbolic expressions in the brain. So if we want to create a system inspired by the brain, neural networks must be the way to proceed. It's still early days in this field, there are plenty of new, exciting NN ideas that have yet to be explored, and anybody claiming they understand the emergent properties of deep learning/neural networks might as well say they fully understand quantum mechanics.

Walter White says:

The moderator seems to be a bit of a dark green posthuman; he agreed with the guest too much and only challenged him when he got optimistic about AI lol

Vaclav says:

Just realized I don't really like philosophers/sophists, psychologists and sociologists. So, being obviously biased, I haven't heard even one original point of view or a single practical suggestion leading to a solution of some problem. Just plenty of words. Yes, I know, he's an academic and a writer, so it's inevitable, but Nikola, please challenge your guests even if you agree with them. Anyway, thank you.

About AGI, it would be quite a deal to actually get one, but we don't need any. We have billions of GI units and they are pretty bad at just about everything, yes, even at driving on US highways (or at languages, like myself). It's called a human. (And I wish you could see truck drivers from Poland before you start to criticize Tesla's Autopilot.) We need specialized AIs and brain-computer interfaces, because people are just pathetic apes. With one brain, two hands and short lives it's really hard to… well, touch the stars :).

Will B says:

Hi Gary. I want to disagree. First, I believe the game of Go is more tethered to real-world problems than most people give it credit for. And second, the major advance might have turned out to be a method for spotting, abstracting, and generalizing the most creative moves and strategies from HUMAN players, but once those lessons are learned they become baked in, and in a sense learning to learn is its own achievement that won't go away. I think we are generally in agreement, though. Are you hiring? 🙂

Pls.Protect.Free.speech Unsub.chans.del.comments says:

I have been following AI videos for about 6 months now and thought that the move towards AI might be better built around words and language. We humans only understand the world because we have labeled everything since childhood. A computer might see the difference between a dog and a cat, but what about a walking dog vs a running cat? This requires language and words to develop concepts, rather than building understanding around visual information alone. Words actually go deeper than visuals… such as a novel vs a comic. Visual understanding is limited, but language delves into and explores many angles that visuals alone do not. Which would suggest that advanced chatbot programs might actually be the path to general AI. Sophia and such robots might actually be taking us to AGI, but at the moment they are laughed off as just chatbots with plastic faces.

Warwick Dillon says:

Yeah, that's because females have organic airbag protection.

Carlos Perez says:

I had hoped to find some nuggets of wisdom here. However, if the estimate is 30-50 years and the argument of the book is that we are on the wrong track, then what's the short-term takeaway here, other than that nothing is happening until general AI is available three decades from now? The main point of current AI today is that it works well in narrow domains. Language translation is not perfect, but it works very well. Tesla Autopilot works very well on highways and in good weather. Google Assistant and Alexa work well for speaker-independent speech recognition. Text-to-speech isn't as painful to listen to. To claim that there has been zero progress since 2012 is to be selectively ignorant of the knowledge that exists.

optimaRatio says:

Thank you Nikola! I am really enjoying your recent interviews. Keep up the good work.

bolsh smith says:

A better form of artificial intelligence would be to just use what we have now, apply it to us, and fuse AI with us. Use AI to enhance us so that we can understand biology as easily as anything. Then we can make ourselves anything we want.
