
Debate between Facebook’s head of AI, Yann LeCun, and Prof. Gary Marcus at New York University.

The debate was moderated by Prof. David Chalmers.

Recorded: Oct 5th, 2017


Comments

Dylan Evans says:

Yann LeCun's lack of philosophical sophistication shows up again at 1:52:32, when he doesn't seem to have even heard of inference to the best explanation.

Dylan Evans says:

At 1:23:40 Yann LeCun fails to understand Ned's question, or at least fails to give a relevant reply.

Lily Vodka says:

What I find troubling about AI is that it actually automates a lot of the learning processes in human beings. That will make a lot of scientists, especially psychologists who currently rely mainly on statistical measures, equivalent to machines in the future.

Dylan Cope says:

I like how they both used Hinton to back up their arguments! 😛

Youtube Adventurer says:

Deep learning already requires a huge amount of innate machinery. It requires machinery to label all of the training data. If you just give it raw data, it won't learn anything.
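A minimal NumPy sketch of the point above (data and setup are invented for illustration): the gradient that drives a supervised learner is computed from the labels y, so with raw data alone the weights never move.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # "raw data"
y = (X[:, 0] > 0).astype(float)          # labels someone had to provide
w = np.zeros(5)

for _ in range(200):
    y_hat = 1.0 / (1.0 + np.exp(-X @ w))   # predictions
    grad = X.T @ (y_hat - y) / len(y)      # the error signal requires y
    w -= 0.5 * grad                        # without y, w stays at zero

print(w.round(2))   # the weight on feature 0 dominates; the rest stay near 0
```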

Zrebbesh says:

I don't think human vision actually does translational invariance in the way that convolutional networks implement it. Our perception of translational invariance is mostly time-dependent rather than space-dependent, in that we recognize things at the center of our visual fields while our eyes are pointing straight at them. If you ask a human whether two things are identical, the human will look at them both in sequence, rather than comparing different areas of the visual field.
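For contrast with the comment's time-based account, a minimal NumPy sketch (toy signal and kernel) of the space-based mechanism in convolutional networks: convolution is translation-equivariant, so shifting the input merely shifts the feature map, and invariance only appears once you pool over positions.

```python
import numpy as np

x = np.zeros(16)
x[3:6] = 1.0                         # a pattern at one position
k = np.array([1.0, -1.0])            # a tiny edge-like kernel

def conv(signal, kernel):
    return np.convolve(signal, kernel, mode="valid")

shifted = np.roll(x, 4)              # the same pattern, translated by 4
# Shifting the input shifts the feature map by the same amount:
print(np.allclose(np.roll(conv(x, k), 4), conv(shifted, k)))   # True
# Pooling over all positions gives the translation-invariant summary:
print(conv(x, k).max() == conv(shifted, k).max())              # True
```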

Steve Lenores says:

A little over a century ago humans achieved another "impossible" feat: the ability to fly. Those pioneers reached the goal by borrowing from nature, from how birds flew. Machine consciousness will be achieved the same way, by borrowing from how the human mind achieves it. It won't look pretty at first, but it will get the job done; we can make improvements later. We now fly higher and faster than birds by factors of hundreds. I'm sure machine consciousness will follow a similar pattern. Remember that those who say something is impossible, or will only be achieved well beyond our lifetimes, are frequently proved wrong fairly quickly.

MyOther Soul says:

50:00 How many algorithms does the brain implement? No brain algorithms have been found. The brain doesn't run like a computer; it doesn't have distinct operations, procedures, and data.

How does the brain handle uncertainty in prediction? Ever felt uncertain about what was going to happen next? There you go, that's the brain (aka you) handling uncertainty. Now if we could only get a machine to handle uncertainty like that.

Is there some underlying principle to intelligence like there is to flight? Rockets fly, but they don't fly like planes fly. Bullets fly, but their flight is different from that of rockets and planes. Is there an underlying principle to flight? Is there an underlying principle to intelligence? No one knows. A good scientist wouldn't make such an assumption without some means of determining how warranted it is.

Common sense is filling in the blanks. The problem with a lot of AI discussions is that the terms are squishy, or they are defined one way and then extrapolated to the rest of the world under another definition. For example, I just saw a paper on representations in image recognition networks. In the paper a "representation" was an activation associated with a feature such as an edge or corner. That's a rather particular use of the word "representation," and a very different meaning from what most people mean when they say "representation."
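To make that narrow sense of "representation" concrete, a minimal NumPy sketch (toy image, illustrative kernel): the "representation" of an edge is just the activation of a unit whose kernel responds where intensity jumps.

```python
import numpy as np

image = np.zeros((5, 8))
image[:, 4:] = 1.0                   # a vertical edge at column 4
kernel = np.array([1.0, -1.0])       # responds to left-to-right jumps

# Convolve each row; the activation map is the "representation" in the
# paper's narrow sense: a unit that lights up where the edge is.
activation = np.stack([np.convolve(row, kernel, mode="valid")
                       for row in image])
print(activation[0])   # [0. 0. 0. 1. 0. 0. 0] : active only at the edge
```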

James Connolly says:

They are talking past each other. Machine learning is a metaphorical term. In the end, all machine learning is a kind of error-correction procedure toward a desired outcome, using programmed primitives. Can we say that about human learning? What can we say about human learning? That it corrects for errors? Does it? Sometimes it does, sometimes it doesn't. Do little girls playing with dolls know the doll isn't real? What does that even mean? Is it an error to be corrected? But the doll is real in some sense, isn't it? How then do you define a "real" error that needs to be corrected? Does human learning have an outcome-based purpose? Does it? Who knows. What we do know is that humans and animals just "know" things from the start, without needing input. They don't need corrections, nor are errors easily classified. It seems that the richness of understanding things is just there: simply put, we are born with incredibly rich knowledge.

Machine learning and human learning only relate in the sense that they both need innate structure. But how much one can define the other seems dubious at best.
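A minimal sketch of the "error correction toward a desired outcome" picture in the comment's first paragraph, with invented data: the loop does nothing but measure the gap to a target and nudge the parameters to shrink it.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w                            # the desired outcome

w = np.zeros(3)
for _ in range(100):
    error = X @ w - y                     # how wrong are we?
    w -= 0.05 * X.T @ error / len(y)      # correct in proportion to the error

print("recovered:", w.round(2), "target:", true_w)
```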

Harsh Dhillon says:

Gary Marcus made this debate about winning and losing, whereas in science we don't think about these trivial matters. What matters to scientists is taking the technology forward and making a better future for the rest of us. LeCun has done exactly that very well.

Bogiedar Georges says:

This debate is obsolete: it is between HUMANS limited by mother nature's neuron firing speed. How about the same DEBATE held between TWO quantum INTELLIGENT SUBJECTS, programmed in the near future to fire, say, a modest 1,000-10,000 times faster than human neurons, compared with exponentially developing quantum reality…

Vladimir Mesherin says:

"What is missing" whats missing is – adrenal glands, thyroid, testosterone glands and thousands of other glands humans know nothing about, but which also participate in production of human intelligence and which you will never be able to build – that is what is missing and also being honest is missing

Reirainsong says:

Nothing is "innate". Everything is emergent; the question is just how many iterations of emergence are necessary. Anything capable of rudimentary logical operations can form the first layer of a network that grants shortcuts and heuristics to subsequent ones. The importance of the intrinsic design of the human cortex, and of the "innate" functionality derived from it, is overstated, since the cortex shows inherent adaptability; it's not an ASIC locked into its innate functions. The computer processor, on the other hand, can't rebuild itself, but since it is a universal Turing machine there is no hardcoded adaptation it can't emulate via software (even if this may well mean that a computer with functionality similar to a brain needs to be several times more powerful, because it has to emulate a low-level process).

I sympathize with the inherentist point of view about "garbage in, garbage out", and may very well agree that a core, concise algorithm (the holy grail of intelligence) is possible, but I also believe it's futile to work on it directly. We don't know it and might never be able to discover it faster than machine iteration can. We've already passed the point where humans can hand the AI pre-digested heuristics faster than it can teach itself. The only thing we can do is ensure proper feedback, fulfilling the role that evolutionary pressure played in developing organic intelligence: the ability to accurately identify flawed conclusions that do not correspond with actual observations, discard the logic that led to them, and iterate on something better until the model matches the facts. Inherentists might ask, "But how does the machine know what the facts even are?", but this point of view held merit only until there were reasonably competent algorithms capable of any kind of fuzzy logic at all, and accurate sensors to provide them with data (i.e. we did nail reasonably accurate image identification). They might have different error margins, but any of these can be a foundation for higher learning. A worse verification algorithm will simply require more iterations, but it will reach the same conclusions eventually.
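A minimal sketch of the propose/verify/discard loop described above (toy "facts" and an invented candidate-generation rule): flawed variants are rejected against the observations, and the surviving model drifts toward the truth; a noisier verifier would simply need more iterations.

```python
import numpy as np

rng = np.random.default_rng(2)
obs_x = rng.uniform(-1, 1, size=40)
obs_y = 3.0 * obs_x - 0.7                 # the "facts" to be matched

def mismatch(params):
    slope, bias = params
    return np.mean((slope * obs_x + bias - obs_y) ** 2)

params = np.zeros(2)
for _ in range(2000):
    candidate = params + rng.normal(scale=0.1, size=2)  # iterate on a variant
    if mismatch(candidate) < mismatch(params):          # verify vs. the facts
        params = candidate                              # keep; flawed ones die

print("model:", params.round(2), "truth: [3.0, -0.7]")  # ends up close to truth
```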

Rebel Science says:

For the record. Everything in intelligence, from sensory perception to reasoning, adaptation and motor behavior, is based on timing. The future of AGI research is in spiking neural networks. That’s where the AI money will be.
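For readers unfamiliar with spiking models, a minimal leaky integrate-and-fire sketch (constants are illustrative, not biological): the output is a list of spike times, so information is carried in timing rather than in a single activation value.

```python
# Leaky integrate-and-fire neuron: membrane potential leaks toward rest,
# integrates input, and emits a spike when it crosses threshold.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0   # ms, ms, arb. units

def simulate(input_current, steps=100):
    v, spike_times = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + input_current)   # leaky integration
        if v >= v_thresh:                      # threshold crossing
            spike_times.append(t)              # spike: information is WHEN
            v = v_reset
    return spike_times

# Stronger input -> earlier and denser spikes: a timing code.
print("weak input  :", simulate(0.06))
print("strong input:", simulate(0.12))
```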

Rebel Science says:

Great debate. Note that it is impossible to do good unsupervised visual learning without an artificial retina with lots of minute motion detectors.
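A minimal sketch of the simplest such motion detector (toy frames): a per-pixel temporal difference fires only where something moved, giving an unsupervised learner a signal about objects without any labels.

```python
import numpy as np

frame_t0 = np.zeros((6, 6))
frame_t0[2, 1] = 1.0                  # an object at one position
frame_t1 = np.zeros((6, 6))
frame_t1[2, 2] = 1.0                  # the same object, moved one pixel

motion = np.abs(frame_t1 - frame_t0)  # fires only where something moved
print(np.argwhere(motion > 0))        # [[2 1] [2 2]] : old and new positions
```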

budes matpicu says:

Oh boy, these "philosophers"… science took everything from them, and they clearly feel they are losing their jobs… so what to do? After hundreds and hundreds of years of talking about physical stuff, they resorted to language, which kept them alive in the 20th century, and now even this is gone… so the last bastion is… yes, consciousness, and even this last sand island is shrinking, so they are inventing their own dumb concepts, exactly like dark-age monks who held disputes for centuries about the number of angels on the tip of a needle… Here we go, this is the invention of this "Mr. Bean of philosophy": THE HARD PROBLEM. And these guys managed to extract millions from society to run their monasteries (oops, "institutes"), conferences, etc. Just lacking witch burning… although, metaphorically… Poor Yann, he got caught by them… does he need this, to be associated with that? Yet another stupidity, innate machinery… why not phlogiston?

Robin Aldridge-Sutton says:

This is fantastic!

ivan says:

Marcus is a useless sniveling little naysayer. He's done nothing for the field. Of course he went into psychology lmao. Go back to writing hackneyed books about music and stuff, dude.

The Artificial Intelligence Channel says:

Gary Marcus begins at 10:06
Yann LeCun begins at 34:12
