Holy Grail of AI (Artificial Intelligence) – Computerphile



Audible free book: http://www.audible.com/computerphile
Why can't artificial intelligence do what humans can? Rob Miles talks about generality in intelligence.

Sean Comments/Questions (For those who can't hear him clearly)
11secs: "This was the hill climbing algorithm?"
2min 40sec: "recently with Professor Brailsford we did the idea of The Turing Test, so that strikes me from what you're saying that that's a very specific domain: pretending to be a human talking?"
4min 2sec: "is that like 'humans have been changing the world to meet their needs?' "
4min 40sec: "but on a bigger scale, as you say on a grander scale, building a dam, and er irrigating a field, and putting a pipe to your house and allowing you to have a tap {faucet} is doing the same thing but on a grander scale."
6min 39sec: "all these dimensions, if you try to brute force infinite dimensions you're gonna fall over pretty quickly?"
6min 45sec: "change the world" (in ref to him picking up the drink!)

Retro Z80 Computer: https://youtu.be/OtpaY8VD52g
Hill Climbing Algorithm & Artificial Intelligence: http://youtu.be/oSdPmxRCWws
The Turing Test: https://youtu.be/Qbp3LJvcX38
Arduino Uno: https://youtu.be/b4z1zkmo1BE
Rabbits, Faces & Hyperspaces: https://youtu.be/q6iqI2GIllI

Thanks to Nottingham Hackspace.


This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: http://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran's Numberphile. More at http://www.bradyharan.com


Ted Kraan says:

When I was still studying Chemistry I encountered another fascinatingly smart fellow student who had done some A.I. studies before he switched over to Chemistry. He complained about it: how the professors there went on about trivial philosophical matters and didn't teach anything concrete that mattered.

These 'AI' examples are just agents. The chess game opens and runs until you have won or lost. The car drives from point A to B. A sentient program would consist of a superloop with several subloops under it, activating agents like playing-chess, driving-car, associative-functions, and cognitive-functions, and it should be able to add and remove loops where needed. It should also have the freedom to modify/optimise its own code.
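The commenter's "superloop with agent subloops" idea can be sketched as follows. This is a minimal hypothetical illustration of what they describe, not anything from the video; the agent names and interfaces are made up.

```python
# Sketch of a "superloop" that activates agent subloops and can
# add or remove them at runtime, as the comment above describes.

class Agent:
    def __init__(self, name):
        self.name = name

    def step(self):
        # Placeholder for one iteration of this agent's own subloop.
        return f"{self.name} step"


class Superloop:
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.name] = agent

    def remove(self, name):
        self.agents.pop(name, None)

    def run(self, ticks):
        # The outer loop: each tick, give every active agent a turn.
        log = []
        for _ in range(ticks):
            for agent in list(self.agents.values()):
                log.append(agent.step())
        return log


loop = Superloop()
loop.add(Agent("play-chess"))
loop.add(Agent("drive-car"))
print(loop.run(1))  # ['play-chess step', 'drive-car step']
```

The commenter's final point (the program modifying its own code) is deliberately not shown; this only illustrates the loop-of-agents structure.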

The example at 5:45 actually shows how much white noise there is in this field. No human knows everything about the world, nor needs to. The more I know, the more I realise how little I know. But do I need to know? Is the air I'm breathing not toxic? Isn't the ceiling about to collapse on me? Would a sentient program have to be preoccupied with silly things like that?

I don't see how this field is going to move forward with trivial philosophical matters obscuring it.

that guy who would not share his name says:

"I am using my intelligence to optimize the world"

Jan Haverkamp says:

What if the internet itself becomes an A.I., because of an A.I. that can collect and correlate data from any kind of information?

One above all says:

why does he keep wearing that wig?

3ICE says:

Brute force, not break force. Subtitle error.

Simon Vercoe says:

Google self driving car probably could win at Jeopardy let's be real haha

crunch time says:

Mass for the Lord Nazarene

JCmmmhm says:

So are other, non-human animals general intelligences? Can they be defined as that, or is GI purely a term used to describe humans and human-like intelligence processes? Is there perhaps a blurred line that links human intelligence and the intelligence of, say, a monkey, dolphin, whale, etc.?

logik logik says:

1:40–1:46 hehehehe AlphaZero has something else to say.

t 1 01 says:

This guy is a good teacher.

jan oxley says:

I've only just realised the narrator isn't Brady

Sion says:

1:30 "Ok, Mr Chess AI, think of this car as the Pawn, unless it reaches an intersection, then it's temporarily upgraded to a Tower…" ;P

Adam B says:

Would a truly general intelligent machine lie and get things wrong?

Lambert Brother says:

2:31 And that's why I have to Superman IV: The Quest for Peace.

Jon Williams says:

CAPTCHA is actually a type of Turing test, and at this time it is quite effective.
And I like this guy. Brilliant.

Rasgonras says:

The problem of an AI having to brute-force infinite world states is a theoretical one, not a practical one.
It would just cut down the amount of information available and package it into handleable data, like we do. We do not need to know every little blade of grass to change a garden to our liking, and AI will be capable of the same big-picture thinking.

Jay Bingham says:

And in just over 2 years since this upload, AlphaZero has just cracked the AGI door. Buckle up.

Midnight Commander says:

On first glance I thought the thumbnail was of a mushroom cloud.

Stefan Reich says:

> Why can't artificial intelligence do what humans can?

Because we're making that system now.

Tasty Rainbro says:

What if, far in the future, a madman programmed an AI without any security lines in the code?
Security becomes vastly important with higher-class AI, and it is up to the programmer whether to secure their invention or not.
So maybe we will all end up in an AI war, analogous to the viruses and anti-virus programs that fight over a single piece of software.
I don't want to live in that particular time, where any single human gets the ability to harm the rest of the world through his own knowledge.

Stefan Reich says:

I am making the future AI. It includes lots of new ideas.

Arthur Khazbs says:

Aww, that snack vending machine in the background though :3

awsomeabacus says:

Does the duck have a name? Can we name the duck?

frill necked lizard says:

Could it be that an AI wouldn't want a better AI, because this new AI would defeat it and it couldn't get to its goal?

Chris Cheshire says:

When you have chess, autonomous car, weather prediction, e-learning and any other AIs all on one network such as the internet, is that not then a general intelligence? If you look at the internet as a whole?

Paul Berger says:

I'm pretty sure X3: Terran Conflict (great game) already shows that Artificial General Intelligences (AGIs) are bad and will try to terraform Earth. Which is bad.

Jancio Kowal says:

AI, for now, is just action and stimulus response, with priority given to the most important stimuli.
