The future of artificial intelligence



The future of artificial intelligence, by Murray Shanahan, Professor of Cognitive Robotics, Imperial College. This presentation was part of the event “State of the Future 2015–2025”, held at Bloomberg HQ, London, on 13th November 2015.

The session was introduced by David Wood, Chair, London Futurists.




Mr. Numi Who says:

Very sad that this topic is only receiving a paltry number of views, though this video, I know, will be shallow and clueless, and sheer torture for me (but more people should have been curious)…

I'm going to jump straight to human-level AI (I'm not interested in anything less than fully-independent, philosophically enlightened AI) to see exactly where the professor is on the topic, then I will tell you exactly why he is clueless (and his cluelessness will be a given, since humanity itself is still universally clueless, and I will tell you exactly why there, too, at the end of this comment)…

27:31 The first human-level AI goal the professor mentions is "to be able to learn, do, and invent new tasks, and an enormous variety of tasks." How clueless is this? Completely clueless. It is not only vague, it is small-minded. He has no clue as to what the tasks will be, or what goal they will be working for, and it is not the professor's fault – philosophers and religions have failed him (enter me). What should the purpose be? The purpose should be the Ultimate Goal in Life, which is to secure higher consciousness (currently human-level) and, with less priority, potential higher consciousness, which includes all lower biological life and anything humans can create, in a harsh and deadly universe.

28:48 The next goal the professor mentions is the goal of having 'common sense', which he defines as having the ability to predict the consequences (social and physical) of actions in everyday life. This is a fine goal, but note how it is completely blind and unguided. To put it metaphorically, it is fine as a fuel, but it is no guidance system. My Ultimate Goal is a guidance system for all of life, and my ultimate answer to Why Bother? is the Ultimate Fuel.

What I am generally pointing out is the role of philosophy in AI (and in human existence) – it is at the very foundation, and it is the central problem with humanity right now (not economic, not political, not social).

29:13 The professor gives a trite and clueless (which is where his level of thinking is, sorry to say) example – that of flinging the coffee table in the air and being able to predict the outcome, which he cluelessly and vaguely claims would be a 'bad' thing (without venturing into why, and he did not because he could not) (enter my philosophy, which I've tentatively titled The Philosophy of Universal Survival, for the Space Age, no less, which explains the true nature of good and evil) (for the first time in human history).

30:11 He slides into 'creativity', which he says is the basis for the ability to learn (but he made that up).

31:27 He is right about 'concepts'. Before you can use a tool, you need to realize 'tool' as a concept. An example is language, a tool for communication. The professor only has a vague notion of the importance of the 'tool' aspect, when he says the AI should be able to 'apply concepts'. He did not have the mental ability to give you specifics, so I did.

34:55 He mentions Nick Bostrom's book, 'Superintelligence'. I read it – there is nothing intelligent about the humans or the AI in his book – it projects present-day human cluelessness (read 'stupidity') into the future, and, incredibly, onto future 'super' intelligence.

35:35 The professor offers three misguided aspects of 'super-intelligence': 1.) working faster (he meant 'thinking' faster) – an unenlightened AI will only do stupid things faster. 2.) He mentions more memory – an unenlightened AI will only contain more information cluelessly. 3.) He mentions 'self-improvement', but he fails to ask 'toward what?' Toward crime? Clueless.

Note, most importantly, the professor does not mention 'enlightenment' when describing 'intelligence'. If you are not enlightened, then sorry, you are not intelligent, only cluelessly clever, and a danger to all of life everywhere.

36:20 By only thinking on a cliché level, the professor assumes that any future AI will not be biological, which is not the case. An alternative to a non-biological platform is a platform based on molecular attraction, molecular self-assembly, and even molecular replication, powered by thermal energy (courtesy of the Big Bang).

39:14 I like his term, 'celebrity soundbite'.

39:29 The professor will probably mention the paperclip scenario (not thinking beyond a superficial level), which is a bad analogy – such an AI is in no way enlightened – it reflects a clueless human creator… and yes, he does mention the paperclip scenario.

43:44 Sums up the professor's depth of thinking on AI (i.e. shallow): "Will it be safe? We really don't know." "What should we do? We need to start thinking about these things." The professor inadvertently admits that he has not thought about such things (luckily you have me).

Why are humans universally clueless? Because they have not identified the Ultimate Value of Life yet (there IS one, and it only stands to reason) (and I've given it to you already – for free, no less, thank you) (not that you'll thank me), or how it relates to the Ultimate Goal of Life, or how that relates to the determination of Good and Evil, or how that relates to enabling worthwhile lives and relevant civilizations, or how they relate to The Great Struggle, or where assumptions, generalizations, and classifications fit in, or where verified knowledge fits in. Sorry, humans. You are still universally clueless. Read my philosophy.

Kevin Bacer says:

Great video!
If you are learning AI and need some training data, this is a great place to find it:

Simeon Banner says:

Bloomberg… for F's sake. Whatever the future, the parasites will be there finding out how they can monetise society. "Knowledge workers" – what massive hubris. Not under the British education system. Worth remembering that the Human Genome Project was full of hubris; everybody thought vast amounts of money could be earned from it, but it didn't pan out that way.

Shane Nolan says:

I have detailed what a fully automated society would be like. Show it to as many people as possible if you can.

divine nature says:

Very nice information in this review article.
Also check this:

Marcus Aurelius says:

ex machina.

cm m says:

300 games for Space Invaders actually… not zillions.

Robert Galletta says:

Show the Google computer reruns of the Andy Griffith Show.

Real Common Sense says:

Check out "Dangerous Things (You should not fool around with)"

Fest Theory says:

FINALLY SOLVED (HAL 9000 is between the following lines too): Human evolution (7 million years) must perform / accomplish the evolution of intelligence, but I have found only "the evolution of emotions". These three processes intersect at one point – the baby / human infant, which is incapable of independent survival for many years. That is not an evolutionary mistake; on the contrary, that is the key element of my research. By observing its mother's behavior, a process called MSP / multi self-projection passively occurs in the baby's brain, when the child perceives the guardian's body as its own. That way the infant's CNS immediately learns the shortest way to get something done, which enables the creation of many more similar thinking processes, until the moment when the minimal number of thinking processes required for the effect of self-consciousness to arise is reached.
To connect all that I have mentioned with a huge number of scientific data (Denisovans, Homo naledi, Scientific Adam, Mitochondrial Eve, autism, speech, pleasure in the presence of fire, dreams…) required membership in the Mensa organization… The biggest picture (the framework) for all scientific data (even A.I., because the start, the origin of the original, in making SAI/AGI is crucial / what has been missing) is FEST theory.


