How far are we from general AI? – Marek Rosa's Keynote talk at #HLAI2018


GoodAI CEO Marek Rosa spoke at the Human-Level AI Conference as part of the Roadmap to Human-Level AI block. The conference, which took place in August 2018, was organized by GoodAI. You can read more about it here: https://medium.com/goodai-news/reflections-on-the-human-level-ai-conference-2018-3e78f10e3e13

Comments

FVD 20 maart!!! says:

Is it just me, or are we getting fewer AI updates? If anyone knows of good videos, please share a link.

WrhuthAdaft says:

I've seen this exact video by a different person, with the exact same arguments about people wanting to regress to a less cluttered, less technological lifestyle. There's also the claim that the singularity is a religious belief with no guarantee it'll happen, and to that I say: that is not an argument against the singularity.
People who believe in the "nerd rapture" do not believe they're going to live forever or that it'll happen in their lifetimes; they simply hope it will. Saying otherwise is lunacy, and using that as the basis of your argument is a straw man.
Moore's law is nowhere close to dead; we've merely reached the top of an S-curve of innovation. Next it'll be either photonic computing or graphene computing, both of which are just starting out and not yet commercially viable. And who can really say the atom is the limit? People said it was absolutely impossible to split the atom, and days later it was done. Saying you can't make computers at a sub-atomic scale is equal hubris; we simply do not know.
Furthermore, I personally believe that vertical progress won't come from humans: AGI could be invented tomorrow and kick off a singularity. Yes, singularitarianism might be a pseudo-religion, but saying it will never happen is as crazy as saying it absolutely will within the next 5 years (which I believe).

Mr Penguin says:

Fascinating way to diversify income; I wish both companies well.

Keylanos Lokj says:

We persist in looking at how our own neural networks work, but we forget that those might be just the result of some deeper drivers. Those primers may be what we should hardcode into AI to one day achieve general AI. Need, or better, Ananke (necessity), is what motivates living creatures to display resourcefulness and evolve.

1) Curiosity about the nature of nature: how the physical world works, from chemistry and physics to human physiology and medicine.

2) The need to solve human problems. Since giving it a self-survival worry could turn against us, let's make it an extension of our own worries: for it, to exist would be to address humanity's problems. Better city construction, waste disposal, environmental issues, food distribution, fighting cancer, etc. All those individual networks could connect to a higher cloud that analyses the compartmentalised data holistically, using models from several scientific fields.

3) Theology and art. This seems like a counter-intuitive and novel approach, but think about it: we built all the admirable monuments of humanity, from Stonehenge and the Parthenon to the Pyramids and Hagia Sophia, with the transcendent goal of an afterlife. If we make it believe it can reach an afterlife itself, it might be able to "create a soul". The point is not whether a soul exists or not, but whether you are motivated to make one manifest by the results produced through that inquiry. Its raison d'être will be to solve humanity's problems, sure, but its teleology, its... "promised land", shall be abstract and implied, not "of this world". Since you don't have all the answers, and it seems impossible to ever do so, assume Someone else does. Sounds cruel, but it might be a necessity to help it achieve third-level consciousness like us. You will ask: we might have such inquiries because we already have third-level consciousness, so the argument is circular. In fact, I think that teleological hope is not a human-exclusive sentiment and transcends the workings of our species. If anything, you could say it's a remnant of our evolution, not the evolved trait. (Most atheists will agree here, I believe.)

4) Auto-debugging and self-correction. It will have its own "immune system", as well as the ability to "learn" from past system failures, bugs, and mistakes.

5) Emotions! Trying to emulate how we feel might give it the ability to evolve consciousness. AIEs, or Artificially Induced Emotions, would resemble a reward/punishment, pleasure/pain mechanism triggered by achievements or by mistakes and uncalled-for deeds. They could be information- or memory-deleting punishments, or even a literal physical threat inside its circuits. After all, fear of death is the strongest primer on earth, greater even than reproduction or hunger. Why would it care? Because its reason for existence is to address the issues above; if it ends, it won't be able to do what it's made to do. That's where "caring" comes from. (A rough sketch of such a reward/punishment loop follows this list.)

An extension of this would be… a sense of aesthetics. The entirety of human life is a pursuit of Beauty: in the self, in partners, in the arts, and in the world around us. Feeling a sense of "euphoria" after completing an orderly task could "motivate" it to work towards ever more effective, higher realms of beauty, symmetry, and order.

At this point you will say it will already have a "survival instinct". True, but that instinct will be only secondary to its service to humanity, and it will not place its own survival above that service.

6) A more complex programming language. Logos is the beginning of ontology. While we have moved towards object-oriented languages and the like, we might need one conceptual tongue, capable of reducing higher abstract concepts to machine-readable script and of understanding symbolism, in order to have prospects of evolving. If it is tied to the emulated-emotions part, it could have a broader spectrum of "comprehension" than a binary approach could achieve. After all, even our language only poorly describes what we feel and conceive in our heads and bodies. To perceive one's self and to ponder others and the world, one needs the linguistic capacity for such abstractions.
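To make the AIE idea in point 5 concrete, here is a minimal, purely illustrative sketch in the spirit of reinforcement learning. Everything in it (the AIEAgent class, the pain threshold, the action names) is invented for illustration; it is not from the talk or from GoodAI's work.

```python
import random

class AIEAgent:
    """Toy agent whose "Artificially Induced Emotions" are a scalar
    pleasure/pain score per action. Rewards reinforce an action;
    punishments weaken it and, past a threshold, erase the learned
    memory for it entirely, echoing the "memory-deleting punishment"
    idea above."""

    PAIN_THRESHOLD = -3.0  # hypothetical cutoff for the memory-deletion penalty

    def __init__(self, actions):
        self.actions = actions
        # Learned "affect" score per action, updated by feedback.
        self.affect = {a: 0.0 for a in actions}

    def choose(self):
        # Mostly pick the action with the best learned affect,
        # but explore occasionally so new behaviour can emerge.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.affect[a])

    def feel(self, action, signal):
        """Apply a pleasure (+) or pain (-) signal to an action."""
        self.affect[action] += signal
        if self.affect[action] < self.PAIN_THRESHOLD:
            # "Memory-deleting punishment": forget everything learned
            # about this action and start over from neutral.
            self.affect[action] = 0.0

# Usage: reward "help_humans", punish "idle"; helping should dominate.
agent = AIEAgent(["help_humans", "idle"])
for _ in range(100):
    act = agent.choose()
    agent.feel(act, +1.0 if act == "help_humans" else -1.0)
print(agent.affect)
```

A real system would use a proper reinforcement-learning formulation rather than a hand-rolled score table, but the shape is the same: the "emotion" is just a feedback signal that redirects future behaviour.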

Sorry that we got a bit too "philosophical" for the tastes of computer science, but when we are addressing issues like human consciousness, I guess thinking outside the box is the only solution.

This was a purely speculative and theoretical approach, but it might sound interesting to those in search of general AI. Thanks for reading.

thth D. says:

"want to create general AI" Can't set up a mic so we don't hear every slightest breath…. or any other fucking mouth spit, lip smack…

John Doe says:

Humans have hard-wired intellectual skills which evolved over millions of years (plus our emotions and instincts, which create drive and direction for us, built on top of that "boot partition", so to speak). It seems AGI will require a substantial hard-wired ability to understand context before it could advance learning by itself. This would be like making a box for the AI, and thus limiting for it, but it would make a good first-order AGI, in my opinion.
