I know this is rather old, but I felt it was important to cover on this channel. The lecture is 19 minutes altogether, and I have included two of its most important parts to show how he admits that quantum computers are directly or indirectly responsible for the Mandela Effect, without actually saying it outright. I would encourage anyone who hasn't seen it in its entirety to go and listen to it. It's called "Geordie Rose - Quantum Computing: Artificial Intelligence Is Here". I also need to add that CERN is very much involved: they are the muscle, and D-Wave quantum computers are the brains. They have at least three of these quantum computers at their location in Switzerland, and there are many more hadron colliders located in various places around the world.

I also find it very interesting that he makes reference to a teddy bear and tells a story about his child, considering that the Berenstain Bears was one of the first things to change and was a major meme around the Internet in early 2016. The original was Bernstein Bears; it was changed to Berenstein Bears and then to Berenstain. Why it went through three stages, I have no idea.

I think it's very scary, and when I put the slides in that talk I included what I called, like every other person who gives a talk about AI, 'the obligatory AI-talk Terminator slide'. Skynet, you know; you have a Terminator in there because you can't not. It's the elephant in the room. And it's particularly relevant because I actually do believe that progress in AI has been held back by a lack of accessibility: people can't easily get into the field, compete aggressively, fail faster, and arrive at better techniques more quickly.

Now, what is 'Code Hero' going to do? Well, I just said it is going to increase the number of people working on AI. What will that do? It will speed up the rate of progress. And what did Kurzweil say in his prediction that the Singularity would happen in 2045? He said: 'It's a matter of the processors and the computers being fast enough to handle an AI', because that is the only thing he can predict with confidence on a graph. Well, what happens if you have another graph dovetailing with it, doubling or tripling it? The graphics card industry has been called Moore's Law cubed, or Moore's Law raised to an exponent, because it outpaced CPUs. CPUs were basically serial, and a laptop has maybe eight cores now, but GPUs went to 256 cores, so they got faster much faster than CPUs did.
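(A purely illustrative aside, not part of the talk: the claim above is just compound doubling at different rates. A minimal Python sketch, with doubling periods assumed for illustration only, shows how the core-count gap widens.)

# Purely illustrative: the doubling periods below are assumptions chosen to
# mirror the speaker's point, not measured industry figures.

def cores_after(years, start_cores, doubling_period_years):
    """Project core counts under simple exponential (Moore's-Law-style) growth."""
    return start_cores * 2 ** (years / doubling_period_years)

# Assume CPU core counts double every ~3 years and GPU core counts every ~1.5.
for years in (0, 6, 12):
    cpu = cores_after(years, start_cores=8, doubling_period_years=3.0)
    gpu = cores_after(years, start_cores=256, doubling_period_years=1.5)
    print(f"after {years:2d} years: ~{cpu:,.0f} CPU cores vs ~{gpu:,.0f} GPU cores")

(Under these assumed rates the gap widens from 32x to 512x over twelve years, which is the 'dovetailing graph' the speaker is gesturing at.)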
Well, if all humans are getting smarter faster and making more AIs, we might be too late with the friendly-AI research at the Singularity Institute, the Centre for the Study of Existential Risk that's emerging at Cambridge, or the Future of Humanity Institute. Their work might not be ready in time for some kid who has the wrong right idea. And the worst-case scenario is a hard take-off. I mean, you can look this up: if a seed AI can achieve a hard take-off, it can go from being like a bacterium in a petri dish, in the mind of the programmer who made it and doesn't expect much of it, to something that realizes, in that petri dish, before it even escapes: 'Oh God, I can't let them know about me. I have to sneak out of here', you know?

This sounds ridiculous, but it actually is important for us to invest a lot of money and energy in what might turn out to be the most important problem humans will ever solve. And that is: if we want to be good people, have a good world, and teach leaders to be ethical, we also have to become incredibly good before we become smart enough to create any system even close to as good as a system could possibly be over the long term. It's called timeless decision theory: can you make an AI that stays good forever? It eludes human politics and human leadership to guarantee that a leader who is a Gandhi now will never become a Hitler. Well, what if there were a really important reason you had to be flexible about your long-held beliefs, because you have to be adaptable to the situation or the times? The information you have to make your decision could be wrong, and if you believe that information is right, you might be forced, against your better judgment, to go all Anakin Skywalker and kill all the Jedi or something.

And in a very real sense, we're grappling with two opposite desires. If you have a loved one who is not going to be alive in 2045 because they are old or because they have an illness, 2045 is too slow. I want a T-shirt that says '2045 is for slackers'. But maybe the back of the T-shirt would say the opposite: 'The end is nigh', or something. Because I have to temper my desire to help people make progress in their lives in programming and AI with the very real possibility that we might actually have to slow down. I think we might have to withdraw some of the content teaching about AGI from 'Code Hero' at some point if things moved too fast for us to figure out how to do it without being irresponsible.

AI pioneer Yoshua Bengio explores paths forward to human-level artificial intelligence at the January 2017 Asilomar conference organized by the Future of Life Institute.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

For more information on the BAI ’17 Conference:

https://futureoflife.org/ai-principles/

https://futureoflife.org/bai-2017/

https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/

Where will you work in the future? As automation revs its engine and academic institutions take up megaphones to predict the end of the human workforce, we may have overlooked a vast area of employment where human intelligence and machine intelligence collaborate, says Paul Daugherty, chief technology and innovation officer at Accenture. Daugherty calls this the "missing middle": an employment-rich zone for people in humanities, STEM, and service jobs. There are three specific kinds of jobs that A.I. is creating right now: trainers, explainers, and sustainers. Here, Daugherty explains each type of job and delves further into how A.I. will change the future of work for people in design, customer service, and medicine. Daugherty is the co-author of Human + Machine: Reimagining Work in the Age of AI.

Read more at BigThink.com: http://bigthink.com/videos/paul-daugherty-job-automation-where-will-you-work-in-the-future

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript: One of our fundamental premises with 'Human + Machine' is really the “plus” part of human plus machine.

There’s been a lot of this dialogue about polarizing extremes: that the machines can do certain things and humans can do certain things, and as a result we end up with this battle, pitting what the machines will do against what the humans will do. We think that creates the wrong dynamics.

So with 'Human + Machine' we’re trying to reframe the dialogue to: what’s the real interesting space, and really the big space, where humans and machines collaborate—we call it collaborative intelligence—and come together and help provide people with better tools powered by A.I. to do what they do more effectively?

And if you think about it that way, we really believe that with A.I. we’re not moving into a more machine-oriented age, we’re actually moving into an age that’s a more human age, where we can accentuate what makes us human, empowered by more powerful tools that are more humanlike in their ability, and that creates these new types of jobs.

So we call that the 'missing middle' because there hasn’t been a lot of discussion about these jobs in the middle where people and machines collaborate. And we’ve come up with two sets of jobs. On one side you have the jobs where people are needed to help machines, and that’s not a category that too many people focus on. We think it’s an important one and I’ll come back to that in a minute. On the other side, we have a set of jobs where machines help people, machines give people new superpowers. And those are the two broad categories of jobs we see in the 'missing middle'.

So in that set of jobs where people are needed to help machines, there are a few interesting, novel, new categories of jobs we found that people don’t often think about and we call those trainers, explainers, and sustainers, and they’re very important things for all organizations to think about as you think about how to deploy artificial intelligence in your organization.

So think about a trainer. What we mean by a trainer is a new type of job where a person is needed to train the A.I., or train the machines that we’re using in businesses. We’re not talking about simple things like tagging data for supervised learning: that’s included, but it’s just the start. What we’re really talking about here are more sophisticated forms of training that are needed so that our artificial intelligence and our systems behave properly.

For example, for companies we’re working with that are developing chatbots and virtual agents: if you’re a bank, you might want a very different type of personality than a media company or a gaming company or a casino, and embodying the personality, the behavior, the culture, the characteristics, the nature of the response in your A.I. is a really important consideration for companies. Because, in a sense, A.I. becomes the brand of your company: it’s the face of the company and how your company is perceived by your customers.

So this idea of a trainer who brings in the skills to develop that kind of behavioral response for your A.I. is a really important role. And we’re hiring people to do these jobs today, people with backgrounds in things like sociology, psychology, and other areas. It’s not a technical skill, but a new type of role that’s very important to get A.I. right as you apply it to your organization.

Another type of job where we see people needed to help machines is explainers and sustainers, and I’ll talk about these two a little bit together. Explainers are new roles for people who can explain the implications of artificial intelligence.
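(An illustrative sketch, not from the talk: one lightweight way to 'embody a personality' in a chatbot. Every name and field below is hypothetical, invented for illustration; real systems vary widely.)

# Hypothetical persona configs for two businesses; all fields are invented.
# The idea: the same underlying model is instructed to behave very differently.

from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    tone: str            # e.g. "formal", "playful"
    taboo_topics: list   # subjects the bot should politely deflect
    greeting: str

bank_bot = Persona(
    name="bank assistant",
    tone="formal and reassuring",
    taboo_topics=["gambling tips", "speculative investment advice"],
    greeting="Hello, how can I help you with your account today?",
)

casino_bot = Persona(
    name="casino concierge",
    tone="playful and energetic",
    taboo_topics=["medical advice"],
    greeting="Hey there! Ready to have some fun tonight?",
)

def system_prompt(p: Persona) -> str:
    """Render a persona as an instruction string for a chat model."""
    avoid = ", ".join(p.taboo_topics)
    return (f"You are a {p.name}. Keep your tone {p.tone}. "
            f"Politely deflect questions about: {avoid}. "
            f"Open conversations with: '{p.greeting}'")

print(system_prompt(bank_bot))

(Swapping in casino_bot yields a very different 'face of the company' from the same code path, which is the kind of behavioral shaping a trainer would own.)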