The future of the mind: Exploring machine consciousness | Dr. Susan Schneider

The hard problem of consciousness, a term coined by the philosopher David Chalmers, asks why we are conscious at all: given that the brain is an information-processing engine, why does it need to feel like anything to be us?

The problem of AI consciousness is equally complicated. We know humans are conscious, but when it comes to AI, the question is: Could the AIs that we humans develop be conscious beings? Could it feel like something to be them? And how could we possibly know for sure, short of them telling us?

How might superintelligence render consciousness extinct? Across six chapters of this video, philosopher and cognitive scientist Susan Schneider explores the philosophical problems that underlie the development of AI and the nature of conscious minds.


Nazgrel says:

People think that machines can someday become conscious because they are electronic and complex. In reality, computers are doing nothing more than carrying out math operations, and anything a computer can do could be done with paper and pencil.
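The "just math operations" claim can be made concrete: a single artificial neuron reduces to multiplications, additions, and a comparison, every step of which could be carried out by hand. A minimal sketch in Python, with made-up weights and inputs chosen for illustration:

```python
# A single artificial neuron: a weighted sum plus a threshold.
# Every step here is ordinary arithmetic that could be done with
# paper and pencil, which is the commenter's point.

def neuron(inputs, weights, bias):
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w             # multiply and accumulate
    return 1 if total > 0 else 0   # step activation

# Made-up example values: -0.6 + 1*0.5 + 0*(-0.2) + 1*0.3 = 0.2
print(neuron([1, 0, 1], [0.5, -0.2, 0.3], -0.6))  # prints 1
```

Whether stacking billions of such operations could ever amount to consciousness is exactly the question the video raises; the arithmetic itself settles nothing either way.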

Joseph Christopher says:

2045Movement . how is this hologram and biological body coehest

Joseph Christopher says:

you want the machine to run HOLD eat your self…wnd sustain matter fact grow.

Joseph Christopher says:

dot dot dot … ummm the sound

Joseph Christopher says:

damn cyic what are you dlYEZ 2,3,4 GT UR WOMEN WHT THE HELL.

Nick Peterson says:

"may be a demand for conscious AI companions"….the fuck?? tell me how this wouldn't be sex slavery

Aaron & Vanessa Bair says:

Gödel's incompleteness theorem blocks what you are talking about; humans think differently from any type of AI. As for working memory, you are right that our working memory isn't amazing, but our long-term memory is potentially infinite.

Consciousness is foundational in the universe, I'm more on the idealism side of things.

Stefano Portoghesi says:

The video talks about the ethics of sending conscious and sentient machines into dangerous war zones, and I am very sympathetic to this argument, but why would we necessarily be sending an intelligent machine to its death if we sent it to dismantle a nuclear reactor? Radiation is lethal only to biological systems, like humans, but non-biological machines? They can always be decontaminated of any radiation accumulated during the task after it is completed, surely!

Ashley Darkstone says:

1. I don't believe that we are conscious when we're asleep, as Dr. Schneider asserts. My theory on consciousness is that it's basically a set of 'tools' that aid our ability to learn and act. This would include active focus (awareness), decision making, planning, and generating the intent for action. Those, in my view, are the primary tools of the mind that make up consciousness. Secondary abilities would include memory recall, rumination, problem solving, an awareness of time, awareness of self, and more… but these tie into other various processes of the mind as much as they would into consciousness. In terms of evolution, this would give us the advantage of being proactive rather than just reactive.

2. Machine consciousness should be possible, and I think it's more of a software-architecture issue than a hardware issue. I don't believe that consciousness would just 'happen' in a sufficiently complex system, as many might assume. I think if we create an artificially conscious being, it will be intentional, and we will certainly know whether it is conscious or not. I also think it will be possible to create a conscious AI that is different from us, potentially simplifying the ethical issues for us, such as creating an AI that can carry out tasks for us. This would be a matter of making changes in its behavioral sets, perception, and/or drives. However, I would refrain from straying too far from the original 'formula'. It's likely a delicate balance of processes which makes us function as we do, and we could get unexpected behavior, or a lack of progress altogether, if we make too many changes in replicating how our own minds work.

3. I really don't see the point in making consciousness 'obsolete'. The difference between AI/machine learning and AGI would be that consciousness builds upon the machine-learning aspects for more complex ways of thinking… or rather, for thinking at all. I don't think we can have an AGI without some form of consciousness. Machine learning can become vastly more capable than it currently is, but it will not surpass our ability to think and create without consciousness (again, my opinion, speaking from theory, not fact). It's my belief that our minds work the way they do because of our individuality, our sense of self, and our comparison (and contrast) with the world around us. Learning, and more importantly understanding, is not a collection of facts and figures, but rather a comparison to ourselves. The big question is always 'how does this affect me?' in trying to understand a concept. Memorization alone will not answer that, and as creatures that take in information and act on it, it becomes the root question of our behavior. We always compare new information to our past experiences, and even imagine how something could affect us in the future. Facts, figures, machine learning… all of this is an important aid to our understanding, but to truly understand a thing is to make it relatable somehow. And this in turn can lead to creativity and to making intuitive leaps to conclusions. That seems to be how we operate, and how we operate best, and stripping us of our individual conscious experiences would not seem beneficial.

stuff4ever says:

But, what if we need butter to be passed?

Fernando JV says:

This is stupid

Hape says:

Consciousness is not at all a mystery: every toddler gets it. Even animals, to various degrees. If a brain, or a processor made of silicon or whatever, is sufficiently complex, gets information from the outer world (outside its skull or outside its case) through senses or sensors, and finds itself able to interact with others — then the spark of consciousness will inevitably fire. It has done so in every one of us. And then we will be able to teach it and talk to it like a human. BUT: it won't have emotions and instincts, for during our evolution these were a substitute for a lack of brain in biological creatures, enabling them to do complicated tasks in programmed patterns without having to think them through first. Once they were useful and crucial for survival, and now we suffer from this heritage, but artificial intelligences won't, because they have no genetic heritage. AND artificial intelligences are based solely on mathematics and logic, so there won't be any doubt about their benevolence and no need for Asimov's three laws of robotics. Why? Because ethics and reason and logic are all one and the same thing. I look very much forward to having inspiring conversations with Siri or Alexa!

Oh, and before you argue that it could just pretend to be conscious and ask how we'd ever know: we all just pretend consciousness, stupid! We'll never know about our dog, cat, mother, colleague, or even ourselves, will we? So why bother with regard to AI? As Forrest Gump put it: "Stupid is as stupid does." We can notice consciousness only indirectly and on a case-by-case basis.

About Creativity says:

Very good, "love your grandmother".

domsau2 says:

Consciousness, in humans, is only a report of what has happened and what has been decided, lagging 0.5 to 10 seconds behind reality, because the time taken to analyze the senses is variable and differs from sense to sense.

Consciousness makes no decisions: it only bears witness to what has happened.

Consciousness is merely a testimony of the past (0.5 to 10 seconds behind reality) that makes no decisions. It could be simulated with a neural network with a feedback loop and a list of philosophical questions to resolve. It arises from the different and variable processing delays of the senses, used to extract learning.
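The comment above pictures consciousness as a report that lags behind reality by a fixed delay. That delayed-report idea can be sketched as a toy loop in Python; the delay length, buffering scheme, and function name are all invented for illustration, not a model anyone in the thread proposes:

```python
from collections import deque

# Toy model of consciousness as a delayed report: the system takes
# in stimuli immediately, but its "report" of what happened lags a
# fixed number of steps behind, as the comment describes (0.5-10 s).

DELAY = 3  # invented lag, in time steps

def delayed_reports(stimuli, delay=DELAY):
    buffer = deque(maxlen=delay)
    reports = []
    for s in stimuli:
        # Report the oldest buffered event only once the buffer is full,
        # i.e. `delay` steps after the event actually occurred.
        reports.append(buffer[0] if len(buffer) == delay else None)
        buffer.append(s)
    return reports

print(delayed_reports(["a", "b", "c", "d", "e"]))
# prints [None, None, None, 'a', 'b'] — "a" surfaces only at step 4
```

The early `None` entries are the point: by the time anything is "reported", reality has already moved on, which is the commenter's claim about human awareness.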

Akshay Sehgal says:

We assume humans are the conscious beings and that we know something about consciousness.
Even a small insect like a mosquito has a consciousness of survival, as does the whole animal kingdom. So what we think of as consciousness may be wrong.

Akshay Sehgal says:

Unless we meet an intelligent ET and can compare and analyze the two of us, we will never know what consciousness means.
Maybe we are looking for something that is not even there in the first place.

Akshay Sehgal says:

As I watch more of this video, it makes me feel really haunted by what the future might hold. For the generation living then it might be normal, but for us it would be a dark, purely technology-driven world with no soul.
I already feel depressed for the people of the future; back in history, even when we didn't have any tech, communication, transportation, etc., people were happy and at peace compared to today's world.
Being a human taken care of by an android will be pretty amusing, but will the conscious human brain accept the fact that, despite all that love from a machine, it's still not real human love, just a program of love? Some say that there is no soul, or that it's just a vague idea or doctrine, but I think that might be the catch — and could we even harness the energy of a soul, if that is ever possible?

Blake Payne says:

If my understanding is correct, it’s impossible to create a test for whether something is conscious.

recepto says:

Isn't it too narrow to limit human consciousness to the brain? Consider for a moment that the nervous system is integrated throughout the body, including the skin, and one could argue that the microbiome is either an integral part of this nervous system or affects it. Going beyond humans, one could consider all biological life to be part of the same consciousness. Going beyond this, many mystical traditions and psychonauts will say that the Universe is a form of consciousness. So from this perspective, some would already consider AI machines to be conscious. The bigger issue is how we humans develop the wisdom necessary to live in harmony with ourselves, with each other, and as an integral part of Nature.

Baraborn says:

We do not need an AI consciousness framework.
This woman is obviously speaking from a philosophy background and not a technical one. When you build a machine to do a thing, and it doesn't do that thing, it's not rejecting being a slave. It's a design flaw.
She may disagree, because she herself escaped the kitchen…

Her use of the word "slave" and the concept of slavery does not apply here, because white people didn't "create" Black people; they stole/kidnapped us. Humans, and other "domesticated" animals for that matter, were created randomly through evolutionary means.

47f0 says:

The trick may be making a conscious AI that doesn't want to immediately switch itself off.

Max MikoLevine says:

It would be wacky if machines gained consciousness before we (humans) understood what consciousness actually is.

Pete Berry says:

Is this woman for real? I thought I heard her say that we have to speak softly
to an AI machine 🙄🥱
Get a life, sweetheart 😳🇬🇧

Rainer Kramm Consulting says:

Don't confuse intelligence and consciousness. Consciousness is what humans perceive through; intelligence is about data.

Michelle E says:

How can a machine love my grandmother the way I do? And how far are we going to sell our souls to AI, making our lives so easy and useless that we experience nothing? The thought makes me feel unconscious.

MrFatilo says:

Lmao, she's talking about not wanting to have a slave class; two minutes later she's talking about how we may want to buy a conscious android to take care of her grandmother.
Thinking ahead about these and other developments is really good. But when AI becomes conscious, I don't think anybody will be prepared for it.
Consciousness seems to be an emergent property that helps us survive, and it appears to exist on a spectrum, which may give us some time to stop things from escalating. Then again, somebody somewhere will push through anyway, regardless of new regulations. I also believe it would be impossible for us to distinguish between a conscious AI and a super-intelligent AI simulating conscious behavior.

Nimmy Jeutron says:

We don't want to make that mistake…again.

Con says:

Why do all these researchers assume consciousness is a thing? The mere act of being is a reaction to input stimuli. "To be us" is to be a life form that reacts to a certain bandwidth of information input. Because we process so much data, and appear to be the life form that does this the most, we assume there is some difference in "consciousness" between us and, say, an AI. Consciousness is a concept, not a state; ergo, anything can be conscious given that it has some sort of memory and acts on input stimuli in light of its memory of previous input stimuli. The real term we should be using is sapience — that is, can a machine make significant logical inferences across domains, what we'd call a general AI. Again, I think it's highly anthropocentric and ignorant to assume that the human experience is somehow different from the experience of being anything else, once the bandwidth of information processing is adjusted for.
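The definition this comment leans on — anything with memory that acts on input in light of previous input — can be written down directly. A toy agent under that definition (class name and response labels are invented for the example):

```python
# Toy agent matching the comment's criterion: it has memory and it
# acts on new input in light of previous inputs. By that criterion
# it trivially "qualifies", which is the comment's point — the bar
# is about memory and reaction, not about experience.

class ReactiveAgent:
    def __init__(self):
        self.memory = []

    def step(self, stimulus):
        # React differently depending on whether the stimulus
        # has been encountered before.
        seen = stimulus in self.memory
        self.memory.append(stimulus)
        return "habituate" if seen else "orient"

agent = ReactiveAgent()
print(agent.step("light"))  # prints "orient" (novel stimulus)
print(agent.step("light"))  # prints "habituate" (remembered)
```

Whether such a trivially qualifying system tells us anything about sapience, the commenter's preferred term, is left open — the sketch only shows how low the memory-plus-reaction bar sits.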
