As Artificial Intelligence (AI) becomes more prevalent and powerful in our society, it becomes essential to consider its moral consequences. How can a machine become moral? How do some of the deeper questions of morality, especially those surrounding religion, evolution, and reason, factor into this equation? This talk explores these and other questions in the hope of motivating further research on a project that has the power to doom or save the human race as we know it.

Devin is Chief Technology Officer at the DGC Group, providing gamification technologies and analytics to large corporate clients. He regularly employs machine learning techniques as part of the firm's data analytics services and in developing an online gamification platform. He graduated from Austin College in 2009 with a double major in Philosophy and Religion. After graduating, he spent six years in Tibetan areas of China working for a non-profit, teaching English at a university, and developing an English training school. Devin is currently pursuing his Master's degree in Machine Learning at Columbia, and regularly studies the philosophical and technological implications of Artificial Intelligence. He is especially drawn to non-Western (especially Buddhist) philosophical insights into the mind and how these may be relevant to the development of Artificial Intelligence.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

So we've talked a lot in this series about how computers fetch and display data, but how do they make decisions based on this data? From spam filters and self-driving cars to cutting-edge medical diagnosis and real-time language translation, there has been an increasing need for our computers to learn from data and apply that knowledge to make predictions and decisions. This is the heart of machine learning, which sits inside the more ambitious goal of artificial intelligence. We may be a long way from self-aware computers that think just like us, but with advancements in deep learning and artificial neural networks our computers are becoming more powerful than ever.
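The core loop described above, learning from labeled examples and then predicting labels for new data, can be sketched with the spam-filter example the video mentions. This is an illustrative toy (not code from the episode): a tiny word-count spam scorer in pure Python, in the spirit of naive Bayes with add-one smoothing.

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) with label 'spam' or 'ham'.
    Returns per-label word counts plus how many examples each label had."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(model, text):
    """Score each label by how often its training words appear.
    Add-one smoothing keeps unseen words from zeroing out a score."""
    counts, totals = model
    best_label, best_score = None, float("-inf")
    for label in counts:
        vocab = sum(counts[label].values()) + 1
        score = math.log(totals[label] / sum(totals.values()))  # prior
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / vocab)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train([
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
])
print(predict(model, "free money prize"))        # -> spam
print(predict(model, "notes for the meeting"))   # -> ham
```

Real spam filters use far larger vocabularies and more robust models, but the shape is the same: fit parameters to labeled data, then score new inputs.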

Produced in collaboration with PBS Digital Studios: http://youtube.com/pbsdigitalstudios

Want to know more about Carrie Anne?
https://about.me/carrieannephilbin

The Latest from PBS Digital Studios: https://www.youtube.com/playlist?list=PL1mtdjDVOoOqJzeaJAV15Tq0tZ1vKj7ZV

Want to find Crash Course elsewhere on the internet?
Facebook - https://www.facebook.com/YouTubeCrash...
Twitter - http://www.twitter.com/TheCrashCourse
Tumblr - http://thecrashcourse.tumblr.com
Support Crash Course on Patreon: http://patreon.com/crashcourse
CC Kids: http://www.youtube.com/crashcoursekids

Human-monitored security feeds are prohibitively expensive; why not get a Deep Science AI to watch your cameras and detect threats as they happen?

Subscribe to TechCrunch today: http://bit.ly/18J0X2e

TechCrunch is excited to announce the 19 startups pitching in the Startup Battlefield for TechCrunch Disrupt NYC 2017. Over the next three days on the most prestigious tech stage in the world, the Battlefield teams will compete for $50,000 and the coveted Disrupt Cup.

Watch more from Startup Battlefield here: https://www.youtube.com/playlist?list...

TechCrunch Disrupt is the world’s leading authority in debuting revolutionary startups, introducing game-changing technologies and discussing what’s top of mind for the tech industry’s key innovators.

AI: artificial neurons learn by trial and error. Prof. Jürgen Schmidhuber explains the development of AI up to now.
More Information on digital responsibility: www.telekom.com/digital-responsibility.

Science Documentary: Augmented Reality, Nanotechnology, Artificial Intelligence

Wearable computers have made a lot of news recently, with Google Glass, Meta's SpaceGlasses, and more. But the question is: how can we make them cheaper and smaller while increasing battery life? Infinity Augmented Reality has been trying to solve that very problem. They have provided the engine that translates your physical world into your virtual world.

Normal cameras are unable to sense the size of the objects they photograph, so Occipital has created a structure sensor, built on PrimeSense's 3D sensing technology, that gives metric distances to the surfaces of objects. So instead of sensing colors, it senses distances.

Uses for the PrimeSense 3D structure sensor include real estate applications, crime scene reconstruction, 3D scanning of people and objects, and augmented reality gaming. The structure sensor displays differences in distance by displaying different colors. It can also scan a 3D image that can be sent to a 3D printer, or scan an image that can be interacted with in a video game.
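The "distance displayed as color" idea above can be sketched in a few lines. This is my own illustration, not Occipital's code: the hypothetical `depth_to_color` function maps a metric distance to a color, near surfaces warm (red) and far surfaces cool (blue), which is how depth maps are commonly visualized.

```python
def depth_to_color(depth_m, near=0.5, far=4.0):
    """Map a metric distance in meters to an (r, g, b) byte triple.
    Distances are clamped to the sensor's [near, far] working range."""
    t = (min(max(depth_m, near), far) - near) / (far - near)  # 0..1
    return (int(255 * (1 - t)), 0, int(255 * t))              # red -> blue

# A tiny 2x3 depth map (meters per pixel), colored for display.
depth_map = [
    [0.6, 0.9, 2.0],
    [1.2, 2.5, 3.8],
]
colored = [[depth_to_color(d) for d in row] for row in depth_map]
```

A real sensor produces hundreds of thousands of such depth pixels per frame; the per-pixel mapping is the same.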

Large Scale Production of High Quality Nanomaterials

As the world's population grows, the number of people requiring health care grows as well, and nanotechnology promises to help in this area. Many nanomaterials with very interesting properties are currently being made in the lab, but because nanomaterials are so small, you need large quantities of them for them to be of any use. Nanomaterials are created using either a top-down or a bottom-up method. One bottom-up example is chemical vapor deposition, by which nanomaterials can be produced fairly quickly; it requires heat, a carbon precursor, and a metal catalyst. But the production of nanomaterials must be controlled, or they would be useless, and researchers have been successful in controlling nanomaterials like graphene, whose formation is influenced by various factors; growing graphene on copper is a standard method. Filling carbon nanotubes with magnetic material could be useful in the medical field for drug delivery. And by combining conductive materials with insulating materials, you could create efficient electronics, sensors, and more.

Artificial Intelligence and Robotics

In creating artificial intelligence, scientists have found that what they expected to be simple is actually quite hard, and what you would expect to be hard is actually quite simple: it is easier to replicate a great chess player than a small child. Multiplicity, the combining of the collective intelligence of humans and machines, is the idea of many people working together with groups of machines to solve problems.

Science Documentary: Augmented Reality,Virtual Reality,Wearable Computing
https://youtu.be/Xg4pKgXSJho

Science Documentary: 3D Printing, 3D Imaging, Ultra Fast Laser Imaging Technology
https://youtu.be/2ajmnPEhtJQ

Science Documentary: Genetics, Robotics, Quantum Computing, Artificial Intelligence
https://youtu.be/C5rEJURKgdM

Science Documentary: Graphene , a documentary on nanotechnology and nanomaterials
https://youtu.be/IUrqyuw-6Iw

Science Documentary: Nanotechnology,Quantum Computers, Cyborg Anthropology a future tech documentary
https://youtu.be/sCLnHKl0GT4

Science Documentary:Perfect lenses,smart textiles,biomedical sensors a documentary on nanotechnology
https://youtu.be/waRH1o0JOjs

Language. Easy for humans to understand (most of the time), but not so easy for computers. This is a short film about speech recognition, language understanding, neural nets, and using our voices to communicate with the technology around us.

Richard Socher (PhD Stanford) talks about his work in Artificial Intelligence (AI) at Salesforce.

This video was designed in conjunction with award-winning producer Patrick Sammon (co-producer of “Codebreaker”) to explain the benefits of pursuing a PhD in CS. This video showcases a young researcher with a PhD who is now working in industry as they talk about what compelled them to pursue a doctorate and how they are using their advanced training in their work. While many undergraduates understand that a PhD is needed for a position in academia, this video demonstrates how a PhD can be useful in industry as well.

During Uber Engineering's first Machine Learning Meetup on September 12, 2017, Franziska Bell explains how Uber's data science platforms increase the effectiveness and efficiency of its products and operations. Examples include its machine-learning-as-a-service platform (Michelangelo), forecasting platform (e.g., hardware capacity planning), and anomaly detection (e.g., predicting extreme events).

Read more about Michelangelo on the Uber Engineering Blog: https://eng.uber.com/michelangelo/

Read more about how we predict extreme events with neural networks on the Uber Engineering Blog: https://eng.uber.com/neural-networks/

Read more at BigThink.com:

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

I think right now everybody already perceives that this is the decade of AI. And there is nothing like artificial intelligence that drives the digitization of the world. Historically, artificial intelligence has always been the pioneer battalion of computer science.

When something was new and untested, it was done in the field of AI, because it was seen as something that requires intelligence in some way, a new way of modeling things. Intelligence can be understood, to a very large degree, as the ability to model new systems, to model new problems.

And so it’s natural that even narrow AI is about making models of the world. For instance, our current generation of deep-learning systems are already modeling things. They’re not modeling things quite in the same way, or with the same power, as human minds can: they’re mostly classifiers, not simulators of complete worlds. But they’re slowly getting there, and by making these models we are, of course, digitizing things. We are making things accessible in data domains. We are making these models accessible to each other by computers and by AI systems.

And AI systems provide extensions to all our minds. Already now, Google is something like my exo-cortex. It’s something that allows me to access vast resources of information that get integrated into the way I think and extend my abilities. If I forget how to use a certain command in a programming language, it’s there at my fingertips, and I entirely rely on this, like every other programmer on this planet. This is something that is incredibly powerful, and was not possible when we started out programming, when we had to store everything in our own brains.

I think consciousness is a very difficult concept to understand because we mostly know it by reference. We can point at it. But it’s very hard for us to understand what it actually is.

And I think at this point the best model that I’ve come up with of what we mean by consciousness is that it is a model of a model of a model.
That is: our neocortex makes a model of our interactions with the environment. And part of our neocortex makes a model of that model; that is, it tries to find out how we interact with the environment so we can take this into account when we interact with the environment. And then we have a model of this model of the model, which means we have something that represents the features of that model, and we call this the self.

And the self is integrated with something like an attentional protocol. So we have a model of the things that we attended to, the things that we became aware of: why we process things and why we interact with the environment. And this protocol, this memory of what we attended to, is what we typically associate with consciousness. So in some sense we are not conscious in actuality, in the here and now, because that’s not really possible for a process that needs to do many things over time in order to retrieve items from memory, process them, and do something with them.

Consciousness is actually a memory. It’s a construct that is reinvented in our brain several times a minute.

And when we think about being conscious of something it means that we have a model of that thing that makes it operable, that we can use.

You are not really aware of what the world is like. The world out there is some weird quantum graph. It’s something that we cannot possibly really understand, first of all because we as observers cannot really measure it. We don’t have access to the full vector of the universe.

What we get access to is a few bits that our senses can measure in the environment. And from these bits our brain tries to derive a function that allows us to predict the next observable bits.

So in some sense all these concepts that we have in our mind, all these experiences that we have—sounds, people, ideas and so on— are not features of the world out there. There are no sounds in the world out there, no colors and so on. These are all features of our mental representations. They’re used to predict the next set of bits that are going to hit our retina or our eardrums.
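The idea that the brain "derives a function that allows us to predict the next observable bits" can be sketched with a toy predictive model. This is my own illustration, not the speaker's: learn from a stream of observed bits which bit tends to follow which, then use those counts to predict the next observation.

```python
from collections import Counter

def learn_transitions(stream):
    """Count how often each bit (0 or 1) is followed by a 0 or a 1."""
    follows = {0: Counter(), 1: Counter()}
    for prev, nxt in zip(stream, stream[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, last_bit):
    """Predict the most frequently observed successor of last_bit
    (defaulting to 0 if this bit has never been seen before)."""
    counts = follows[last_bit]
    return max(counts, key=counts.get) if counts else 0

# An alternating "world": 0 is always followed by 1, and vice versa.
observed = [0, 1, 0, 1, 0, 1, 0, 1, 0]
model = learn_transitions(observed)
print(predict_next(model, observed[-1]))  # -> 1
```

The brain's predictive machinery is vastly richer, of course, but the point survives even in this toy: the model's internal counts are not the world itself, only a compression of past observations that is useful for predicting future ones.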

I think the main reason why AI was started was that it was a science to understand the mind. It was meant to take over where psychology stopped making progress. Sometime after Piaget, at this point in the 1950s, psychology was in the thrall of behaviorism, which means it focused only on observable behavior. And in some sense psychology has not fully recovered from this. Even now, “thinking” is not really a term in psychology, and we don’t have good ways to study thoughts and mental processes. What we study in psychology is human behavior, and in neuroscience we mostly study brains and nervous systems.

Elon Musk, Stuart Russell, Ray Kurzweil, Demis Hassabis, Sam Harris, Nick Bostrom, David Chalmers, Bart Selman, and Jaan Tallinn discuss with Max Tegmark (moderator) what likely outcomes might be if we succeed in building human-level AGI, and also what we would like to happen.

The Beneficial AI 2017 Conference: In our sequel to the 2015 Puerto Rico AI conference, we brought together an amazing group of AI researchers from academia and industry, and thought leaders in economics, law, ethics, and philosophy for five days dedicated to beneficial AI. We hosted a two-day workshop for our grant recipients and followed that with a 2.5-day conference, in which people from various AI-related fields hashed out opportunities and challenges related to the future of AI and steps we can take to ensure that the technology is beneficial.

You can find an audio balanced version of this panel here:

https://www.youtube.com/watch?v=OFBwz4R6Fi0

For more information on the BAI ‘17 Conference:

https://futureoflife.org/ai-principles/

https://futureoflife.org/bai-2017/

https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/

Data science holds the potential to impact our lives and how we work dramatically. Despite its promise, many questions about data science remain. How real is this emerging discipline? What opportunities and challenges does it present? How can Stanford nurture data science in research and education? Watch the video and hear some of Stanford's thought leaders debate the answers to these questions.

Panel Speakers:
- Hector Garcia-Molina
- Vijay Pande
- John Hennessy
- Euan Ashley

Stanford's YouTube channel:
https://www.youtube.com/watch?v=hxXIJnjC_HI