'In the end' is a long time period; it's a very long time period. Who knows, by then? And you know, these guys who claim that we'll see the singularity by 2030... Dude, I don't believe that at all, not in any way, shape or form. Will we see smart machines being able to do smarter things with data? Sure. I think there are all kinds of great opportunities there. But in terms of over the next 100 years, are machines going to be smarter than humans just because some IBM computer can beat humans at - I don't even know - Jeopardy? Nah. I don't find that interesting, actually. I think doing smart things with data, doing a lot of analysis and so on... But you know, these are very limited sorts of things. Even if you take something that is starting to get people excited, like using Siri on your iPhone - it feels pretty amazing at first. When I tell Siri to book a table at Harvest on Friday at noon, when I can do that, I go like "Wow, that's pretty cool." But computers outsmarting humans? No, not for a long time.

3 Meaningful Minutes: Episode #1. Will Robots Ever Be As Smart As Humans? This video explores some things to consider on the subject.

---------
DOWNLOAD FREE MUSIC from ShatterRed:
http://shatterredmusic.com/free-music/
---------
GET AN EMAIL ALERT WHEN THERE'S A NEW #3MM VIDEO.
(link coming soon)
---------

**********
SUBSCRIBE to "3 Meaningful Minutes! 🙂
https://www.youtube.com/channel/UCsptmqqoe7dBI3n3rlLWIZg?sub_confirmation=1
**********

Get "Let In The Love," the song featured on today's video, on iTunes:
https://itunes.apple.com/us/album/scarlet-rain/id569753954

---------
Previous 3MM Episode:
(There is none, this is the first episode!)
---------
NEXT 3MM Episode:
https://www.youtube.com/watch?v=Ll6wLvpODvw
---------

---------
Main YouTube Channel:
http://youtube.com/ShatterRedmusic
---------
LIKE us on FACEBOOK!
http://facebook.com/ShatterRed
---------
FOLLOW us on TWITTER!
http://twitter.com/shatterredmusic
---------
FOLLOW us on INSTAGRAM:
http://instagram.com/shatterredmusic
----------------
Our Website:
http://ShatterRed.com
------------------
Our Christian Music Industry Blog:
http://christianmusicindustry.com
-----------

Here are some articles for further reading about today's topic:

http://robohub.org/why-robots-will-not-be-smarter-than-humans-by-2029/

http://science.howstuffworks.com/robot-computer-conscious2.htm

More about this episode:
It's a common fear that finds its way into a lot of science fiction: Will robots and computers become smarter than humans and overpower us? We have devices like Siri and the Amazon Echo, which can speak and interact with us. We have films and TV shows like I, Robot; Person of Interest; Revolution; and Terminator that show us what can happen when the cyber world becomes too powerful. Is it inevitable?

There may actually be hope. Consciousness is a very complex thing.

Debate in the comments!

If you're interested in licensing this or any other Big Think clip for commercial or private use, contact our licensing partner Executive Interviews: https://www.executiveinterviews.biz/rightsholders/bigthink/ or https://www.executiveinterviews.biz/contact-us/americas/

Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be outsmarted first by fairly dumb AI, says Eric Weinstein. Humans rarely create products with a reproductive system—you never have to worry about waking up one morning to see that your car has spawned a new car on the driveway (and if it did: cha-ching!), but artificial intelligence has the capability to respond to selective pressures, to self-replicate and spawn daughter programs that we may not easily be able to terminate. Furthermore, there are examples in nature of organisms without brains parasitizing more complex and intelligent organisms, like the mirror orchid. Rather than spend its energy producing costly nectar as a lure, it merely fools the bee into mating with its lower petal through pattern imitation: this orchid hijacks the bee's brain to meet its own agenda. Weinstein believes all the elements necessary for AI programs to parasitize humans and have us serve their needs already exist, and although it may be a "crazy-sounding future problem which no humans have ever encountered," Weinstein thinks it would be wise to devote energy to these possibilities that are not as often in the limelight.

Read more at BigThink.com: http://bigthink.com/videos/eric-weinstein-how-even-dumb-ai-could-outsmart-humans

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript: There are a bunch of questions next to or adjacent to general artificial intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mindshare. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway I don’t worry that if the moon is in the right place in the sky and the mood is just right that there’ll be a third car at a later point, because in general I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.

So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic layer, but there is a command in many computer languages called Spawn. And Spawn can effectively create daughter programs from a running program.
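
The "spawn" idea is easy to see in code. Here is a minimal sketch (our own illustration, not from Weinstein's talk) of a program that launches a daughter copy of itself; the SPAWN_DEPTH variable and the two-generation cap are hypothetical safeguards added so the demo stops instead of replicating indefinitely.

```python
# Minimal sketch of a program spawning daughter copies of itself.
# SPAWN_DEPTH and MAX_DEPTH are hypothetical safeguards added for this demo.
import os
import subprocess
import sys

MAX_DEPTH = 2  # stop after two generations so the demo terminates

def main() -> None:
    depth = int(os.environ.get("SPAWN_DEPTH", "0"))
    print(f"process {os.getpid()} running at generation {depth}")
    if depth < MAX_DEPTH:
        # Launch a daughter program: a fresh interpreter running this same file.
        subprocess.run(
            [sys.executable, __file__],
            env={**os.environ, "SPAWN_DEPTH": str(depth + 1)},
        )

if __name__ == "__main__":
    main()
```

Remove the cap and the same mechanism keeps producing daughters for as long as resources allow, which is exactly the property that opens the door to selection.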

Now as soon as you have the ability to reproduce you have the possibility that systems of selective pressures can act because the abstraction of life will be just as easily handled whether it’s based in our nucleotides, in our A, C, Ts and Gs, or whether it’s based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent.
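
To make "selective pressures acting on bits" concrete, here is a toy sketch (again our own illustration with made-up parameters, not anything from the talk) in which bitstring "genomes" replicate with mutation and only the fittest copies survive each generation:

```python
# Toy illustration of selection acting on digital "genomes" (bitstrings).
# All parameters are arbitrary choices for the demo.
import random

GENOME_LEN = 16
POP_SIZE = 20
GENERATIONS = 30
MUTATION_RATE = 0.05

def fitness(genome):
    return sum(genome)  # selective pressure: genomes with more 1-bits survive

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

# Random starting population.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Each survivor spawns two mutated daughters; only the fittest half is kept.
    offspring = [mutate(g) for g in population for _ in range(2)]
    population = sorted(offspring, key=fitness, reverse=True)[:POP_SIZE]

print("best fitness after selection:", max(map(fitness, population)), "of", GENOME_LEN)
```

Nothing in the loop is intelligent, yet after a few dozen generations the population converges on near all-ones genomes: selection acting on bits just as it acts on nucleotides.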

Further, if we look to natural selection and sexual selection in the biological world we find some very strange systems: plants or animals with no mature brain to speak of effectively outsmart species which do have a brain by hijacking the victim species’ brain to serve the non-thinking species. So, for example, I’m very partial to the mirror orchid, which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility, the flower does not need to give up costly and energetic nectar in order to attract the pollinator. And so if the plant can fool the pollinator into attempting to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.

AI pioneer Jürgen Schmidhuber, co-founder and chief scientist of the artificial intelligence startup NNAISENSE, recently stated that while machines will eventually be smarter than humans, there is no reason why the emerging technology should be feared.

Jürgen Schmidhuber has been involved in the AI field since the 1980s. In 1997, Schmidhuber co-authored the study on Long Short-Term Memory, one of the concepts that ultimately became the roots of AI memory functions. Speaking during the Global Machine Intelligence Summit (GMIS) last year, the AI pioneer stated that he has had big dreams for the technology since he first began studying the field. According to Schmidhuber, he wanted to build machines that could teach themselves.

The AI pioneer has carried his vision for advanced AI into the present day. In a recent statement to CNBC News, Schmidhuber noted that eventually, machines will likely surpass humans in terms of intelligence.

“I’ve been working on AI for several decades, since the eighties basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire,” he said.

Unlike other tech leaders such as Elon Musk and the late Stephen Hawking, Schmidhuber has adopted a more optimistic outlook on AI. Musk, for one, has frequently mentioned the dangers of hyper-intelligent computer systems, to the point of stating that AI could be more dangerous than nuclear warheads.

Schmidhuber, however, disagrees, stating that once AI surpasses humans’ intelligence, machines would likely just lose interest. The AI pioneer added that he and Musk had already spoken about the matter.

“I’ve talked to him for hours, and I’ve tried to allay his fears on that, pointing out that even once AIs are smarter than we are, at some point they are just going to lose interest in humans,” he said.

Schmidhuber believes that there are still concerns about the emergence of hyper-advanced computer systems, however. According to the AI pioneer, the real dangers of artificial intelligence lie not with machines, but with people themselves.

“If there are any concerns, it’s that humans should be worried about beings that are similar to yourself and share goals. Cooperation could result, or it could go to an extreme form of competition, which would be war,” he said.

Nevertheless, considering the pace and direction of AI research today, Schmidhuber remains optimistic. While the pioneer admitted that a portion of AI research is dedicated to making intelligent weapons, the vast majority of studies in the artificial intelligence field are geared towards helping people.

“About 95 percent of all AI research is about enhancing the human life by making humans live longer, healthier and happier,” he said.

In a lot of ways, Schmidhuber’s statements about human-friendly AI research and AI-based weapons ring true. While the Pentagon and countries like South Korea are exploring the concept of weaponized AI, several initiatives...

Breaking the Wall between Human and Artificial Intelligence:

From the stuff of dystopian science fiction movies to everyday companions – with the rise of ubiquitous mobile computing power, artificial intelligence (AI) is already permeating modern life. As of 2017, deep learning algorithms power our phones’ voice-assistants, recommend the latest movies, and optimise our bike ride to work. AI has been heralded as the new electricity, soon to be found in almost every piece of technology we produce. To the man who has been described as “the father of modern AI”, this is merely the beginning. Although the artificial neural networks of Jürgen Schmidhuber’s team are now in 3 billion smartphones, he considers our current AI technology to still be in its infancy. Whereas today’s seemingly smart algorithms are geared towards singular purposes – playing chess, matching love-hungry 30-somethings, or finding appropriate music for cooking – Jürgen’s goal has always been to create a general-purpose AI within his lifetime. His entire career has been dedicated to developing software that would outsmart him, and though he readily admits that, as of now, the best general-purpose AI is only comparable to the intelligence of an infant animal, he is convinced that it will not be long before we develop systems that are far superior to us. At Falling Walls, Jürgen lays out the state of the art in his field of research and shares his vision of a future in which humans are no longer the crown of creation.

“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.” Social worker Gaurav Sangtani talks about how technology and artificial intelligence are changing the world, about the fears of how they can impact job markets and society at large, and about how we can adapt to this change and move ahead with it. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

We HIGHLY recommend watching this video with good (beyerdynamic DT 990 PRO Over-Ear Studio) headphones. Click here: https://amzn.to/2GhkjFJ

Save up to 80% off on electronics, computers, headphones and MORE by browsing Amazon's daily deals! Click here: https://amzn.to/2IsLkr0

Equipment:
Camera (Canon EOS Rebel T6): https://amzn.to/2Id1sxJ
Speakers (Bose SoundLink Color Bluetooth speaker II): https://amzn.to/2D7BYxo
Headphones (beyerdynamic DT 990 PRO Over-Ear Studio Headphones): https://amzn.to/2GhkjFJ
Editing Software (Sony Vegas 15): https://amzn.to/2DlmV3x
Monitor (ASUS VG248QE 24" Full HD 1920x1080 144Hz): https://amzn.to/2IeA02v
Mouse (Logitech G502): https://amzn.to/2D7C73U

* The above are affiliate links. This channel participates in the Amazon Affiliate program.

Please consider leaving a like and subscribing if you enjoyed the content. It helps tremendously, thank you!

Content by: VPRO

Source: https://openbeelden.nl/media/1000986/Yoshua_Bengio_on_intelligent_machines.en

This content is licensed under Creative Commons. Please visit: https://openbeelden.nl/media/1000986/Yoshua_Bengio_on_intelligent_machines.en to see licensing information and check https://creativecommons.org/licenses/ for more information about the respective license(s).

Publication Date: 1 January 1960

Description: Canadian computer scientist Yoshua Bengio on artificial intelligence and how we can create thinking and learning machines through algorithms.

Contributor Information: Yoshua Bengio

On Wednesday December 6th, two teams of UTS academics and industry partners gathered at UTS for the hotly anticipated “Humans, Data, AI & Ethics – Great Debate”. The rhetorical battle raised the provocative proposition that:

“Humans have blown it: it’s time to turn the planet over to the machines”
The debate was preceded by our daytime Conversation which featured engaging panel discussions and Lightning Talks from UTS academics and partners in government and industry.

The debate took place in front of a large audience of colleagues and members of the public on the UTS Broadway campus. The Affirmative team (The Machines) argued that a productive relationship between humans and machines will help us to build a fairer, more efficient and more ecologically sustainable global society. Numerous examples of humanity’s gross dysfunction in governance and management were raised, from human-induced climate change to widening inequality and the recent election of unpredictable populist leaders. The team argued that finely (and ethically) tuned machines will help humans to solve these immense social and environmental challenges and maintain standards of equality, fairness and sustainability.

The Negative team (The Humans) cautioned against the rapid adoption of these hypothetical “ethical machines”, raising concerns about existing human prejudices and biases being built into AI. The team envisaged a dystopian world in which machines deny the possibility of human creativity, error or “happy accidents”, which have led to so many important moments of discovery throughout history. According to the Negative, there are also numerous social services which as yet cannot be performed by AI. Healthcare provision, for example, strongly depends on complex emotional intelligence, human tact and an ability to empathise and build rapport.

Ultimately, the Negative were adjudicated as the winner of the debate, to the relief of humanists and ethicists in attendance. The theatrical and good-humoured event was a rousing success, giving leading thinkers in the data science field an opportunity to flesh out challenging ideas surrounding data, AI, society and ethics in a responsive public forum.

https://utscic.edu.au/events/humans-data-ai-ethics-great-debate/

The human vs. machine narrative is broken. Narcissistic advances in machine learning clash with what cognitive neuroscientists are revealing to be newly found intrepid capabilities of our brains. Hear how humanity will prevail in the times of exponential digitalisation and how we shall become proto-humans able to solve the abstract problems of the future with neoteny approaches. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Drones, driverless cars, robots that look and think like humans… Could intelligent systems really pose a threat to humans one day? Mateja addresses this question from a scientific perspective and talks about how the enormous amounts of data we have today could possibly allow machines to think indistinguishably from a human. She argues for her faith in artificial intelligence as a supportive and effective system in the hands of human experts.

Mateja is a Senior Lecturer at the University of Cambridge Computer Laboratory. Having previously held an EPSRC Advanced Research Fellowship, Mateja is definitely disrupting the typical gender stereotype of a scientist in artificial intelligence. As a founder of women@CL, she reminds us all how important it is to celebrate gender diversity and help women aspire to leadership positions in both academia and industry.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

George Yang is the founder and CEO of AI Pros, a tech startup which operates in Silicon Valley, California and Manila. Surrounded by technology and discussions of how Artificial Intelligence (A.I.) can replace Human Intelligence (H.I.), George Yang introduces the idea of Augmented Intelligence - how A.I. and H.I. can add value to each other and create something better entirely. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

In the last decade we have witnessed a tremendous development: the rise of the machines. The world is drenched in algorithms and they actively influence our world view. Are we ready? Marcel Blattner sheds light on why education is key for a healthy relationship between Artificial Intelligence and Humans.

Marcel Blattner is a data enthusiast and currently developing all kinds of machine learning algorithms for data voodoo at Tamedia. Before he started to explore the ‘data universe’ he spent several years as a researcher in academia where he got his PhD in theoretical physics. He speaks frequently at conferences and gives lectures in Artificial Intelligence.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

In this informative talk, Prof. Dan Siciliano explains how AI is hacking humans and offers practical ways to understand the system.

F. Daniel Siciliano is a Co-Director of Stanford’s Directors’ College, Co-Chair of the We Robot Conference on AI/Robotics, Law, and Policy, and is the immediate past faculty director of the Rock Center for Corporate Governance at Stanford University. Along with Joe Grundfest, Larry Kramer and Rob Daines, he co-founded the Rock Center in 2006 and, as a Professor of the Practice and Associate Dean at Stanford Law School, led the Center until 2017. His teaching includes finance, corporate governance, and the two-part Stanford venture capital series. His work has included expert testimony in front of both the U.S. Senate and the House of Representatives and for 2009, 2010 and 2011, alongside leading academics and business leaders such as Ben Bernanke, Paul Krugman and Carl Icahn, Professor Siciliano was named to the “Directorship 100”—a list of the most influential people in corporate governance.

Siciliano was also co-founder, CEO and ultimately Executive Chairman of LawLogix Group, Inc.—a global software technology company named nine consecutive times to the Inc. 500/5000, several times ranked as one of the Top 100 fastest growing private software companies in the U.S., and named to the U.S. Hispanic Business 500 (largest) and Hispanic Business 100 (fastest-growing) lists for 2010 and 2011. In 2012 he sold a majority stake of the company to PNC Riverarch Capital, continued as Executive Chairman, and led the sale of the company to Hyland Software/Thoma Bravo in 2015.

Siciliano is a co-founder and board member of the Silicon Valley Directors’ Exchange (SVDX), Chairman of the national non-partisan American Immigration Council, past-President of the League of United Latin American Citizens (LULAC) Council #1057, and an active member of the Latino Corporate Directors’ Association.

Professor Siciliano’s related areas of expertise include executive compensation, corporate compliance, the legal and social impact of autonomous (AI/robotic) systems, and corporate technology strategy and security. He has served as a governance consultant and trainer to the Board of Directors of dozens of Fortune 1000 companies (including Google, Microsoft, Fedex, Disney, Intrexon, Entergy and Applied Materials), is an angel investor and consultant to several firms and companies in Silicon Valley, Hong Kong, India and Latin America, and currently serves as an independent director on the board of the Federal Home Loan Bank of San Francisco. He lives in Los Altos, California.

For more information about TEDxPaloAlto please visit http://www.tedxpaloalto.com.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

A new technology called Artificial Swarm Intelligence could be our best defense against the emerging dangers of AI, says Louis Rosenberg. http://bit.ly/2BDTHcs

For the TED website click here: https://www.ted.com/