Darren Hendler and Doug Roble from the Digital Human Group at Digital Domain share how they are making real-time photo-realistic digital humans possible using machine learning, Unreal Engine and NVIDIA RTX GPUs. Learn more: https://blogs.nvidia.com/blog/2019/10/10/machine-learning-digital-humans/

Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast.

INFO:
Podcast website:
https://lexfridman.com/ai
iTunes:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/category/ai/feed/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 – Introduction
1:14 – Contact with Aliens in the 21st century
6:36 – Multiverse and Nirvana
9:46 – String Theory
11:07 – Einstein’s God
15:01 – Would aliens hurt us?
17:34 – What would aliens look like?
22:13 – Brain-machine interfaces
27:35 – Existential risk from AI
30:22 – Digital immortality
34:02 – Biological immortality
37:42 – Does mortality give meaning?
43:42 – String theory
47:16 – Universe as a computer and a simulation
53:16 – First human on Mars

CONNECT:
– Subscribe to this YouTube channel
– Twitter: https://twitter.com/lexfridman
– LinkedIn: https://www.linkedin.com/in/lexfridman
– Facebook: https://www.facebook.com/lexfridman
– Instagram: https://www.instagram.com/lexfridman
– Medium: https://medium.com/@lexfridman
– Support on Patreon: https://www.patreon.com/lexfridman

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2020/04/27/94-stuart-russell-on-making-artificial-intelligence-compatible-with-humans/

Mindscape Podcast playlist: https://www.youtube.com/playlist?list=PLrxfgDEc2NxY_fRExpDXr87tzRbPCaA5x

Patreon: https://www.patreon.com/seanmcarroll

#podcast #ideas #science #philosophy #culture #ai #artificialintelligence

Artificial intelligence has made great strides of late, in areas as diverse as playing Go and recognizing pictures of dogs. We still seem to be a ways away from AI that is intelligent in the human sense, but it might not be too long before we have to start thinking seriously about the “motivations” and “purposes” of artificial agents. Stuart Russell is a longtime expert in AI, and he takes extremely seriously the worry that these motivations and purposes may be dramatically at odds with our own. In his book Human Compatible, Russell suggests that the secret is to give up on building our own goals into computers, and instead to program them to figure out our goals by actually observing how humans behave.
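As a loose, illustrative sketch of that idea (not Russell’s own formulation or code; the options, observations and reward hypotheses below are all invented for the example), a machine can score candidate goals by how well they explain observed human choices under a simple Boltzmann-rational model:

import math

# Hypothetical options a human repeatedly chooses among, and the observed choices.
options = ["coffee", "tea", "water"]
observed_choices = ["coffee", "coffee", "tea", "coffee", "water", "coffee"]

# Candidate reward functions: competing hypotheses about what the human values.
candidate_rewards = {
    "likes_caffeine":  {"coffee": 2.0, "tea": 1.0, "water": 0.0},
    "likes_hydration": {"coffee": 0.0, "tea": 1.0, "water": 2.0},
}

def choice_log_likelihood(reward, choices, beta=1.0):
    """Log-probability of the observed choices if the human picks options
    with probability proportional to exp(beta * reward)."""
    z = sum(math.exp(beta * reward[o]) for o in options)
    return sum(math.log(math.exp(beta * reward[c]) / z) for c in choices)

# Keep the goal hypothesis that best explains the observed behaviour.
best = max(candidate_rewards,
           key=lambda name: choice_log_likelihood(candidate_rewards[name], observed_choices))
print("Inferred goal hypothesis:", best)  # -> likes_caffeine

The point of the toy is only that the goal is inferred from behaviour rather than hard-coded; Russell’s actual proposal involves uncertainty over human preferences and assistance games, which this sketch does not capture.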

Stuart Russell received his Ph.D. in computer science from Stanford University. He is currently a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley, as well as an Honorary Fellow of Wadham College, Oxford. He is a co-founder of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. He is the author of several books, including (with Peter Norvig) the classic text Artificial Intelligence: A Modern Approach. Among his numerous awards are the IJCAI Computers and Thought Award, the Blaise Pascal Chair in Paris, and the World Technology Award. His new book is Human Compatible: Artificial Intelligence and the Problem of Control.

Are machines getting smarter than us? With greater computational power and ever more sophisticated artificial intelligence comes a whole new way of understanding the world. Now that our machines are capable of analysing data from so many different sources, taking into consideration countless variables and possibilities, it is becoming increasingly difficult for humans to understand how they come to their conclusions. How will we cope when machines understand the world better than us, and make better decisions than we can?
We’ve always been engaged in the pursuit of knowledge, believing that studying the world will lead us to a better understanding of how it works, so will we be happy to not ‘know’? This signifies a change in our basic model of the future, representing a shift away from simple cause-and-effect models to larger, more complex models.
As the guest in this Live Show puts it, “The world didn’t happen to be designed, by God or by coincidence, to be knowable by human brains. The nature of the world is closer to the way our network of computers and sensors represent it than how the human mind perceives it.” Join us to hear more from David Weinberger about how the nature of knowledge has shifted, and what we should do about it.

Max Tegmark @ Norrsken Foundation, Stockholm sharing his take on life, ethics and the potential of artificial intelligence in breaking the shackles of evolution in Leaps Talk #3 ‘Hacking Humans’.

In our 3rd Leaps Talk, ‘Hacking Humans’, Jamie Metzl, Max Tegmark, and Elaine Weidman-Grunewald explored the ethical implications and state of evolution of gene-editing and AI. In this fascinating discussion on gene editing, Tegmark poses the philosophical questions: “What do we want to be in the future? What do we want it to mean to be human?”

Tune into the full Leaps Talk #3 here: https://youtu.be/GJ53YSDFLF4

Speakers:
Max Tegmark – Physicist, Cosmologist & author of Life 3.0: Being Human in the Age of Artificial Intelligence
Jamie Metzl – Technology Futurist & author of Hacking Darwin: Genetic Engineering and the Future of Humanity

Moderator:
Elaine Weidman-Grunewald – Co-founder, AI Sustainability Center

About Norrsken Foundation:
Norrsken is a non-religious, non-partisan, non-profit Foundation with a strong belief in Effective Altruism. Their aim is a world optimised for people and planet. Norrsken Foundation and Leaps by Bayer both strongly believe that fostering innovations with impact and a greater purpose for humanity will ultimately lead to returns – both socially and financially. The Norrsken House is a coworking space for startups striving to contribute to the greater good and is financed by the Norrsken Founders Fund.

About Leaps Talks:
Leaps Talks bring together leading minds from distinct disciplines to discuss the ethics, opportunities, and challenges in biotech innovation. Breakthroughs in biotech could change the world for the better, yet raise important ethical questions that can’t be answered by scientists and investors alone. Leaps by Bayer believes it’s critical to engage society in this dialogue and works with partners worldwide to stimulate these discussions.

About us:
Leaps by Bayer invests in paradigm-shifting advances in the life sciences – targeting the breakthroughs that could fundamentally change the world for the better. Our approach is making significant and sustained investments in disruptive biotechnologies that could have the greatest impact on humanity.

Connect with us:
https://leaps.bayer.com/
https://twitter.com/leapsbybayer
https://linkedin.com/company/leapsbybayer

Artificial intelligence is a double-edged sword, so how do we benefit from the upside without having to deal with the downside? Max Tegmark in Stockholm at Leaps Talk #3 on why the conversations surrounding artificial intelligence must be opened up as it increases in power and capacity across multiple industries.

‘What do we want to be in the future? And what do we want it to mean to be human?’ – Max Tegmark on gene-editing at Leaps Talk #3 in Stockholm.

Tune into the full Leaps Talk #3 here – https://youtu.be/GJ53YSDFLF4

Jamie Metzl and Max Tegmark share their ideas on how biotech can accelerate safely and ethically, moderated by Elaine Weidman-Grunewald.

A collaboration with Norrsken Foundation.

Tune into the full Leaps Talk #3 here: https://youtu.be/GJ53YSDFLF4

Speakers:
Jamie Metzl – Technology Futurist & author of Hacking Darwin: Genetic Engineering and the Future of Humanity
Max Tegmark – Physicist, Cosmologist & author of Life 3.0: Being Human in the Age of Artificial Intelligence
Moderator:
Elaine Weidman-Grunewald – Co-founder, AI Sustainability Center

About Norrsken Foundation:
Norrsken is a non-religious, non-partisan, non-profit Foundation with a strong belief in Effective Altruism. Their aim is a world optimised for people and planet. Norrsken Foundation and Leaps by Bayer both strongly believe that fostering innovations with impact and a greater purpose for humanity will ultimately lead to returns – both socially and financially. The Norrsken House is a coworking space for startups striving to contribute to the greater good and is financed by the Norrsken Founders Fund.

About Leaps Talks:
Leaps Talks bring together leading minds from distinct disciplines to discuss the ethics, opportunities, and challenges in biotech innovation. Breakthroughs in biotech could change the world for the better, yet raise important ethical questions that can’t be answered by scientists and investors alone. Leaps by Bayer believes it’s critical to engage society in this dialogue and works with partners worldwide to stimulate these discussions.

About us:
Leaps by Bayer invests in paradigm-shifting advances in the life sciences – targeting the breakthroughs that could fundamentally change the world for the better. Our approach is making significant and sustained investments in disruptive biotechnologies that could have the greatest impact on humanity.

Connect with us:
https://leaps.bayer.com/
https://twitter.com/leapsbybayer
https://de.linkedin.com/company/leapsbybayer

My Instagram: @a.salehin.t
This channel is about my journey to becoming a successful entrepreneur. I will be documenting all the big highlights of my life that help me get closer to my goals, and I will try to bring as much value to you as I can through my videos. These are the people who inspired me to start my journey and begin taking small steps towards my dreams.

Instagram: @garyvee @babin @mattdavella @thaddy @tannerplanes @sebb @bowles @tombilyeu @yestheory @seancannell @mike_thesaiyan @sam_kolder @daniels_laizans and more.

I really hope you find your inspiration somehow and start chasing your dreams ASAP. Take this one thing in and fully internalize it: “AT THE BEGINNING NO ONE WILL SUPPORT YOU AND EVERYONE WILL PUT YOU DOWN SUBCONSCIOUSLY OR CONSCIOUSLY BUT IF YOU CAN WALL THINGS WILL GET REMARKABLE”
I have nothing but respect and empathy for the people who start taking baby steps towards their dreams even when no one is with them.

Jamie Metzl and Max Tegmark in ‘Hacking Humans’: a unique conversation about gene-editing and AI as part of Leaps Talk #3 at the renowned Norrsken House in Stockholm. Moderated by Elaine Grunewald.

How do we bring our ethics and values up to the level of advancement in technology and biosciences today? Jamie Metzl and Max Tegmark come together to share their ideas on how the rapid development of gene-editing and artificial intelligence needs to be guided by standards, ethics, and shared values in order to achieve breakthroughs safely. Moderated by Elaine Weidman-Grunewald.

A collaboration with Norrsken Foundation.

Speakers:
Jamie Metzl – Technology Futurist & author of Hacking Darwin: Genetic Engineering and the Future of Humanity
Jamie Metzl is one of the world’s leading futurists who specializes in making revolutionary science and its implications understandable to and actionable for individuals, companies, and governments. A frequent CNN commentator, he speaks regularly at some of the world’s most prestigious venues including SXSW, Google Zeitgeist, and Singularity University.

Max Tegmark – Physicist, Cosmologist & author of Life 3.0: Being Human in the Age of Artificial Intelligence
Max (Mad Max) Tegmark is a Professor at MIT. He’s been called ‘the smartest Swede in the world’ and is at the heart of a spider’s web of brilliant minds who are trying to make sure that humanity will not be overrun by artificial intelligence (AI). Two years ago Elon Musk ploughed $10 million into his AI research network, the Future of Life Institute.

Moderator:
Elaine Weidman-Grunewald – Co-founder, AI Sustainability Center
Elaine Weidman-Grunewald is Co-Founder of the AI Sustainability Center, a multi-disciplinary center for responsible and purpose-driven technology based on Nordic values. She is an expert on global sustainability and development, fields in which she has worked for over two decades in the private sector. Focusing on digitalization and sustainable development challenges, she pioneered the concept of Technology for Good.

About Norrsken Foundation:
Norrsken is a non-religious, non-partisan, non-profit Foundation with a strong belief in Effective Altruism. Their aim is a world optimised for people and planet. Norrsken Foundation and Leaps by Bayer both strongly believe that fostering innovations with impact and a greater purpose for humanity will ultimately lead to returns – both socially and financially. The Norrsken House is a coworking space for startups striving to contribute to the greater good and is financed by the Norrsken Founders Fund.

About Leaps Talks:
Leaps Talks bring together leading minds from distinct disciplines to discuss the ethics, opportunities, and challenges in biotech innovation. Breakthroughs in biotech could change the world for the better, yet raise important ethical questions that can’t be answered by scientists and investors alone. Leaps by Bayer believes it’s critical to engage society in this dialogue and works with partners worldwide to stimulate these discussions.

About us:
Leaps by Bayer invests in paradigm-shifting advances in the life sciences – targeting the breakthroughs that could fundamentally change the world for the better. Our approach is making significant and sustained investments in disruptive biotechnologies that could have the greatest impact on humanity.

Connect with us:
https://leaps.bayer.com/
https://twitter.com/leapsbybayer
https://linkedin.com/company/leapsbybayer

MIT professor emeritus and Rethink Robotics’ founder Rodney Brooks, Carnegie Mellon’s Abhinav Gupta, and MIT’s Andrew McAfee join Nicholas Thompson, editor at NewYorker.com, to discuss artificial intelligence (AI) and robot technology, and their economic impact on industry and society in the future.

The Malcolm and Carolyn Wiener Annual Lecture on Science and Technology addresses issues at the intersection of science, technology, and foreign policy.
Speakers:
Rodney Brooks, Panasonic Professor of Robotics (Emeritus), Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology; Founder, Chairman, and Chief Technology Officer, Rethink Robotics
Abhinav Gupta, Assistant Research Professor, Robotics Institute, Carnegie Mellon University
Andrew McAfee, Principal Research Scientist and Cofounder, Initiative on the Digital Economy, Sloan School of Management, Massachusetts Institute of Technology

Presider:
Nicholas Thompson, Editor, “NewYorker.com”

New videos DAILY: https://bigth.ink

Join Big Think Edge for exclusive video lessons from top thinkers and doers: https://bigth.ink/Edge

———————————————————————————-

Richard Dawkins has made a career out of hypothesizing and articulating ideas that move the world forward, insomuch as many of those ideas could be called “ahead of their time.” Having said that, he tells us here that we might be living in the dawn of not just artificial intelligence but of a silicon civilization that will look back on this time period as the dawn of their kind. They could one day, Dawkins suggests, study us the same way that we studied other beings that once ruled the earth. Sound crazy? Open your mind and think about it. Dawkins isn’t that far off from a potential actuality on this planet. Richard Dawkins’ new book is Science in the Soul: Selected Writings of a Passionate Rationalist.

———————————————————————————-

RICHARD DAWKINS

Richard Dawkins is an evolutionary biologist and the former Charles Simonyi Professor of the Public Understanding of Science at Oxford University. He is the author of several of modern science’s essential texts, including The Selfish Gene (1976) and The God Delusion (2006). Born in Nairobi, Kenya, Dawkins eventually graduated with a degree in zoology from Balliol College, Oxford, and then earned a master’s degree and a doctorate from Oxford University. He has recently left his teaching duties to write and manage his foundation, The Richard Dawkins Foundation for Reason and Science, full-time.

———————————————————————————-

FOLLOW BIG THINK:

📰BigThink.com: https://bigth.ink
🧔Facebook: https://bigth.ink/facebook
🐦Twitter: https://bigth.ink/twitter
📸Instagram: https://bigth.ink/Instragram
📹YouTube: https://bigth.ink/youtube
✉ E-mail: info@bigthink.com

———————————————————————————-

TRANSCRIPT:

Richard Dawkins: When we come to artificial intelligence and the possibility of their becoming conscious we reach a profound philosophical difficulty. I am a philosophical naturalist. I am committed to the view that there’s nothing in our brains that violates the laws of physics, there’s nothing that could not in principle be reproduced in technology. It hasn’t been done yet, we’re probably quite a long way away from it, but I see no reason why in the future we shouldn’t reach the point where a human made robot is capable of consciousness and of feeling pain. We can feel pain, why shouldn’t they? 

And this is profoundly disturbing because it kind of goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so this moral consideration of how to treat artificially intelligent robots will arise in the future, and it’s a problem which philosophers and moral philosophers are already talking about.

Once again, I’m committed to the view that this is possible. I’m committed to the view that anything that a human brain can do can be replicated in silicon. 

And so I’m sympathetic to the misgivings that have been expressed by highly respected figures like Elon Musk and Stephen Hawking: that, on the precautionary principle, we ought to be worried about a takeover, perhaps even by robots of our own creation, especially if they reproduce themselves and potentially even evolve by reproduction and don’t need us anymore.

This is a science-fiction speculation at the moment, but I think philosophically I’m committed to the view that it is possible, and like any major advance we need to apply the precautionary principle and ask ourselves what the consequences might be. 

It could be said that the sum not of human happiness but of sentient-being happiness might be improved; they might do a better job of running the world than we are, certainly than we are doing at present, and so perhaps it might not be a bad thing if we went extinct.

And our civilization, the memory of Shakespeare and Beethoven and Michelangelo, would persist in silicon rather than in brains and in our form of life. And one could foresee a future time when silicon beings look back on a dawn age when the earth was peopled by soft, squishy, watery organic beings, and who knows, that might be better. But we’re really in science fiction territory now.

https://speakersconnection.com/ – Will Artificial Intelligence destroy the human race? Keynote Speaker Inma Martinez separates the fact from fiction at TEDx Ghent.

ABOUT KEYNOTE SPEAKER INMA MARTINEZ

One of the original “Children of the Internet,” Inma Martinez has been at the forefront of the digital and mobile revolution since the early 2000s. Fortune and TIME have described her as one of Europe’s top talents in social engagement through technology, Red Herring ranked her among the “top 40 women in technology”, and FastCompany labelled her a “Firestarter.” Inma was listed at #1 among “The Top 10 Women Changing The Landscape Of Data In 2018” by Enterprise Management 360 and as one of the top “50 A.I. Influencers To Follow On Twitter” by Cognilytica, an A.I. and Cognitive Sciences Research agency.

Inma is one of Europe’s most sought-after advisors on technological trends, product innovation strategy, and thriving in an environment driven by empowered consumers. An inspirational keynote speaker, she delivers talks on the effects of A.I. and automation, man-machine collaboration, and the accelerated digitalisation of life, work and play. An industry-recognised artificial intelligence and digital visionary who pioneered 1:1 personalisation in the early days of the mobile Internet, Inma has been credited as part of the original development teams behind the Wireless Application Protocol (WAP), mobile music and video streaming, wearable technologies, “widgets” – the precursors to mobile apps – and the development of the “connected car” and smart cities.

Both governments and private corporations seek her advice on how to address digital challenges. Inma is currently an advisor to the board of the All-Party Parliamentary Group on A.I. at the House of Lords, and has provided evidence to the EU Commission on A.I. and the misuse of citizen data in light of new GDPR policies. In parallel, she continues to commit her time to education at Imperial College London, where she is a guest lecturer on the MSc Management programme, the MSc in Economics and the MSc in Innovation, Entrepreneurship and Strategy, and to the mentorship of innovative technologies at Deep Science Ventures, a venture-focused science institute where audacious entrepreneurial scientists explore solutions to the world’s challenges.

To hire Inma Martinez at your next conference or event, contact us:
https://speakersconnection.com/contact/

ABOUT SPEAKERS CONNECTION

At Speakers Connection we connect people to people. People who want to experience new things, share ideas, and value the importance of the human connection.

We manage a curated group of speakers. Our goal is to create opportunities that allow our speakers to connect with event professionals and showcase their talents to audiences worldwide. We provide strategies, insights and direction that allow the speaker to develop new marketing channels.

Our longevity in the industry allows us to cultivate strong relationships with corporations, associations, event management companies, global special interest groups, production companies, speakers’ bureaus, educational institutions, and philanthropic organizations resulting in exceptional opportunities for our speakers.

To learn more about Speakers Connection please visit:
https://speakersconnection.com/

Keynote by Suchi Saria

Subscribe to O’Reilly on YouTube: http://goo.gl/n3QSYi

Follow O’Reilly on
Twitter: http://twitter.com/oreillymedia
Facebook: http://facebook.com/OReilly
Google: http://plus.google.com/+oreillymedia

Longtime Humanist Community member and Board member Marc Perkel will discuss the following beliefs and questions: If humanity ever invents artificial intelligence that is smarter than we are, it will be the last thing we’ll ever invent. That’s because the AI will do the inventing far faster than we ever could. This raises a lot of philosophical questions. When will AI be smarter than us? Sooner than you think!

What will it be like to not be the smartest species on the planet? Will the robots kill us off? What values will we teach AI to get it started? Why should humanity continue to exist once we create a superior species? Will we be able to pull the plug on it – or will it be able to pull the plug on us? Is Humanism limited to just humans? Do we need Religion for Robots? Shouldn’t we answer these questions BEFORE we create the AI?

From the coining of the term “artificial intelligence” in 1956 to AlphaGo’s famous Go victories in 2016, the anxious question “Will artificial intelligence replace humans?” has been raised from all sides. With the development of machine learning and deep learning, AI can rapidly and precisely learn models from data, and whether the task is simple image recognition or complex medical imaging, it can now produce interpretations more accurate than those of human experts.
As an artificial intelligence researcher, Jane Yung-jen Hsu argues that “AI is to empower people.” Artificial intelligence should be an aid to humans: a technology that cuts down time spent on highly repetitive work, reduces error rates, and helps people solve complex problems.
We do not need to fear machines replacing humans; rather, we should learn to work with machines and become people who know how to put artificial intelligence to good use.
—–
Will machines with artificial intelligence replace humans? This question has been the topic of discussion ever since AlphaGo defeated one of the world’s best Go players in 2016. AI researcher Jane Hsu argues that machine intelligence is not something to be feared; instead, we should embrace life with artificial intelligence, as it is designed to empower people. Here, she gives a clear, easy-to-understand view of how machines that process information on a very sophisticated level will benefit humans in the near future. Jane Hsu is a professor in the Department of Computer Science and Information Engineering at National Taiwan University. She has served as president of the Taiwanese Association for Artificial Intelligence and as chair of NTU’s CSIE department; her research and teaching focus on intelligent multi-agent systems, data mining and analysis, and perceptual computing.

She currently serves as director of the Intel-NTU Center, helping to promote international research collaboration among NTU, Intel, and Taiwan’s National Science Council. She also teaches AI-related courses in NTU’s Department of Computer Science and Information Engineering, and offers a course on smart-aging design (technology and welfare for the elderly) at NTU’s school of innovation and design, aiming to combine information technology with innovative thinking and to lead students to think creatively and put their ideas into practice.

—–

Jane Hsu is currently a Professor of the Department of Computer Science and Information Engineering at National Taiwan University, where she served as the Department Chair from 2011 to 2014. As the Director of the NTU IoX Center, established in 2011 as the Intel-NTU Connected Context Computing Center, Prof. Hsu is leading the global research collaboration on Augmented Collective Beings and Internet of Things. With more than 30 years of experience in AI, her research interests include multi-agent planning/learning, crowd-sourcing, knowledge mining, commonsense computing, and context-aware smart IoT. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

While artificial intelligence has been around for decades, we are witnessing a new generation of solutions. Augmented intelligence represents the most significant commercialization of AI/ML for business. Augmented intelligence is an approach that uses tools from artificial intelligence to perform well-defined tasks that support business decision-making.

In this webinar, Judith Hurwitz, president of Hurwitz & Associates and co-author of the forthcoming book Augmented Intelligence, will discuss the importance of augmented intelligence as a technique for enabling collaboration between humans and machines.

The webinar will explain:

– What augmented intelligence is and how it can benefit business decision makers
– What business leaders need to understand about AI and machine learning to avoid traps
– What the risks are in terms of ethics, compliance, and governance

Speaker:
Judith Hurwitz, President & CEO, Hurwitz & Associates, a research and consulting firm focused on the business value of emerging technologies
Judith S. Hurwitz is President and CEO of Hurwitz & Associates, Inc., a firm focused on emerging technology in enterprise computing, including artificial intelligence, machine learning, big data, cloud computing, and security. She is a technology strategist, thought leader, speaker, and author. A pioneer in anticipating technology innovation and adoption, she has served as a trusted advisor to many industry leaders over the years. She is the co-author of 10 books, including Augmented Intelligence (Taylor & Francis Group, 2019), Cognitive Computing and Big Data Analytics (Wiley, 2015), and Cloud Computing for Dummies (Wiley, 2020). Judith holds BS and MS degrees from Boston University. She is a board member of Boston University’s Alumni Council and the College of Arts & Sciences Dean’s Advisory Board.

Princeton University researchers Arvind Narayanan, Aylin Caliskan and Joanna Bryson discuss their research on how human biases seep into artificial intelligence.

The singularity has been upon us since the 19th Century (Feat. Brendan Bradley & The Defective Geeks).
Help us continue the steampunk fun at http://ProgressTheSeries2.com

Artificial Intelligence vs Humans – Jim disagrees with Stephen Hawking about the role artificial intelligence will play in our lives. Jim is an artificial intelligence researcher at Rensselaer Polytechnic Institute.

IBM’s Watson supercomputer destroys all humans in Jeopardy.

Although this excerpt is from pt. 3 of a 3 part interview, I highly recommend that you watch all 3 parts, in REVERSE order (3,2,1), by which I believe you’ll make .

Ever imagined a world of robots and humans coexisting? Check out this video and find out when it might become a reality.

**REMEMBER TO SUBSCRIBE FOR MUCH MORE TO COME**
Script –
Humans are making computers stronger, faster and more powerful every day. Robotic technologies continue to advance at an ever-increasing rate, as scientists push to create a robot that rivals the physical ability, intelligence and emotion of human beings. But will this technology ever be as good as us?

In order for artificial intelligence, or AI, to be possible, computing power must meet or exceed the memory and processing power of the human brain. The human brain is estimated to have the processing power of 10 quadrillion calculations per second, and as of 2015 the Tianhe-2 supercomputer in China could perform over 33 quadrillion calculations per second. However, although this is very impressive, the human brain is far more complex in structure and function than the supercomputer. The human brain has a complex network of billions of neurons that receive and send out neurotransmitters, signals, messages and instructions to the entire body every second of our lives. The Tianhe-2 cost about $390 million to build; at its peak it draws more than 17.6 megawatts of power, and the computer complex covers about 2,300 square feet. This system is not only massive, but expensive and extremely power-hungry. The development of a computer that will emulate a human mind is more than likely decades away.

But a computer with lots of memory and processing power isn’t enough; to be intelligent, an AI needs to behave intelligently. Major efforts are underway, for example, to map the human brain. This could potentially result in the discovery of how consciousness arises, which could then lead to the development of synthetic consciousness: giving a computer a state of awareness, the ability to experience or to feel, or a sense of individual identity.

As computers become smarter, scientists are also working on building them bodies. An AI would need some way to affect the world around it. Using advanced robotics could mean enhanced precision, balance, maneuverability, agility, speed, strength, durability and much more. Most robots designed today, and most of the robots we see depicted in the movies, are shaped like humans. This is because machines shaped like people are best suited to navigate a world built by mankind. A robot that looks like a human could theoretically climb ladders and stairs, step over obstacles in its path, even drive a car, for example. However, some scientists believe that if you started to design a being from scratch, you could make a much better version of ourselves. There are no real advantages to building robots that look like humans, but one thing that seems to drive the creation of humanoid robots is that people care more about something when it looks similar to them.

Scientists have so far brought us ASIMO, the Honda-developed, space-suit-looking robot. Probably the most famous bot, it is a cheerful and endearing little thing with an innocent look that can walk, run and perform basic tasks. Other robots have been made too, like the pole-dancing double act of lady bots that can be bought for $39,500, and the slightly scary suit-testing robot Petman: an anthropomorphic robot designed for testing chemical protection clothing, with realistic human movements and simulated human physiology such as controlled temperature, humidity and sweating. Interactive humanoid robots also exist, such as Actroid, which can function autonomously, talking and gesturing with people. It knows sign language gestures, such as “point” or “swing”, and automatically adapts to the position of the speaker. However, current robots designed over the last few years to match human capability still need a lot of work, but they could become a day-to-day reality sooner than we think. Some are predicting that robots of all types could fully replace humans by 2045.

So, at some point in the future, we are all more than likely going to be existing in a world where humans and robots live side by side. But will that future be peaceful, or are we setting ourselves up for some sort of Terminator-style robopocalypse?
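For a rough sense of scale, here is a back-of-envelope comparison using the figures quoted above; the roughly 20-watt estimate for the brain’s power draw is an added assumption, not a number from the script:

# Hypothetical back-of-envelope comparison; figures as quoted in the script above,
# except brain_power_w, which is an assumed typical value (~20 W).
brain_ops_per_s   = 10e15    # ~10 quadrillion calculations per second
tianhe2_ops_per_s = 33e15    # ~33 quadrillion calculations per second (2015)
tianhe2_power_w   = 17.6e6   # ~17.6 megawatts at peak
brain_power_w     = 20.0     # assumption, not from the script

speed_ratio = tianhe2_ops_per_s / brain_ops_per_s
efficiency_ratio = (brain_ops_per_s / brain_power_w) / (tianhe2_ops_per_s / tianhe2_power_w)

print(f"Tianhe-2: roughly {speed_ratio:.1f}x the brain's raw calculations per second")
print(f"Brain: roughly {efficiency_ratio:,.0f}x more calculations per joule")

On these rough numbers the supercomputer wins on raw speed but loses badly on energy efficiency, which is the sense in which the script calls it “extremely power-hungry.”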

Attributes –
Asimo Program Stream-Centar za promociju nauke
Binary Code – Videvo
Neuron-Oliver Konow
Petman Tests Camo – Boston Dynamics
Pole dancing robot – Caroline Hyde
Speaking Robot – THE AGE OF ROBOTS – Massimo Brega
Music –
Sci Fi Music – http://www.bensound.com
Outro – Hurry Up – Kevin MacLeod, Incompetech
Licensed under Creative Commons: By Attribution 3.0
http://creativecommons.org/licenses/by/3.0/

The founder of Tesla Motors and SpaceX, Elon Musk, described artificial intelligence as our ‘biggest existential threat’ while speaking at MIT.

FB :- https://bit.ly/2J97Bcg
INSTA :- https://bit.ly/2IQnCVF
TWITTER :- @factomania2
If you have any problem with this video or any suggestions, please give us feedback.
In this video I used many clips and images that I do not have rights to. If anyone has a problem with that, you can email me and I will remove the video.
For copyright issues & inquiries, email: factomania.info@gmail.com
Please give us your support: subscribe and share the channel with your friends and relatives. Support us by liking this video, sharing it, giving us your suggestions, and subscribing to the channel.
Thank you

Artificial Intelligence vs Humans – Jim disagrees with Stephen Hawking about the role Artificial Intelligence will play in our lives.

Jim is an artificial intelligence researcher at Rensselaer Polytechnic Institute, and one of the originators of the Semantic Web.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

Taking chatbots to the next level, with emotion recognition and gesture control. Dr Michel Valstar on Virtual Humans.

EXTRA BITS: https://youtu.be/gRE30g7ACWs

https://www.facebook.com/computerphile
https://twitter.com/computer_phile

This video was filmed and edited by Sean Riley.

Computer Science at the University of Nottingham: https://bit.ly/nottscomputer

Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com

‘In the end’ is a long time period; it’s a very long time period. Who knows, by then? And you know, these guys who claim that we’ll see the singularity by 2030… Dude, I don’t believe that at all, by any means, shape or form. Will we see smart machines being able to do smarter things with data? Sure. I think there are all kinds of great opportunities there. But in terms of over the next 100 years, are machines going to be smarter than humans just because some IBM computer can beat humans at – I don’t even know – Jeopardy? Nah. I don’t find that that’s interesting, actually. I think doing smart things with data, doing a lot of analysis and so on… But you know, these are very limited sort of things. Even if you take something that is starting to get people excited using Siri on your iPhone – it feels pretty amazing first. When I tell Siri to book a table at Harvest on Friday at noon, when I can do that, I go like “Wow, that’s pretty cool.” But computers outsmarting humans? No, not for a long time.

3 Meaningful Minutes: Episode #1. Will Robots Ever Be As Smart As Humans? This video explores some things to consider on the subject.

———
DOWNLOAD FREE MUSIC from ShatterRed:
http://shatterredmusic.com/free-music/
———
GET AN EMAIL ALERT WHEN THERE’S A NEW #3MM VIDEO.
(link coming soon)
———

**********
SUBSCRIBE to “3 Meaningful Minutes”! 🙂
https://www.youtube.com/channel/UCsptmqqoe7dBI3n3rlLWIZg?sub_confirmation=1
**********

Get “Let In The Love,” the song featured on today’s video, on iTunes:
https://itunes.apple.com/us/album/scarlet-rain/id569753954

———
Previous 3MM Episode:
(There is none, this is the first episode!)
———
NEXT 3MM Episode:
https://www.youtube.com/watch?v=Ll6wLvpODvw
———

———
Main YouTube Channel:
http://youtube.com/ShatterRedmusic
———
LIKE us on FACEBOOK!
http://facebook.com/ShatterRed
———
FOLLOW us on TWITTER!
http://twitter.com/shatterredmusic
———
FOLLOW us on INSTAGRAM:
http://instagram.com/shatterredmusic
—————-
Our Website:
http://ShatterRed.com
——————
Our Christian Music Industry Blog:
http://christianmusicindustry.com
———–

Here are some articles for further reading about today’s topic:

Why robots will not be smarter than humans by 2029

http://science.howstuffworks.com/robot-computer-conscious2.htm

More about this episode:
It’s a common fear that finds its way into a lot of science fiction: will robots and computers become smarter than humans and overpower us? We have devices like Siri and the Amazon Echo which can speak and interact with us. We have films and TV shows like I, Robot; Person of Interest; Revolution; and Terminator that show us what can happen when the cyber world becomes too powerful. Is it inevitable?

There may actually be hope. Consciousness is a very complex thing.

Debate in the comments!

If you’re interested in licensing this or any other Big Think clip for commercial or private use, contact our licensing partner Executive Interviews: https://www.executiveinterviews.biz/rightsholders/bigthink/

If you’re interested in licensing this or any other Big Think clip for commercial or private use, contact our licensing partner Executive Interviews: https://www.executiveinterviews.biz/contact-us/americas/

Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be outsmarted first by fairly dumb AI, says Eric Weinstein. Humans rarely create products with a reproductive system—you never have to worry about waking up one morning to see that your car has spawned a new car on the driveway (and if it did: cha-ching!), but artificial intelligence has the capability to respond to selective pressures, to self-replicate and spawn daughter programs that we may not easily be able to terminate. Furthermore, there are examples in nature of organisms without brains parasitizing more complex and intelligent organisms, like the mirror orchid. Rather than spend its energy producing costly nectar as a lure, it merely fools the bee into mating with its lower petal through pattern imitation: this orchid hijacks the bee’s brain to meet its own agenda. Weinstein believes all the elements necessary for AI programs to parasitize humans and have us serve their needs already exist, and although it may be a “crazy-sounding future problem which no humans have ever encountered,” Weinstein thinks it would be wise to devote energy to these possibilities that are not as often in the limelight.

Read more at BigThink.com: http://bigthink.com/videos/eric-weinstein-how-even-dumb-ai-could-outsmart-humans

Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink

Transcript: There are a bunch of questions next to or adjacent to general artificial intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mindshare. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway I don’t worry that if the moon is in the right place in the sky and the mood is just right that there’ll be a third car at a later point, because in general I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.

So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic layer, but there is a command in many computer languages called Spawn. And Spawn can effectively create daughter programs from a running program.
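(As an illustrative aside, not part of the talk: a minimal Python sketch of that primitive, in which a running program launches a single “daughter” process. It only demonstrates spawning; nothing here replicates itself further or evolves.)

import multiprocessing
import os

def daughter():
    # The child process: in the transcript's framing, a "daughter program".
    print(f"daughter running with pid {os.getpid()}, spawned by {os.getppid()}")

if __name__ == "__main__":
    child = multiprocessing.Process(target=daughter)
    child.start()   # the running program creates a new running program
    child.join()    # wait for the daughter to finish
    print(f"parent {os.getpid()} done")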

Now as soon as you have the ability to reproduce you have the possibility that systems of selective pressures can act because the abstraction of life will be just as easily handled whether it’s based in our nucleotides, in our A, C, Ts and Gs, or whether it’s based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent.

Further if we look to natural selection and sexual selection in the biological world we find some very strange systems, plants or animals with no mature brain to speak of effectively outsmart species which do have a brain by hijacking the victim species’ brain to serve the non-thinking species. So, for example, I’m very partial to the mirror orchid which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility the flower does not need to give up costly and energetic nectar in order to attract the pollinator. And so if the plant can fool the pollinator to attempt to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.

AI pioneer and co-founder and chief scientist of Artificial Intelligence startup NNAISENSE Jurgen Schmidhuber recently stated that while machines will eventually be smarter than humans, there is no reason why the emerging technology should be feared.

Jurgen Schmidhuber has been involved in the AI field since the 1970s. In 1997, Schmidhuber helped publish a study on Long Short-Term Memory, one of the concepts that ultimately became the roots of AI memory functions. Speaking during the Global Machine Intelligence Summit (GMIS) last year, the AI pioneer stated that he had big dreams for the technology since he first began studying the field. According to Schmidhuber, he wanted to build machines that can teach themselves.

The AI pioneer carried over his vision for advanced AI well into the present day. In a recent statement to CNBC News, Schmidhuber noted that eventually, machines will likely surpass humans in terms of intelligence.

“I’ve been working on AI for several decades, since the eighties basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire,” he said.

Unlike other tech leaders such as Elon Musk and the late Stephen Hawking, Schmidhuber has adopted a more optimistic outlook on AI. Musk, for one, has frequently mentioned the dangers of hyper-intelligent computer systems, to the point of stating that AI could be more dangerous than nuclear warheads.

Schmidhuber, however, disagrees, stating that once AI surpasses humans’ intelligence, machines would likely just lose interest. The AI pioneer added that he and Musk had already spoken about the matter.

“I’ve talked to him for hours, and I’ve tried to allay his fears on that, pointing out that even once AIs are smarter than we are, at some point they are just going to lose interest in humans,” he said.

Schmidhuber believes that there are still concerns about the emergence of hyper-advanced computer systems, however. According to the AI pioneer, the real dangers of artificial intelligence lie not with machines, but with people themselves.

“If there are any concerns, it’s that humans should be worried about beings that are similar to yourself and share goals. Cooperation could result, or it could go to an extreme form of competition, which would be war,” he said.

Nevertheless, considering the pace and direction of AI research today, Schmidhuber remains optimistic. While the pioneer admitted that a portion of AI research is dedicated to making intelligent weapons, the vast majority of studies in the artificial intelligence field are geared towards helping people.

“About 95 percent of all AI research is about enhancing the human life by making humans live longer, healthier and happier,” he said.

In a lot of ways, Schmidhuber’s statements about human-friendly AI research and AI-based weapons ring true. While the Pentagon and countries like South Korea are exploring the concept of weaponized AI, several initiatives, i

Breaking the Wall between Human and Artificial Intelligence:

From the stuff of dystopian science fiction movies to everyday companions – with the rise of ubiquitous mobile computing power, artificial intelligence (AI) is already permeating modern life. As of 2017, deep learning algorithms power our phones’ voice-assistants, recommend the latest movies, and optimise our bike ride to work. AI has been heralded as the new electricity, soon to be found in almost every piece of technology we produce. To the man who has been described as “the father of modern AI”, this is merely the beginning. Although the artificial neural networks of Jürgen Schmidhuber’s team are now in 3 billion smartphones, he considers our current state of AI technology to be in the early stages of infancy. Whereas today’s seemingly smart algorithms are geared towards singular purposes – playing chess, matching love-hungry 30-somethings, or finding appropriate music for cooking – Jürgen’s goal has always been to create a general-purpose AI within his lifetime. His entire career has been dedicated to developing a software that would outsmart him, and though he readily admits that, as of now, the best general-purpose AI is only comparable to the intelligence of an infant animal, he is convinced that it will not be long before we develop systems that are far superior to us. At Falling Walls, Jürgen lays out the state of the art in his field of research and shares his vision of a future in which humans are no longer the crown of creation.

“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.” Gaurav Sangtani, a social worker, talks about how technology and artificial intelligence are changing the world, the fears around how they can impact job markets and society at large, and how we can adapt to and move ahead with this change. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

We HIGHLY recommend watching this video with good (beyerdynamic DT 990 PRO Over-Ear Studio) headphones, click here: https://amzn.to/2GhkjFJ

Save up to 80% off on electronics, computers, headphones and MORE by browsing Amazon’s daily deals! Click here: https://amzn.to/2IsLkr0

Equipment:
Camera (Canon EOS Rebel T6): https://amzn.to/2Id1sxJ
Speakers (Bose SoundLink Color Bluetooth speaker II): https://amzn.to/2D7BYxo
Headphones (beyerdynamic DT 990 PRO Over-Ear Studio Headphones): https://amzn.to/2GhkjFJ
Editing Software (Sony Vegas 15): https://amzn.to/2DlmV3x
Monitor (ASUS VG248QE 24″ Full HD 1920×1080 144Hz): https://amzn.to/2IeA02v
Mouse (Logitech G502): https://amzn.to/2D7C73U

* The above are affiliate links. This channel participates in the Amazon Affiliate program.

Please consider leaving a like and subscribing if you enjoyed the content. It helps tremendously, thank you!

Content by: VPRO

Source: https://openbeelden.nl/media/1000986/Yoshua_Bengio_on_intelligent_machines.en

This content is licensed under Creative Commons. Please visit: https://openbeelden.nl/media/1000986/Yoshua_Bengio_on_intelligent_machines.en to see licensing information and check https://creativecommons.org/licenses/ for more information about the respective license(s).

Publication Date: 1 January 1960

Description: Canadian computer scientist Yoshua Bengio on artificial intelligence and how we can create thinking and learning machines through algorithms.

Contributor Information: Yoshua Bengio

On Wednesday December 6th, two teams of UTS academics and industry partners gathered at UTS for the hotly anticipated “Humans, Data, AI & Ethics – Great Debate”. The rhetorical battle raised the provocative proposition that:

“Humans have blown it: it’s time to turn the planet over to the machines”

The debate was preceded by our daytime Conversation, which featured engaging panel discussions and Lightning Talks from UTS academics and partners in government and industry.

The debate took place in front of a large audience of colleagues and members of the public on the UTS Broadway campus. The Affirmative team (The Machines) argued that a productive relationship between humans and machines will help us to build a fairer, more efficient and more ecologically sustainable global society. Numerous examples of humanity’s gross dysfunction in governance and management were raised, from human-induced climate change to widening inequality and the recent election of unpredictable populist leaders. The team argued that finely (and ethically) tuned machines will help humans to solve these immense social and environmental challenges and maintain standards of equality, fairness and sustainability.

The Negative team (The Humans) cautioned against the rapid adoption of these hypothetical “ethical machines”, raising concerns about existing human prejudices and biases being built into AI. The team envisaged a dystopian world in which machines deny the possibility of human creativity, error or “happy accidents”, which have led to so many important moments of discovery throughout history. According to the Negative, there are also numerous social services which as yet cannot be performed by AI. Healthcare provision, for example, strongly depends on complex emotional intelligence, human tact and an ability to empathise and build rapport.

Ultimately, the Negative were adjudicated as the winner of the debate, to the relief of humanists and ethicists in attendance. The theatrical and good-humoured event was a rousing success, giving leading thinkers in the data science field an opportunity to flesh out challenging ideas surrounding data, AI, society and ethics in a responsive public forum.

Humans, Data, AI & Ethics – The Great Debate

The human vs. machine narrative is broken. Narcissistic advances in machine learning clash with what cognitive neuroscientists are revealing to be newly found intrepid capabilities of our brains. Hear how humanity will prevail in the times of exponential digitalisation and how we shall become proto-humans able to solve the abstract problems of the future with neoteny approaches. This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx