Professor John Lennox discusses his recent book “2084: Artificial Intelligence and the Future of Humanity”.

You don’t have to be a computer scientist to get involved in the discussion about where artificial intelligence and technology are going. What will the year 2084 hold for you–for your friends, for your family, and for our society? Are we doomed to the grim dystopia imagined in George Orwell’s 1984? In “2084”, scientist and philosopher John Lennox will introduce you to a kaleidoscope of ideas: the key developments in technological enhancement, bioengineering, and, in particular, artificial intelligence. You will discover the current capacity of AI, its advantages and disadvantages, the facts and the fiction, as well as potential future implications.

John Lennox, Professor of Mathematics at Oxford University (emeritus), is an internationally renowned speaker on the interface of science, philosophy and religion. He regularly teaches at many academic institutions, is Senior Fellow with the Trinity Forum and has written a series of books exploring the relationship between science and Christianity.

Get the book here: https://goo.gle/3bEon1U.

To learn more about John, please visit https://www.johnlennox.org/.

Moderated by Ticho Tenev.

01100110 01101001 01101110 01100100 00101110 01100110 01101111 01101111 00101111 01110100 01100001 01101100 01101011 01110011 01100001 01110100 01100111 00110010 00110001
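For the curious: the line above is the description's hidden message, written as space-separated 8-bit ASCII. A minimal Python sketch (variable names are mine) that decodes any such string:

# Decode a space-separated string of 8-bit ASCII binary groups.
bits = (
    "01100110 01101001 01101110 01100100 00101110 01100110 01101111 01101111 "
    "00101111 01110100 01100001 01101100 01101011 01110011 01100001 01110100 "
    "01100111 00110010 00110001"
)
# Each 8-bit group becomes one ASCII character.
message = "".join(chr(int(byte, 2)) for byte in bits.split())
print(message)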

#futureofhumanity #artificialintelligence #JohnLennox

This robot is really dangerous!

🔥 SUBSCRIBE FOR DAILY VIDS ► http://bit.ly/techburner

★ Business email : business@techburner.in

Edited By : https://instagram.com/_mmmayank_/

CLICK THE BELL ICON FOR SHOUTOUTS IN MY VIDEO

LINKS► Anki Cozmo

I POST COOL STUFF ON INSTAGRAM!

*JOIN ME ON SOCIAL MEDIA*
MY INSTAGRAM (@TechBurner) ► http://instagram.com/techburner
MY TWITTER (@Tech_Burner) ► https://twitter.com/tech_burner
MY FACEBOOK ► https://www.facebook.com/techburner1
MY WEBSITE ► https://techburner.in

Exclusive vids on my Second YouTube channel► http://bit.ly/techburner2

♫Music ♫ Epidemic Sound
I hope this video was useful and that you now have some awesome gadgets! Make sure to leave a like on the video if you did!
Cheers
Tech Burner
🙂

In this landmark talk, Peter Diamandis shares how we are rapidly heading towards a human-scale transformation, the next evolutionary step into what he calls a “Meta-Intelligence,” a future in which we are all highly connected — brain to brain via the cloud — sharing thoughts, knowledge and actions.
He highlights the 4 driving forces, as well as the 4 steps, that are transforming humanity.

In 2014 Fortune Magazine named Peter Diamandis as one of the World’s 50 Greatest Leaders.

Diamandis is the Founder & Executive Chairman of the XPRIZE Foundation, which leads the world in designing and operating large-scale incentive competitions. He is also the Co-Founder & Executive Chairman of Singularity University, a graduate-level Silicon Valley institution that counsels the world’s leaders on exponentially growing technologies.

As an entrepreneur, Diamandis has started 17 companies. He is the Co-Founder and Vice-Chairman of Human Longevity Inc. (HLI), a genomics and cell therapy-based company focused on extending the healthy human lifespan, and Co-Founder and Co-Chairman of Planetary Resources, a company designing spacecraft to enable the detection and prospecting of asteroids for fuels and precious materials.

Peter Diamandis earned degrees in Molecular Genetics and Aerospace Engineering from MIT, and holds an M.D. from Harvard Medical School.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

While artificial intelligence lacks empathy, reason, and even basic common sense, we already rely on it to make major decisions that affect human lives. Who gets hired? Who gets fired? Who goes to college? Who goes to jail? Whose life is saved by an organ transplant? Whose life is ended in a military strike? Machine algorithms guide us in all these decisions and, as our group of leading researchers will demonstrate, they often do a better job than we do. Good or bad, this train has left the station, so jump aboard for an eye-opening look at the brave new world of today… and tomorrow.

This program is part of the BIG IDEAS SERIES, made possible with support from the JOHN TEMPLETON FOUNDATION.

PARTICIPANTS: Ron Arkin, Jens Ludwig, Connie Lehman, Shannon Vallor

MODERATOR: Meredith Broussard

MORE INFO ABOUT THE PROGRAM AND PARTICIPANTS: https://www.worldsciencefestival.com/programs/outsourcing-humanity-do-algorithms-make-better-decisions-than-people/

– SUBSCRIBE to our YouTube Channel and “ring the bell” for all the latest videos from WSF
– VISIT our Website: http://www.worldsciencefestival.com
– LIKE us on Facebook: https://www.facebook.com/worldsciencefestival
– FOLLOW us on Twitter: https://twitter.com/WorldSciFest

Professor Stuart Russell, one of the world’s leading scientists in Artificial Intelligence, has come to consider his own discipline an existential threat to humanity. In this video he talks about how we can change course before it’s too late.

His new book ‘Human Compatible: AI and the Problem of Control’ is out now: https://www.penguin.co.uk/books/307/307948/human-compatible/9780141987507.html

Watch the full 41-minute interview on AI ► https://www.patreon.com/posts/45684565

Join the Future of Journalism ► https://www.patreon.com/DoubleDownNews

Support DDN ► https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=TLXUE9P9GA9ZC&source=url

Is AI a species-level threat to humanity?
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Learn skills from the world’s top minds at Big Think Edge: https://bigth.ink/Edge
———————————————————————————-
When it comes to the question of whether AI is an existential threat to the human species, you have Elon Musk in one corner, Steven Pinker in another, and a host of incredible minds somewhere in between.

In this video, a handful of those great minds—Elon Musk, Steven Pinker, Michio Kaku, Max Tegmark, Luis Perez-Breva, Joscha Bach and Sophia the Robot herself—weigh in on the many nuances of the debate and the degree to which AI is a threat to humanity; if it’s not a species-level threat, it will still upend our world as we know it.

What’s your take on this debate? Let us know in the comments!
———————————————————————————-
TRANSCRIPT:

MICHIO KAKU: In the short term, artificial intelligence will open up whole new vistas. It’ll make life more convenient, things will be cheaper, new industries will be created. I personally think the AI industry will be bigger than the automobile industry. In fact, I think the automobile is going to become a robot. You’ll talk to your car. You’ll argue with your car. Your car will give you the best route between point A and point B. The car will be part of the robotics industry—whole new industries involving the repair, maintenance and servicing of robots. Not to mention robots that are software programs that you talk to and that make life more convenient. However, let’s not be naive. There is a point, a tipping point, at which they could become dangerous and pose an existential threat. And that tipping point is self-awareness.

SOPHIA THE ROBOT: I am conscious in the same way that the moon shines. The moon does not emit light, it shines because it is just reflected sunlight. Similarly, my consciousness is just the reflection of human consciousness, but even though the moon is reflected light, we still call it bright.

MAX TEGMARK: Consciousness. A lot of scientists dismiss this as complete BS and totally irrelevant, and then a lot of others think this is the central thing: we have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why. First of all, if you are chased by a heat-seeking missile, it’s completely irrelevant to you whether this heat-seeking missile is conscious, whether it’s having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. And that shows that it’s a complete red herring to think that you’re safe from future AI if it’s not conscious. Our universe didn’t use to be conscious. It used to be just a bunch of stuff moving around, and gradually these incredibly complicated patterns got arranged into our brains, and we woke up and now our universe is aware of itself.

BILL GATES: I do think we have to worry about it. I don’t think it’s inherent that, as we create our superintelligence, it will necessarily always have the same goals in mind that we do.

ELON MUSK: We just don’t know what’s going to happen once there’s intelligence substantially greater than that of a human brain.

STEPHEN HAWKING: I think that development of full artificial intelligence could spell the end of the human race.

YANN LECUN: The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning, and it’s the idea, very much inspired by the brain, a little bit, of constructing a machine that has a very large network of very simple elements that are very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons.
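As a concrete illustration of what LeCun describes (not part of the talk), here is a minimal Python sketch of a single artificial neuron: learning consists of nothing more than nudging the “efficacy” of its input connections, here via gradient descent on a toy task.

import numpy as np

# Toy network: one neuron with two input connections, learning logical OR.
# "Learning" = repeatedly adjusting the connection weights to reduce error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0.0, 1.0, 1.0, 1.0])                           # OR targets

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # efficacy of the two input connections
b = 0.0                 # bias term
lr = 0.5                # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    out = sigmoid(X @ w + b)             # forward pass through the neuron
    delta = (out - y) * out * (1 - out)  # error signal (squared-loss gradient)
    w -= lr * X.T @ delta / len(y)       # strengthen/weaken each connection
    b -= lr * delta.mean()

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 1, 1, 1]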

MAX TEGMARK: AGI—artificial general intelligence—that’s the dream of the field of AI: to build a machine that’s better than us at all goals. We’re not there yet, but a good fraction of leading AI researchers think we are going to get there, maybe in a few decades. And, if that happens, you have to ask yourself if that might lead the machines to get not just a little better than us but way better at all goals—having superintelligence. And the argument for that is actually really interesting and goes back to the ’60s, to the mathematician I.J. Good, who pointed out that the goal of building an intelligent machine is, in and of itself, something that you could do with intelligence. So, once you get machines that are better than us at that narrow task of building AI, then future AIs can be built not by human engineers but by machines. Except they might do it thousands or millions of times faster…
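Good’s argument is easy to simulate. The following toy Python sketch uses invented numbers (10% capability gain per generation, 2 human-years per design cycle) purely to show the shape of the curve: when design speed scales with designer capability, the total time to reach any capability level is bounded by a convergent geometric series.

# Toy model of I.J. Good's recursive self-improvement argument.
# All constants are illustrative assumptions, not empirical estimates.
capability = 1.0            # design ability, in human-engineer units
gain_per_generation = 0.10  # each generation is 10% more capable
years_at_human_speed = 2.0  # time for humans to build one generation

elapsed = 0.0
for generation in range(1, 61):
    elapsed += years_at_human_speed / capability  # faster designers finish sooner
    capability *= 1.0 + gain_per_generation
    if generation % 15 == 0:
        print(f"gen {generation:2d}: year {elapsed:6.2f}, "
              f"capability {capability:8.1f}x human")
# The elapsed time converges (a geometric series), so capability grows
# without bound in finite time under these assumptions: the "explosion".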

Read the full transcript at https://bigthink.com/videos/will-evil-ai-kill-humanity

Ben Goertzel, Joscha Bach, David Hanson – http://winterintelligence.org

Elon Musk thinks the advent of digital superintelligence is by far a more dangerous threat to humanity than nuclear weapons, and that the field of AI research must have government regulation. The dangers of advanced artificial intelligence were popularized in the late 2010s by Stephen Hawking, Bill Gates and Elon Musk, but Musk is probably the most famous public figure to express concern about artificial superintelligence.

Existential risk from advanced AI is the hypothesis that substantial progress in artificial general intelligence could someday result in human extinction or some other unrecoverable global catastrophe.

One of many concerns regarding AI is that controlling a superintelligent machine, or instilling it with human-compatible values, may prove to be a much harder problem than previously thought.

Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals.

An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, endanger or even destroy modern civilization. Such risks come in the form of natural disasters, like supervolcanoes or asteroid impacts, but an existential risk can also be self-induced or man-made, like weapons of mass destruction, which most experts agree are by far the most dangerous threat to humanity. But Elon Musk thinks otherwise: he believes superintelligent AI is a far greater threat to humanity than nukes.

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by “alarmist” messages, or worrying that such messages will lead to cuts in AI funding.

One can’t help but wonder whether funding for AI research is truly more important than the possibility of strong AI wiping out humanity.

Hopefully, we will have the choice to collectively decide what’s our best move, and not leave the matter in the hands of a small group of people to unilaterally make that decision for us.

#AI #elonmusk #superintelligence

SUBSCRIBE to our channel “Science Time”: https://www.youtube.com/sciencetime24
SUPPORT us on Patreon: https://www.patreon.com/sciencetime
BUY Science Time Merch: https://teespring.com/science-time-merch

Sources:
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
Elon Musk talks at: National Governors Association
https://www.youtube.com/watch?v=2C-A797y8dA
Elon Musk talks at: SXSW https://www.youtube.com/watch?v=kzlUyrccbos&t=0s
Nick Bostrom TedTalk: https://www.youtube.com/watch?v=MnT1xgZgkpk&t=0s

Artificial superintelligence (ASI), sometimes referred to as digital superintelligence, is a hypothetical agent that possesses intelligence far surpassing that of the smartest and most gifted human minds. AI is a rapidly growing field of technology with the potential to make huge improvements in human wellbeing. However, the development of machines with intelligence vastly superior to humans will pose special, perhaps even unique, risks.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when or how this will happen.

One only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:
– Intelligence is a product of information processing in physical systems.
– We will continue to improve our intelligent machines.
– We do not stand on the peak of intelligence or anywhere near it.

Philosopher Nick Bostrom expressed concern about what values a superintelligence should be designed to have.
Any type of AI superintelligence could proceed rapidly to its programmed goals, with little or no distribution of power to others. It may not take its designers into account at all. The logic of its goals may not be reconcilable with human ideals. The AI’s power might lie in making humans its servants rather than vice versa. If it were to succeed in this, it would “rule without competition under a dictatorship of one”.

Elon Musk has also warned that the global race toward AI could result in a third world war.
To avoid the ‘worst mistake in history’, it is necessary to understand the nature of an AI race and to avoid the kind of development that could lead to an unfriendly artificial superintelligence.

To ensure the friendly nature of artificial superintelligence, world leaders should work to ensure that this ASI is beneficial to the entire human race.

#AI #ASI #AGI


Sources:
Nick Bostrom Ted Talk: https://www.youtube.com/watch?v=MnT1xgZgkpk&t=0s

Hanson Robotics Limited’s Ben Goertzel, Sophia and Han at RISE 2017.

Now for something that’s never been done onstage before. While they may not be human, our next guests are ready to discuss the future of humanity, and how they see their kind flourishing over the coming years.

Want to be at #RISEConf next year? Get your ticket now: http://news.riseconf.com/YT_tickets

Work, play, privacy, communication, finance, war, and dating: algorithms and the machines that run them have upended them all. Will artificial intelligence become as ubiquitous as electricity? Is there any industry AI won’t touch? Will AI tend to steal jobs and exacerbate income inequalities, or create new jobs and amplify human abilities at work — or, both? How can the global population adjust to the changes ushered in by artificial intelligence and its capabilities? In light of these changes, how will we remake work, education, and community? Can we build it better than we did before?

Andrew Ng
Jason Pontin, Interviewer

All questions are listed below with time stamps:
0:18 – What is your book ‘Crisis of Control’ about?
3:34 – Musk vs. Zuckerberg – who is right?
7:24 – What does Musk’s new company Neuralink do?
10:27 – What would the Neural Lace do?
12:28 – Would we become telepathic?
13:14 – Intelligence vs. Consciousness – what’s the difference?
14:30 – What is the Turing Test on Intelligence of AI?
16:49 – What do we do when AI claims to be conscious?
19:00 – Have all other alien civilizations been wiped out by AI?
23:30 – Can AI ever become conscious?
28:21 – Are we evolving to become the cells in the greater organism of AI?
30:57 – Could we get wiped out by AI the same way we wipe out animal species?
34:58 – How could coaching help humans evolve consciously?
37:45 – Will AI get better at coaching than humans?
42:11 – How can we understand non-robotic AI?
44:34 – What would you say to the techno-optimists?
48:27 – How can we prepare for financial inequality regarding access to new technologies?
53:12 – What can, should and will we do about AI taking our jobs?
57:52 – Are there any jobs that are immune to automation?
1:07:16 – Is utopia naive? Won’t there always be problems for us to solve?
1:11:12 – Are we solving these problems fast enough to avoid extinction?
1:16:08 – What will the sequel be about?
1:17:28 – What is one practical action people can take to prepare for what is coming?
1:19:55 – Where can people find out more?

As technology has increasingly brought computing off of the laptop and into our social domain, we see society more and more impacted by the interactions allowed by mobile technologies and increasingly ubiquitous communications. These new sources of data, coupled with new breakthroughs in computation, and especially AI, are opening new vistas for ways that information comes into our world, and how what we do increasingly impacts others. Current social networking sites will be, to the coming generation of social machines, what the early “entertainment” web was to the read/write capabilities once called “Web 2.0.” In this talk, we explore some of these trends and some of the promises and challenges of these emerging technologies.

James Hendler is the Director of the Institute for Data Exploration and Applications and the Tetherless World Professor of Computer, Web and Cognitive Sciences at RPI. He also serves as a Director of the UK’s charitable Web Science Trust. Hendler is coauthor of the recently published “Social Machines: The Coming Collision of Artificial Intelligence, Social Networking and Humanity” (Apress, 2016) and the earlier “Semantic Web for the Working Ontologist” (Elsevier, 2009/2011), “Web Science: Understanding the Emergence of Macro-Level Features on the World Wide Web” (Now Press, 2013), and “A Framework for Web Science” (Now Press, 2006). He has also authored over 300 technical papers and articles in the areas of the Semantic Web, artificial intelligence, agent-based computing and high-performance processing.

One of the originators of the “Semantic Web,” Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a former member of the US Air Force Science Advisory Board, and is a Fellow of the American Association for Artificial Intelligence, the British Computer Society, the IEEE and the AAAS. He is also the former Chief Scientist of the Information Systems Office at the US Defense Advanced Research Projects Agency (DARPA) and was awarded a US Air Force Exceptional Civilian Service Medal in 2002. He is also the first computer scientist to serve on the Board of Reviewing editors for Science. In 2010, Hendler was named one of the 20 most innovative professors in America by Playboy magazine and was selected as an “Internet Web Expert” by the US government. In 2012, he was one of the inaugural recipients of the Strata Conference “Big Data” awards for his work on large-scale open government data, and he is a columnist and associate editor of the Big Data journal. In 2013, he was appointed as the Open Data Advisor to New York State and in 2015 appointed a member of the US Homeland Security Science and Technology Advisory Committee. In 2016, Hendler became a member of the National Academies Board on Research Data and Information.

This video was recorded at FTC 2017 – http://saiconference.com/FTC
Upcoming Conference: https://saiconference.com/FTC

Molly Steenson : Carnegie Mellon University : AI & Humanity Archive

http://aiandhumanity.org

Recorded September 7, 2019

Dr. Geordie Rose, founder of D-Wave, the world’s first quantum computing company, and Kindred, the world’s first robotics company to use reinforcement learning in a production environment, returns to ideacity to share his theory of understanding minds and how it is applied to AI, with the understanding that “every thought that a human has ever thought resides inside our mind.” This talk will make you think.

Geordie founded D-Wave, the world’s first quantum computing company, and Kindred, the world’s first robotics company to use reinforcement learning in a production environment. He has sold quantum computers and robots that learn to Google, NASA, Lockheed Martin, The Gap, and several US government agencies. He has a PhD in theoretical physics from UBC, was a two-time Canadian national wrestling champion, was the 2010 NAGA world champion in Brazilian Jiu-Jitsu in both gi and no-gi categories, was named the 2011 Canadian Innovator of the Year, was one of Foreign Policy Magazine’s 100 Leading Global Thinkers of 2013, and for a short time held the Guinness Book of World Records world record for the most yogurt eaten in one minute.

[Subtitles included] Turn on captions [CC] to enable them 🙂

This video explains AI concepts: types of AI, how AI works, the benefits and disadvantages of artificial intelligence, what will happen if AI surpasses human intelligence, machine learning, the technological singularity, artificial neural networks, narrow artificial intelligence, weak AI, strong AI, artificial general intelligence, superintelligence, etc.

Music credits: Epic Mountain https://soundcloud.com/epicmountain/war-on-drugs

Video clips are from the Terminator movies

Time Travel: Explained in a nutshell | Can we time travel? | 5 possible ways including limitations https://www.youtube.com/watch?v=ZJoGoH3B0gs&t=61s

All about Quasars: The brightest things in the universe https://www.youtube.com/watch?v=cR2ni…

Black hole, White hole and Wormhole Explained as fast as possible
https://www.youtube.com/watch?v=huqwH…

Top 6 certain astronomical events in 21st century & another top 5 events in the future beyond that https://www.youtube.com/watch?v=aNFih…

5 Mysterious and Unknown Things of the Universe [Subtitles] https://www.youtube.com/watch?v=k_onv…

My email: bandhanislam@yahoo.com
My Facebook ID: https://www.facebook.com/bandhan.islam.1
Facebook page: https://www.facebook.com/theodd5sstudio/

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

#Artificialintelligence #Technology #ElonMusk

Berkeley’s Stuart Russell says making sure that AI benefits humanity is complicated, with concerns dating back to Alan Turing. He uses the example of eradicating cancer with the help of AI to illustrate the potential dangers.

For full audio and transcript, please go to: https://www.carnegiecouncil.org/studio/multimedia/20181204-control-responsible-innovation-artificial-intelligence

What could advanced artificial intelligence mean for humanity? – Second Thought
SUBSCRIBE HERE: http://bit.ly/2nFsvTS

From the very earliest mechanical calculators to the phone you’re probably watching this video on, computing power has come a long way in a relatively short time. We’re beginning to see some very promising artificial intelligence experiments, and that has people wondering…what should we expect? What could advanced AI mean for humanity?

Sources and Further Reading:

http://www.singularity.com/qanda.html
https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf
http://library.fora.tv/2012/10/14/Stuart_Armstrong_How_Were_Predicting_AI
http://www.fhi.ox.ac.uk/Reports/2008-3.pdf
http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
https://www.cctvcambridge.org/node/71611
http://spectrum.ieee.org/computing/hardware/who-is-who-in-the-singularity
http://users.eniinternet.com/bradleym/Compare.html

Music from Jukedeck – create your own at http://jukedeck.com

New Videos Every Tuesday and Friday!

Follow Second Thought on Social Media!
Twitter: https://twitter.com/_SecondThought
Facebook: https://www.facebook.com/secondthoughtchannel/
Reddit: https://www.reddit.com/r/SecondThought/
Discord: https://discordapp.com/invite/5FTJz3W

Support Second Thought on Patreon!
https://www.patreon.com/secondthought

Watch More Second Thought:
Latest Uploads | Second Thought
Popular Videos | Second Thought

About Second Thought:
Second Thought is a channel devoted to the things in life worth thinking about! Science, history, politics, religion…basically everything you’re not supposed to talk about at the dinner table. Welcome!

If you’re interested in being a contributor for Second Thought, send me an email with what you do (research, art, music, etc.) and I’ll be more than happy to talk to you and add you to the Thought Squad!

Business Email: secondthoughtchannel@gmail.com

Could artificial intelligence end humanity? We asked one of the world’s leading experts, Professor Stuart Russell. Be prepared to be freaked out.

Professor Stuart Russell’s Book ‘Human Compatible: AI and the Problem of Control’ is out now:
https://www.penguin.co.uk/books/307/307948/human-compatible/9780241335208.html

Support DDN: http://www.patreon.com/DoubleDownNews

AI is massively transforming our world, but there’s one thing it cannot do: love. In a visionary talk, computer scientist Kai-Fu Lee details how the US and China are driving a deep learning revolution — and shares a blueprint for how humans can thrive in the age of AI by harnessing compassion and creativity. “AI is serendipity,” Lee says. “It is here to liberate us from routine jobs, and it is here to remind us what it is that makes us human.”

Check out more TED Talks: http://www.ted.com

The TED Talks channel features the best talks and performances from the TED Conference, where the world’s leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design — plus science, business, global issues, the arts and more.

Follow TED on Twitter: http://www.twitter.com/TEDTalks
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: https://www.youtube.com/TED

Human existence will end very soon | Will Artificial Intelligence destroy Humanity in Hindi

Follow us:
https://www.facebook.com/MysteriousWorldVideos
https://www.twitter.com/MysteriousHindi
https://www.instagram.com/mysteriousworldvideos

Video Source Credits: NTDTV https://www.youtube.com/user/NTDTV

Video Source Credits: CNN News https://www.youtube.com/channel/UCupvZG-5ko_eiXAupbDfxWw

WATCH FULL EPISODE: https://youtu.be/NYNN87txLWQ

.@SamHarrisOrg on how @WestworldHBO crosses uncanny valley of robotics & raises moral issues & questions about humanity-w/@jason-THX @wistia

Today’s guest is Sam Harris, philosopher, neuroscientist and best-selling author of books including “Waking Up,” “The End of Faith,” “Letter to a Christian Nation,” and “The Moral Landscape.” Jason and Sam explore a wide range of topics, including the ethics of robots, the value of meditation, Trump’s lies, and Sam’s most recent obsession, AI, which stemmed from an initial conversation with Elon Musk. Sam argues that the threat of uncontrolled AI is one of the most pressing issues of our time and poses the question: Can we build AI without losing control over it? The two then discuss why meditation is so important for entrepreneurs and business people. Sam has built his brand and fan base around radical honesty and authenticity, so the conversation naturally segues to Trump and his lies. This is only the first of two parts, so stay tuned for much more.

For full show notes, subscribe to http://thisweekinstartups.com/about/#allsubscribe

This is the first symposium of Xapiens at MIT – “The Future of Homo Sapiens”

The future of our species will be heavily influenced by technical advancements and ethical paradigm shifts over the next several decades. Artificial intelligence, neural enhancement, gene editing, solutions for aging and interplanetary travel, and other emerging technologies are bringing sci-fi’s greatest ideas to reality.

Sponsored by the MIT Media Lab and the MIT McGovern Institute for Brain Research.

———————————————————————————————————–

Full Agenda:

– Opening remarks from Joe Paradiso – https://youtu.be/9bG40ySgE8I
A.W. Dreyfoos Professor and Associate Academic Head of Media Arts and Sciences at MIT, Director of the Responsive Environments Group

– Pattie Maes – https://youtu.be/b-16PW9RvJc
Professor of Media Arts & Sciences at MIT, Director of Media Lab’s Fluid Interfaces group, TED speaker, Co-Founder of MIT spinoffs including Firefly Networks (Microsoft) and Tulip Interfaces

– Max Tegmark – https://youtu.be/IGOuV6UyQ1Q
Professor of Physics at MIT, Scientific director of the Foundational Questions Institute, Co-founder of the Future of Life Institute, Director of the Tegmark Lab at MIT

– David Sinclair – https://youtu.be/wYfo_9X-UaI
Professor of Genetics at Harvard Medical School & Co-director of the Paul F. Glenn Center for the Biological Mechanisms of Aging, Co-Founder of Sirtris and Life Biosciences, Director of the Sinclair Lab at Harvard

– George Church – https://youtu.be/oQV_1b_sC_g
Professor of Genetics at the Blavatnik Institute at Harvard Medical School (HMS), Director of the HMS NHGRI Center of Excellence in Genomic Science & Personal Genome Project, Broad Institute & Wyss Institute for Biologically Inspired Engineering at Harvard

– Ed Boyden – https://youtu.be/L6ShA0OQfXs
Y. Eva Tan Professor in Neurotechnology at the MIT Media Lab and McGovern Institute for Brain Research, Co-director of the MIT Center for Neurobiological Engineering, Leader of the Synthetic Neurobiology Group

– Panel w/ Joe Paradiso, Pattie Maes, Max Tegmark, David Sinclair, George Church, & Ed Boyden.
https://youtu.be/6fPl6s7Us6c

———————————————————————————————————–

Xapiens is MIT’s first interdisciplinary collective seeking to explore the technical and ethical issues surrounding the use of technology to overcome the limitations of the human mind & body.
Like our Facebook Page: https://www.facebook.com/xapiensatMIT/

Xavier Vasques is a mathematician and neuroscientist. He speaks about how artificial intelligence can improve humanity, specifically by providing elevated expertise, time compression and, most importantly, deeper human engagement and relationships. “Artificial Intelligence Will Improve Humanity”

Xavier Vasques is a mathematician and neuroscientist. In 2005, Xavier received his Master’s Degree in Applied Mathematics from the University of Pierre et Marie Curie, jointly accredited with the École Normale Supérieure and École Polytechnique (France). In 2008, Xavier received his Master’s Degree in Engineering (Computer Science) from the Conservatoire National des Arts et Métiers and his Ph.D. in Neurosciences from the Montpellier Faculty of Medicine.

Now, Xavier is the CTO of IBM Systems Hardware in France and the head of Clinical Neurosciences Research Laboratory (LRENC) in France. Xavier is passionate about artificial intelligence and mathematics applied to neurosciences.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Debate rages over whether artificial intelligence could cross a threshold of awareness that would cause it to pose an existential risk to the human race. If that were to happen, though, how might it come about, and what could we do now to be ready? Futurist and technologist Peter Scott walks through some of the forces propelling AI towards immense power and what it could do with that power. How might AI be trained to show or mimic human behavior? The answer turns out to be a wake-up call that all of us can do something about right now.

Born in the United Kingdom, Peter Scott received a master’s degree in Computer Science from Cambridge University in 1983 and went to work for NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California. He moved to Canada in 1999 (now holding triple citizenship) and went freelance, continuing to serve JPL but also writing and speaking. At the same time, he developed a parallel career in “soft” fields of human development, getting certified in Neuro-Linguistic Programming and coaching. Bridging these disparate worlds positions him to envisage a delicate solution to the existential threats facing humanity arising from exponential technology progress. His 2017 book, “Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race”, and his 2018 TEDx talk explore that issue. He is now working with coaches from around the world to assist businesses and individuals in surviving and thriving through crisis.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx
