Marcos Lopez de Prado, Principal and Head of Machine Learning, AQR Capital Management. Marcos presents at the annual AQR Asset Management Institute event: Insight Summit 2018.

This annual event distils the best insights on critical issues impacting the investment industry today, bringing together the academic, practitioner, and regulator communities.

See more about the event: https://bit.ly/2ExIY9l

Subscribe on YouTube: http://bit.ly/2fQAm0p
Follow on Twitter: http://bit.ly/2FKNIFe

Discussion points:
– In-group convergence: thinking in true & false vs right & wrong
– The group mind may be more stupid than the smartest individuals in the group

Joscha Bach, Ph.D. is an AI researcher who has worked on and published about cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany, and has built computational models of motivated decision making, perception, categorization, and concept formation. He is especially interested in the philosophy of AI and in the augmentation of the human mind.

Joscha has taught computer science, AI, and cognitive science at the Humboldt University of Berlin and the Institute for Cognitive Science at Osnabrück. His book “Principles of Synthetic Intelligence” (Oxford University Press) is available on Amazon now:
https://www.amazon.com/Principles-Synthetic-Intelligence-PSI-Architectures/dp/0195370678

Many thanks for watching!

Consider supporting SciFuture by:
a) Subscribing to the SciFuture YouTube channel: http://youtube.com/subscription_center?add_user=TheRationalFuture

b) Donating
– Bitcoin: 1BxusYmpynJsH4i8681aBuw9ZTxbKoUi22
– Ethereum: 0xd46a6e88c4fe179d04464caf42626d0c9cab1c6b
– Patreon: https://www.patreon.com/scifuture

c) Sharing the media SciFuture creates: http://scifuture.org

Kind regards,
Adam Ford
– Science, Technology & the Future

In this talk Galina Shubina will draw on her experiences to discuss how implicit and explicit biases affect our daily lives, and how machine learning algorithms suffer from similar mindbugs.

Galina Shubina has a background in software engineering and all things data. She has two children, three citizenships, and way too many books. Galina has a Bachelor’s, a Master’s, and an unfinished PhD in computer science and mathematics. Over a decade of her career was spent at Google, and she currently runs her own one-person company. Galina is passionate about improving gender representation and happiness in the tech industry, and about challenging herself and others to think outside their boxes in their work and daily lives.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

In the glorious AI-assisted future, all decisions are objective and perfect, and there’s no such thing as cognitive biases. That’s why we created AI and machine learning, right? Because humans can make mistakes, and computers are perfect. Well, there’s some bad news: humans make those AIs and machine learning models, and as a result humanity’s biases and missteps can subtly work their way into our AIs and models.

All hope isn’t lost, though! In this talk you’ll learn how science and statistics have already solved some of these problems and how a robust awareness of cognitive biases can help with many of the rest. Come learn what else we can do to protect ourselves from these old mistakes, because we owe it to the people who’ll rely on our algorithms to deliver the best possible intelligence!

NDC Conferences
https://www.ndcconferences.com/
https://ndcminnesota.com/

When it comes to decision making, it might seem that computers are less biased than humans. But algorithms can be just as biased as the people who create them.


Increasingly, algorithms and machine learning are being implemented at various touch points throughout the criminal justice system, from deciding where to deploy police officers to aiding in bail and sentencing decisions. The question is, will this tech make the system more fair for minorities and low-income residents, or will it simply amplify our human biases?

We all know humans are imperfect. We’re subject to biases and stereotypes, and when these come into play in the criminal justice system, the most disadvantaged communities end up suffering. It’s easy to imagine that there’s a better way, that one day we’ll find a tool that can make neutral, dispassionate decisions about policing and punishment.

Some think that day has already arrived.

Around the country, police departments and courtrooms are turning to artificial intelligence algorithms to help them decide everything from where to deploy police officers to whether to release defendants on bail.

Supporters believe that the technology will lead to increased objectivity, ultimately creating safer communities. Others, however, say that the data fed into these algorithms is encoded with human bias, meaning the tech will simply reinforce historical disparities.

Learn more about the ways in which communities, police officers, and judges across the U.S. are using these algorithms to make decisions about public safety and people’s lives.

» Subscribe to CNBC: http://cnb.cx/SubscribeCNBC

About CNBC: From ‘Wall Street’ to ‘Main Street’ to award-winning original documentaries and reality TV series, CNBC has you covered. Experience special sneak peeks of your favorite shows, exclusive video and more.

Connect with CNBC News Online
Get the latest news: http://www.cnbc.com/
Follow CNBC on LinkedIn: https://cnb.cx/LinkedInCNBC
Follow CNBC News on Facebook: http://cnb.cx/LikeCNBC
Follow CNBC News on Twitter: http://cnb.cx/FollowCNBC
Follow CNBC News on Google+: http://cnb.cx/PlusCNBC
Follow CNBC News on Instagram: http://cnb.cx/InstagramCNBC

#CNBC

How AI Could Reinforce Biases In The Criminal Justice System

As humans we’re inherently biased. Sometimes it’s explicit and other times it’s unconscious, but as we move forward with technology, how do we keep our biases out of the algorithms we create and their programming? Documentary filmmaker Robin Hauser argues that we need to have a conversation about how AI should be governed and ask who is responsible for overseeing the ethical standards of these supercomputers. “We need to figure this out now,” she says. “Because once skewed data gets into deep learning machines, it’s very difficult to take it out.”

About the TED Institute: We know that innovative ideas and fresh approaches to challenging problems can be discovered inside visionary companies around the world. The TED Institute helps surface and share these insights. Every year, TED works with a group of select companies and foundations to identify internal ideators, inventors, connectors, and creators. Drawing on the same rigorous regimen that has prepared speakers for the TED main stage, TED Institute works closely with each partner, overseeing curation and providing intensive one-on-one talk development to sharpen and fine tune ideas.

Learn more at http://www.ted.com/ted-institute

Follow TED Institute on Twitter @TEDPartners
Subscribe to our channel: https://www.youtube.com/user/TEDInstitute
