THE FUTURE IS HERE

'Governing Superintelligence' – Synthetic Pathogens, The Tree of Thoughts Paper and Self-Awareness

Two documents released in the last few days, OpenAI’s ‘Governance of Superintelligence’ and DeepMind’s ‘Model Evaluation for Extreme Risks’, reveal that the top AGI labs are thinking hard about how to live with, and govern, a superintelligence. I want to cover what they see coming. I’ll show you persuasive evidence that the GPT-4 model has been altered and now gives different outputs than it did two weeks ago. And I’ll look at the new Tree of Thoughts and CRITIC prompting systems, which might constitute ‘novel prompt engineering’. I’ll also touch on the differences among the AGI lab leaders and what comes next.

This video will cover what I think is the most pressing threat, synthetic bioweapons, among the many others, including a situationally aware superintelligence, deepfakes, and audio manipulation. I’ll delve into the views of Dario Amodei, the secretive head of Anthropic, and Emad Mostaque, along with Sam Altman’s new interview, Sundar Pichai’s comments and even Rishi Sunak’s meeting.

In the middle of the video I’ll touch on not just the Tree of Thoughts prompting method but also snowballing hallucinations (featured in a new paper), Code Interpreter’s performance on the MMLU, and why we shouldn’t underestimate GPT models.

Governance of Superintelligence: https://openai.com/blog/governance-of-superintelligence
Model evaluation for extreme risks: https://arxiv.org/pdf/2305.15324.pdf
Sunak Meeting: https://twitter.com/RishiSunak/status/1661466378271072257/photo/1
Altman Self Awareness Interview, Wisdom 2.0: https://www.youtube.com/watch?v=hn1Y6GVWUV0
CRITIC Paper: https://arxiv.org/pdf/2305.11738.pdf
Tree of Thoughts Paper: https://arxiv.org/pdf/2305.10601.pdf
Snowballing Hallucinations: https://arxiv.org/pdf/2305.13534.pdf
GPT-4 is Shockingly Stupid TED Talk: https://www.youtube.com/watch?v=SvBR0OGT5VI
DeepMind Alignment Team Response to List of Lethalities: https://www.alignmentforum.org/posts/qJgz2YapqpFEDTLKn/deepmind-alignment-team-opinions-on-agi-ruin-arguments
Altman Collison Interview: https://www.youtube.com/watch?v=1egAKCKPKCk&t=484s
Emergent Scientific Research: https://arxiv.org/ftp/arxiv/papers/2304/2304.05332.pdf
Manhattan Project AI Safety: https://www.salon.com/2023/05/18/why-we-need-a-manhattan-project-for-ai-safety/?s=09
60 Minutes Tweet: https://twitter.com/60Minutes/status/1660428419438354435
Fake Image S&P: https://twitter.com/KobeissiLetter/status/1660664125574217731
Deepfake TED Talk: https://www.youtube.com/watch?v=SHSmo72oVao
Can We Stop the Singularity: https://www.newyorker.com/science/annals-of-artificial-intelligence/can-we-stop-the-singularity
Engineering the Apocalypse: https://www.youtube.com/watch?v=UaRfbJE1qZ4
Emad Mostaque Tweet: https://twitter.com/EMostaque/status/1660983126783295491
Sundar Pichai FT: https://www.ft.com/content/8be1a975-e5e0-417d-af51-78af17ef4b79
Dario Amodei Interview: https://www.youtube.com/watch?v=uAA6PZkek4A
Claude Next: https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/?guccounter=1&guce_referrer=aHR0cHM6Ly90LmNvLw&guce_referrer_sig=AQAAANOE4HPKUImNWs6-aI_2ZaftGXKd-v04T3WrVl6Q69pJnWZjD_O5XvQfaRSf7dpHkt1QMg909duocg7Ks8whUwuhhJhh8O2Pjbak-u2T4gxOJgPY7DqXpRwFl-ZWRbpfYwnZvV6eMnWsGJNAciMXANPw_MYHR3R6AkBMe_Qtqq7z
Altman Blog: https://moores.samaltman.com/
Liron Shapira Tweet on Yann LeCun: https://twitter.com/liron/status/1659618568282185728

https://www.patreon.com/AIExplained