Episodes (Page 4)
Blog post examines whether AI scaling will continue producing capability gains or hit fundamental limits
Jung Chang lived through China's Cultural Revolution as the daughter of a denounced official; she witnessed the CCP's totalitarian subjugation of a billion people through systematic persecution
Andrew Roberts — Why Hitler lost WWII, Churchill as applied historian, & Napoleon as startup founder
Andrew Roberts examines how Nazi ideology cost Hitler WWII and how Churchill functioned as an applied historian, using lessons from the past to inform his decisions
Dominic Cummings details the catastrophic failures in Western government revealed by the COVID response; civil service incompetence and institutional dysfunction allowed preventable crises to unfold
Paul Christiano holds modest AGI timelines (40% by 2040, 15% by 2030) and addresses whether inventing RLHF was regrettable and whether alignment research is necessarily dual-use
Shane Legg expects AGI around 2028; he argues that aligning superhuman models will require approaches different from current RL methods, along with architectural innovations beyond transformers
Grant Sanderson argues that advanced mathematics doesn't require AGI, and that mathematically talented students should pursue foundational research, teaching, or AI alignment rather than finance
Continental vs. maritime power mentalities explain Xi's and Putin's strategic errors; dictators consistently misread adversary resolve and overestimate their military advantages, leading to catastrophic miscalculations
Dario Amodei shares insights on AI breakthroughs and model scaling.
Andy Matuschak details his intense and effective textbook learning process.
Carl Shulman (Pt 2) — AI Takeover, bio & cyber attacks, detecting deception, & humanity's far future
Carl Shulman outlines potential AI takeover scenarios.
Carl Shulman presents a model for rapid AI intelligence explosion.
The pace of AI progress is compared to the Manhattan Project.
Eliezer Yudkowsky — Why AI will kill us, aligning LLMs, nature of intelligence, SciFi, & rationality
Eliezer Yudkowsky argues AI poses an existential threat.
Ilya Sutskever (OpenAI Chief Scientist) — Why next-token prediction could surpass human intelligence
Ilya Sutskever discusses the potential of next-token prediction to surpass human intelligence.
Nat Friedman, former GitHub CEO, discusses deciphering ancient scrolls and open source.
Brett Harrison details his tenure as FTX US president and SBF's leadership.
Marc Andreessen explores how AI will revolutionize software and what possibilities lie ahead.
Garett Jones explains the significance of national IQ and cultural values.
Lars Doucet explores Georgism and why rents are so high.