PodcastIntel
Lex Fridman Podcast

#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Mar 30, 2023 · 3:22:35
AI Summary
  • Eliezer Yudkowsky, AI safety researcher, discusses the dangers of superintelligent AI and the existential risk it poses to humanity
  • The conversation covers the alignment problem: the challenge of ensuring that advanced AI systems pursue beneficial goals
  • Topics include AI safety concerns, the potential for catastrophic outcomes, and technical approaches to AI alignment

Guests on This Episode

Eliezer Yudkowsky
1 podcast appearance

