PodcastIntel
Future of Life Institute

Can AI Do Our Alignment Homework? (with Ryan Kidd)

Feb 6, 2026 · 1:46:33
AI Summary
  • Discusses AGI timelines, risks from model deception, and the current state of AI safety research.
  • Ryan Kidd outlines the MATS program's research tracks and key researcher archetypes.
  • Offers advice for people considering a career in AI safety.

