NVIDIA AI Podcast

Lowering the Cost of Intelligence With NVIDIA's Ian Buck - Ep. 284

Dec 29, 2025 · 38m
AI Summary
  • Mixture-of-experts (MoE) architectures enable more intelligent AI models without a proportional increase in compute and cost
  • Ian Buck explains how MoE models work and the critical role of co-design across compute, networking, and software to maximize their potential
  • MoE represents a path to more efficient frontier AI models by specializing computation rather than scaling uniformly (see the routing sketch after this list)
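
For intuition, below is a minimal sketch of top-k expert routing, the mechanism commonly behind MoE layers. The module name SimpleMoE, the sizes, and the top-2 routing scheme are illustrative assumptions for this sketch, not details confirmed by the episode.

    # Minimal MoE sketch, assuming common top-k token routing
    # (illustrative; not necessarily the exact design discussed on the episode).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleMoE(nn.Module):
        def __init__(self, d_model: int, n_experts: int, top_k: int = 2):
            super().__init__()
            self.top_k = top_k
            # Router scores each token against every expert.
            self.router = nn.Linear(d_model, n_experts)
            # Each expert is a small feed-forward network.
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model),
                              nn.GELU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (tokens, d_model). Only the top-k experts run per token,
            # so per-token compute grows with k, not with the expert count.
            weights, idx = self.router(x).topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    tokens = torch.randn(8, 64)                       # 8 tokens, width 64
    moe = SimpleMoE(d_model=64, n_experts=8, top_k=2)
    print(moe(tokens).shape)                          # torch.Size([8, 64])

This is the sense in which MoE specializes rather than scales uniformly: total parameters grow with the number of experts, while per-token compute stays fixed at the k experts the router selects.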

Guests on This Episode

Ian Buck
2 podcast appearances
