PodcastIntel
Neural Intel Pod

Neuralintel.org · News · EN-US

Neural Intel Pod provides fast, technical breakdowns of major AI news and developments, from new model releases to leaked research. The podcast is narrated by AI for clarity and speed, targeting researchers, engineers, and builders who want in-depth technical context without the hype.

339 Episodes · 15 Guests

Episodes (Page 2)

Feb 18, 2026 · 00:15:17
Grok 4.20 uses stateful Python execution and X semantic search.
Feb 16, 2026 · 00:16:18
Fiber optics achieve 256 Tb/s, enabling trillion-parameter models via pipelined transmission.
Feb 15, 2026 · 00:36:31
Dario Amodei estimates 90% probability of human-level AI by 2035.
Dario Amodei
Feb 15, 2026 · 00:34:46
Peter Steinberger created OpenClaw, a self-modifying AI dismantling the app market.
Peter Steinberger
Feb 14, 2026 · 00:35:46
MiniMax M2.5 offers frontier model capabilities without user cost concerns.
Feb 12, 2026 · 00:12:53
Zhipu AI unveils GLM-5, a 744B parameter AI model.
Feb 4, 2026 · 00:29:59
OpenClaw's autonomous AI swarm architecture exhibits critical security vulnerabilities, scoring 2/100 on security.
Jan 29, 2026 · 00:13:10
Explores panpsychism and the cosmic consciousness hypothesis, drawing on Rupert Sheldrake's work.
Jan 22, 2026 · 00:26:56
Sensitivity analysis quantifies how uncertainty in a model's outputs derives from uncertainty in its input variables.
Jan 19, 2026 · 00:50:28
A deep exploration of MIT's algorithmic decision-making framework, covering probabilistic reasoning and Bayesian networks.
Jan 9, 2026 · 00:29:53
Chinese logographic characters (hanzi) offer a linguistic density advantage, enabling token-efficient reasoning in AI models.
Jan 7, 2026 · 00:30:15
Distinguishes regression (continuous outputs) from classification (discrete labels) in machine learning fundamentals.
Jan 5, 2026 · 00:34:40
Iterative deployment with explicit quality filtering triggers emergent generalization, despite concerns about training on synthetic data.
Jan 1, 2026 · 00:28:44
DeepSeek's mHC uses the Birkhoff polytope to treat the residual mapping as a convex combination of permutations, preserving norms.
Dec 25, 2025 · 00:30:18
DeepSeek V3 and Mistral Large both deploy 128-expert MoE architectures with matching vocabulary size (129K) and embedding dimension (7,168).
Dec 24, 2025 · 00:33:32
GLM-4.7 (358B parameters) achieves a 41% reasoning improvement over its predecessor, with Preserved Thinking across multi-turn dialogue.
Dec 23, 2025 · 00:27:12
The Medmarks v0.1 benchmark introduces reasoning-heavy MedXpertQA tasks, as previous medical AI benchmarks have become saturated.
Dec 21, 2025 · 00:35:23
RLVR (Reinforcement Learning from Verifiable Rewards) replaces RLHF as the primary LLM training endpoint, enabling reasoning development.
Dec 18, 2025 · 00:13:05
neural_net_checklist automates the diagnostic process for training neural networks, based on Karpathy's training recipe.
Dec 16, 2025 · 00:40:38
Nemotron 3 Nano uses Hybrid Mamba-Transformer MoE architecture with 31.6B total parameters but only 3.2B active per token, delivering 4x higher throughput and 3.3x faster inference than comparable ...
