The Neuron: AI Explained

The Neuron is a daily newsletter with 700,000+ readers that covers the latest AI developments, trends, and research; this is our podcast, hosted by Grant Harvey and Corey Noles. We aim to create digestible, informative, and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available Wednesdays and Sundays on all podcasting platforms and YouTube. Subscribe to our newsletter: https://www.theneurondaily.com/subscribe

Recent Episodes

BONUS: GPT 5.5 LIVE - The New GPT "Spud" Model is Here; Let's Break It
APR 25, 2026
OpenAI dropped GPT-5.5, so we did the only reasonable thing: went live immediately and tried to break it.

In this off-the-cuff Neuron Live, Corey and Grant walk through OpenAI's GPT-5.5 release notes, benchmark claims, rollout details, and early-access reactions before testing the model live across coding, reasoning, creativity, web research, and absurd prompt challenges. We also compare a few GPT-5.5 responses against Claude Opus 4.7, test Codex, build a new version of Cat Doom, and ask the important questions, like whether a sentient vending machine that only dispenses expired tuna salad deserves to live.

In this episode, we cover:
• What OpenAI says is new in GPT-5.5
• GPT-5.5’s improvements in coding, computer use, research, and knowledge work
• Early benchmark results across Terminal-Bench, GDPval, Frontier Math, BrowseComp, and scientific research tasks
• Why token efficiency may matter as much as raw intelligence
• GPT-5.5’s rollout across ChatGPT, Codex, Plus, Pro, Business, and Enterprise
• Live Codex testing with a one-shot Cat Doom game build
• Creative stress tests involving palindromes, time-traveling potatoes, dystopian vending machines, and Lord of the Rings product reviews
• First impressions of whether GPT-5.5 feels meaningfully different from GPT-5.4 and Claude Opus 4.7

This was not a formal benchmark. It was a first-contact livestream: messy, fast, weird, and exactly the kind of test we like.

Subscribe for more AI breakdowns, live model tests, beginner-friendly explainers, and weirdly useful prompt experiments from The Neuron.

Sign up for The Neuron newsletter: https://www.theneuron.ai/
Follow along for more AI news, analysis, and live experiments.
99 MIN
BONUS: LIVE: Claude Opus 4.7 Just Dropped. Here's What Actually Changed.
APR 17, 2026
Grant and Kyle dive into a comprehensive review and live test of the newly released Claude Opus 4.7, a cutting-edge large language model. This session explores its capabilities for coding and game dev, specifically referencing the "Renaissance / Plan Final Fantasy Tactics RPG Game" project. Discover how this AI model performs under pressure and its potential impact on game design workflows.

🔴 LIVE at 9:30AM PT / 12:30PM ET

Anthropic just dropped Claude Opus 4.7, and we’re putting it through the gauntlet in real time. Join Grant Harvey (Lead Writer at The Neuron) for an unscripted, warts-and-all test of Anthropic’s newest flagship model.

What we’re testing:
- Advanced coding on tasks Opus 4.6 struggled with
- New higher-resolution vision support for images up to ~3.75 megapixels
- File system-based memory across multi-session work
- The new xhigh effort level, which sits between high and max
- Claude Code’s new /ultrareview slash command
- Auto mode for longer, less-interrupted agent runs

Why this matters:
Opus 4.7 is the first model Anthropic is releasing with its new automatic cyber safeguards, following last week’s Project Glasswing announcement. It’s also the direct upgrade path from Opus 4.6 at the same price:
- $5 per million input tokens
- $25 per million output tokens

If you build on Claude, this is likely the model you’ll be using next.

What’s changing under the hood:
- New tokenizer, where the same input can map to more tokens depending on content type, roughly 1.0x to 1.35x
- State-of-the-art score on GDPval-AA, a third-party evaluation of economically valuable knowledge work
- Better instruction following, which means prompts written for earlier models may now behave differently
- Improvements across finance agent evals, document reasoning, and long-context tasks

Bring your hardest prompts. We’ll run them live and show you what breaks, what shines, and whether it’s worth migrating today.

Watch part two, where Grant covers Codex for (almost) anything: https://youtube.com/live/OiRkwm3-og0
📰 Full writeup in tomorrow’s newsletter.
🐱 Subscribe to The Neuron (700K+ readers): https://www.theneuron.ai
61 MIN