How do you actually work with AI coding tools in production? How do you break down features into AI-friendly tasks? How do you choose between agents and manual prompting? How do you write code that LLMs understand better? And where do spec-driven workflows fit in?

0:00 - Introduction
1:09 - What's the WORST thing you can do when adopting AI?
4:44 - Experimentation vs. Following Old Mental Models
7:06 - Working at Feature Level: Breaking Down AI Tasks
10:10 - Cheesy's Workflow: Brainstorming, Stride, and Task Management
13:45 - Phil's Approach: Staying in Flow State vs. Using Agents
16:01 - The Death of Prompts: Plugins and Tools Take Over
18:11 - Context Engineering vs. Prompt Engineering
21:00 - Context Window Size: Bigger Isn't Always Better
23:48 - Spec-Driven Development → Task Management Tools
25:22 - Model Wars: Anthropic vs. Open Source (Qwen, DeepSeek)
30:00 - Should You Short Anthropic Stock? (Philosophical Discussion)
33:00 - Why Claude Code Still Leads Despite Model Convergence
35:01 - Hardware Costs and the Future of AI Accessibility
38:11 - Does Boilerplate Death Change Architecture?
42:00 - When Should You Care About Code Organization with AI?
45:26 - Writing Code FOR LLMs: Semantic JavaScript and Context
47:49 - Wrap Up: Future Topics on LLM-Friendly Code

YouTube: https://bit.ly/3Xfv2bp
Apple Podcasts: https://apple.co/4bNrAJK
Spotify Podcasts: https://spoti.fi/4bZjtcA
LinkedIn Group: https://bit.ly/3wZIWDM
RSS Feed: https://bit.ly/3KsaODW
Twitter: https://bit.ly/4ecWHju