HockeyStick Show

Miko Pawlikowski


Details

Steal breakthrough ideas in tech, business & performance from world-class experts www.hockeystick.show

Recent Episodes

Inside OpenAI: the Future of Deep Learning, with Richard Heimann - HockeyStick #53
FEB 21, 2026
<p>Welcome to Episode 53 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Richard Heimann, Director of AI for the State of South Carolina and author of “Sutskever’s List”, to talk about the papers that built modern AI, the man behind OpenAI’s biggest breakthroughs, and what happens when living doubts become explosive decisions.</p><p>Richard walked me through Ilya Sutskever’s legendary reading list: 27 papers that supposedly explain 90% of what’s happening in artificial intelligence, and why understanding this curated canon matters more than drowning in the weekly flood of new research. The conversation moved fluidly between deep learning history, the Sam Altman firing saga, bubble economics, and the challenge of separating genuine progress from AGI fever dreams.</p><p><strong>The Reading List That Became a Book</strong></p><p>We started by exploring how a simple recommendation from Ilya to John Carmack turned into a full book project. When Ilya shared his reading list in 2021 or 2022, he made a promise: read these papers and you’ll understand 90% of what’s going on in AI.</p><p>Manning Publications initially wanted an anthology: 27 chapters analyzing each paper in isolation. Richard pushed back. The papers weren’t just standalone artifacts; they built on each other and told a larger human story. Ilya’s story. The publisher agreed, and Richard spent the last year weaving the technical breakthroughs into a narrative that makes sense for people who aren’t writing these papers themselves.</p><p>The book is done. The final chapters just went up on Manning’s early access program. Print release is scheduled for May 2025.</p><p><strong>Who Is Ilya Sutskever and Why Should We Care?</strong></p><p>For those who only know Ilya from the Sam Altman firing drama, Richard provided crucial context. This is the person responsible for AlexNet in 2012: the moment that launched the modern deep learning era. 
He’s behind Word2Vec, sequence-to-sequence models, and the scaling of transformers at OpenAI. GPT-1, 2, 3, and beyond.</p><p>But beyond the technical contributions, Ilya has this mystique. He doesn’t say much. When he does, it’s high signal. And his work has consistently centered on safety concerns, which makes him both a technical innovator and someone genuinely worried about the implications.</p><p>The reading list reflects his mental model. It gives insight into what he sees, what he values, and why he makes the decisions he makes.</p><p><strong>The Sam Altman Firing: Living Doubts Gone Wrong</strong></p><p>We spent significant time unpacking the OpenAI board saga. Richard’s take was fascinating: he traced it back to GPT-2 in 2019, when OpenAI deemed the model “too dangerous to release” and staged its rollout over nine months.</p><p>At the time, researchers were skeptical. It looked like hype-building. But Richard sees it differently now: it was a living doubt. Ilya and OpenAI acted on their safety concerns in a transparent, reversible way. They could always say “we were wrong” and release the full model, which they eventually did.</p><p>The Sam Altman firing was different. It was explosive, irreversible, and impossible to unwind once initiated. The lesson from a safety perspective: whatever your doubts are, structure them so you can reverse course if you’re wrong.</p><p><strong>Bubble Economics and the Free Lunch Era</strong></p><p>I asked the question everyone wants answered: are we in an AI bubble?</p><p>Richard’s response was nuanced. Yes, it’s bubbly. But bubbles aren’t inherently bad. Nothing important happens without bubbles. You don’t get this kind of capital, talent, and momentum from purely rational actors making measured bets.</p><p>The key difference from 2008: there’s real underlying technology here. 
It’s more like the dot-com bubble: bad ideas will get flushed out, valuations will correct, but the fundamental shift is genuine.</p><p>What’s remarkable isn’t the diminishing returns everyone’s complaining about. It’s that scaling worked at all. For 50-60 years, AI progress required genuine innovation: new architectures, new training tricks. For the last five years, we just made models bigger and threw more data at them. That free lunch was unprecedented.</p><p>Now the free lunch is ending. Ilya himself recently said the era of scaling is over. We’re going to need good ideas again.</p><p><strong>AGI: Paper Hopes vs. Living Technology</strong></p><p>Richard was refreshingly direct about AGI hype. He doesn’t find the concept appealing. It’s a paper hope: something people talk about but don’t actually build toward in meaningful ways.</p><p>The substrate we’re working with isn’t going to produce human-like intelligence. And we don’t need it to. The technology is already powerful and will continue improving linearly. But the exponential curves and S-curves are done. We’re hitting asymptotes.</p><p>The implication: a lot of the AI safety concerns about alignment and existential risk become less urgent. He doesn’t see an existential threat from his computer.</p><p><strong>What’s Underrated and Overrated</strong></p><p>I asked Richard what people are sleeping on and what’s empty hype.</p><p>Overrated: AGI and the entire AI safety research agenda focused on existential risk.</p><p>Underrated: The technology itself, at least among skeptics. Too many people dismiss these models as “stochastic parrots” or “just databases” without understanding what they actually are. The technology will be pervasive in five to ten years, and the skeptics are needlessly rounding down.</p><p><strong>Working in Government AI</strong></p><p>We also covered Richard’s day job: Director of AI for South Carolina. He evaluates use cases from 80+ state agencies, all interested in adopting AI. 
Some have clear ideas, others need help defining their approach.</p><p>About 80% is advisory: looking at use cases from technical, governance, privacy, and security perspectives. The remaining 20% is an informal accelerator developing strategic use cases in-house.</p><p>The scale is what attracts him. Even in a small state of 5 million people, the potential impact is enormous.</p><p>At its core, this episode was about understanding foundations in a field that rewards chasing novelty. How to build mental models that persist beyond the next model release. How to act on doubts without making irreversible mistakes. And what it takes to write a book that captures not just the papers, but the worldview behind them.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://www.hockeystick.show?utm_medium=podcast&#38;utm_campaign=CTA_1">www.hockeystick.show</a>
34 MIN
Exploring GenAI, with Maggie Engler & Numa Dhamani - HockeyStick #52
JAN 19, 2026
<p>Welcome to Episode 52 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Maggie Engler and Numa Dhamani, co-authors of “Introduction to Generative AI (Second Edition)”, to talk about navigating the AI landscape without getting swept up in hype, fear, or misinformation.</p><p>Maggie and Numa shared what it’s like to write a technical book in a field moving so fast that a second edition became necessary just a year after the first. The conversation moved fluidly between AI agents, copyright battles, bubble economics, and the challenge of staying grounded when headlines scream about both utopia and apocalypse.</p><p><strong>When Your Book Needs an Update Before the Ink Dries</strong></p><p>We started by exploring why a second edition was needed so quickly. The answer wasn’t just new models or better benchmarks—it was a fundamental shift in how people think about and use generative AI.</p><p>When the first edition came out, people were still asking “What is generative AI?” By the time they started the second edition, the question had become “How do I actually use this in my daily work?” The technology moved from experiment to infrastructure in less than two years.</p><p>Maggie and Numa described the challenge of writing about a field where specific results and capabilities change weekly. Their solution: focus on teaching people how to interpret new developments rather than chasing the latest numbers.</p><p><strong>Agents: Promise, Limitations, and Reality</strong></p><p>We spent significant time on AI agents—one of the biggest additions to the second edition. The conversation was refreshingly balanced. No wild predictions about fully automated workflows next quarter. No dismissive skepticism either.</p><p>They explained how agents show real promise in constrained domains like coding, where you can verify results against tests. Tool use capabilities have improved. Infrastructure like Anthropic’s Model Context Protocol is maturing. 
But we’re still far from the autonomous systems some headlines suggest.</p><p>The key insight: agents work best when you can clearly define success and verify outcomes. The further you get from that, the more human oversight you need.</p><p><strong>The Legal Wild West and Copyright Chaos</strong></p><p>The copyright discussion was particularly interesting. Maggie and Numa didn’t dance around the obvious: large-scale model developers are training on copyrighted material. The question isn’t whether it’s happening—it’s what happens next.</p><p>We talked about the recent Sora controversy, where OpenAI initially told anime studios they could opt out character by character, then reversed course within days. The lawsuits, the settlements, the attempts at licensing frameworks—it’s all still being negotiated in real time.</p><p>Their take: we’re converging on some baseline principles around transparency and accountability, but the intellectual property questions will take much longer to resolve.</p><p><strong>Bubble or Revolution? Yes.</strong></p><p>I asked the question everyone wants answered: are we in an AI bubble?</p><p>Their response was nuanced. Yes, there are bubble characteristics—high valuations, massive investment, limited returns, lots of speculation. But no, the underlying technology isn’t a passing fad. The comparison to the dot-com era felt apt: real value underneath, correction likely, but the fundamental shift is genuine.</p><p>Maggie predicted we’ll see market consolidation and some valuations adjusting. Numa emphasized we’re moving from wild optimism toward more measured metrics and tempered hype. But the core technology will keep evolving, and returns will materialize.</p><p><strong>Starting Points and Practical Advice</strong></p><p>We closed by discussing how people should actually get started with generative AI today. Their advice was simple: just play with the tools. Try Gemini, Claude, ChatGPT. Most have free tiers. Experiment with prompting. 
See what works for you.</p><p>The hesitation people feel—not knowing the “right” use cases or perfect prompts—is the main barrier. The best way through it is hands-on exploration, not more reading.</p><p>At its core, this episode was about maintaining perspective in a field that rewards extremes. How to stay informed without getting overwhelmed. How to evaluate capabilities honestly without falling into either hype or cynicism. And what it takes to write a book that stays relevant when the field updates faster than publishing cycles allow.</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://www.hockeystick.show?utm_medium=podcast&#38;utm_campaign=CTA_1">www.hockeystick.show</a>
28 MIN
Become Legendary, with Tommy Breedlove - HockeyStick #51
DEC 13, 2025
<p>Welcome to Episode 51 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with <a target="_blank" href="https://tommybreedlove.com/">Tommy Breedlove</a> (the author of the book “Legendary”) to talk about the long road from survival mode to self-worth, and how money, identity, and purpose get tangled together along the way.</p><p>Tommy shared his personal story, from growing up around addiction and incarceration to building a successful career, losing himself inside of it, and ultimately redefining what “success” actually means. The conversation moved fluidly between money, masculinity, relationships, and the quiet damage caused by chasing external validation.</p><p><strong>Money, Identity, and the Cost of Approval</strong></p><p>Tommy started by unpacking how early trauma and instability shape our relationship with achievement. For him, success became a shield. Money, status, and performance were ways to feel safe, respected, and untouchable.</p><p>He explained how this pattern shows up for many high performers, especially in tech and business. On the surface, things look great. Underneath, there is burnout, resentment, and a constant fear of being exposed. The more approval you chase, the more expensive it becomes to maintain the image.</p><p><strong>When Net Worth Becomes Self-Worth</strong></p><p>We spent time digging into how money quietly becomes a proxy for value. Tommy talked about how easy it is to confuse financial success with identity, and how that mindset erodes relationships, health, and joy over time.</p><p>He challenged the idea that more is ever enough when the underlying wound is unresolved. Without self-respect, success only amplifies insecurity. With it, money becomes a tool instead of a scoreboard.</p><p><strong>Redefining Success on Your Own Terms</strong></p><p>The conversation shifted toward what it actually takes to step off the treadmill. 
Tommy described slowing down, setting boundaries, and getting honest about what you want rather than what you think you should want.</p><p>That process often involves hard tradeoffs. Letting go of roles, relationships, and expectations that no longer fit. Learning how to say no. Building a life that feels aligned, even if it looks smaller from the outside.</p><p>At its core, this episode was about sustainability at a human level. How to build a career without losing yourself. How to pursue ambition without outsourcing your self-worth. And what success looks like when nobody is watching.</p><p>Thanks for listening!</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://www.hockeystick.show?utm_medium=podcast&#38;utm_campaign=CTA_1">www.hockeystick.show</a>
34 MIN
From FerretDB to Percona, with Peter Farkas - HockeyStick #50
NOV 29, 2025
<p>Welcome to Episode 50 of The HockeyStick Show. I’m Miko Pawlikowski, and this week I sat down with Peter Farkas to dig into the messy reality of modern infrastructure, open source licensing, and what really happens when companies try to protect their products from hyperscalers.</p><p>We walked through his recent LinkedIn post, the story behind it, the unintended consequences of “defensive licensing,” and what the future might look like for teams trying to build sustainable businesses on top of open source.</p><p><strong>Cloud Providers, Open Source, and the Licensing Squeeze</strong></p><p>Peter started by explaining the background behind his post: why companies shift to restrictive licenses like SSPL, what they’re trying to defend against, and why it often snowballs into confusion for both users and vendors.</p><p>He shared examples of how cloud providers respond, how this changes the economics of running a service, and why certain licensing decisions end up punishing the wrong people. The conversation opened up into a broader point about how blurry the line has become between infrastructure, managed services, and full-blown products.</p><p><strong>Why “Open Source Alternatives” Aren’t Always What They Seem</strong></p><p>We also talked about the wave of drop-in replacements and forks that appear every time a company tightens its license. Peter explained the real costs behind “just run it yourself,” the pressure it puts on engineering teams, and why some of these forks still depend heavily on the original maintainers.</p><p>Underneath it all is a bigger question: who actually pays for the innovation that everyone wants to remain free?</p><p><strong>The Realities of Building a Business Around Infrastructure</strong></p><p>Peter broke down the challenges of turning infrastructure into a viable product: operational burden, attack surfaces, compatibility expectations, and the never-ending stream of breaking changes that users don’t see.</p><p>The theme kept coming back to sustainability. 
What does fair monetization look like? How do you protect your company without alienating your community? And what options do founders realistically have when cloud giants can replicate their service within months?</p><p>Thanks for listening!</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://www.hockeystick.show?utm_medium=podcast&#38;utm_campaign=CTA_1">www.hockeystick.show</a>
39 MIN
Building Better Platforms, with Ajay Chankramath, Sean Alvarez & Nic Cheneweth - HockeyStick #49
NOV 15, 2025
<p>Welcome to Episode 49 of The HockeyStick Show! I’m Miko Pawlikowski, and this week I sat down with three platform leaders who’ve lived through the messy, unglamorous reality of building internal platforms that actually help teams ship better software: Ajay Chankramath, Sean Alvarez, and Nic Cheneweth.</p><p>We unpacked what platforms really are, why they’re misunderstood, and how good platform work is far more human than technical.</p><p><strong>Platforms Aren’t Magic — They’re Just Good Engineering Done at Scale</strong></p><p>All three guests pointed out a simple truth: most companies don’t need fancy platform branding; they just need to fix the basics. Shared tooling, stable environments, repeatable patterns — the “boring stuff” is what creates real leverage.</p><p>A platform isn’t a product you install. It’s a consistent way of working that reduces chaos and duplication.</p><p>Lesson: A platform is not the shiny thing — it’s the reliable thing.<br/>Action: Identify one repeated pain your teams face and solve it once, centrally.</p><p><strong>Internal Customers Matter More Than Internal Technology</strong></p><p>A theme that came up repeatedly: platform work only succeeds when the platform team treats engineers as customers, not as people who should “just use what we built.”</p><p>Ajay talked about how teams often skip discovery and jump straight into building. Sean emphasized empathy. Nic highlighted that many “platform failures” are really product failures — misaligned expectations, poor communication, and unclear value.</p><p>Lesson: If no one is using your platform, it’s not a platform — it’s shelfware.<br/>Action: Before building anything new, interview five developers about what they actually need.</p><p><strong>Reduce Cognitive Load, Don’t Add to It</strong></p><p>Every engineer knows the pain of juggling too many deployment paths, tooling options, and config formats. 
A good platform reduces cognitive load by removing decisions that shouldn’t matter.</p><p>This isn’t about limiting freedom. It’s about letting teams spend their energy on product, not plumbing.</p><p>Lesson: The best platform decisions remove decisions.<br/>Action: Pick one workflow today that your team repeats and standardize it.</p><p><strong>Developer Experience Is a Business Metric</strong></p><p>Nic made a point that stuck with me: no executive wakes up excited about “platform engineering.” They care about throughput, reliability, cost, and time-to-market. A platform only earns its place when it moves those numbers.</p><p>You don’t justify platform work with architecture diagrams. You justify it by showing how much faster teams deliver because of it.</p><p>Lesson: If you want executive support, speak the language of outcomes.<br/>Action: Track one metric affected by platform friction — and show the before and after.</p><p><strong>Platforms Fail When They Become Mandates Instead of Choices</strong></p><p>Sean raised this repeatedly: forcing a platform onto teams rarely works. The healthiest platforms are opt-in, because they’re useful enough that teams choose them.</p><p>Mandates hide problems. Adoption exposes them.</p><p>Lesson: If you have to force adoption, the real issue isn’t adoption — it’s value.<br/>Action: Ask a team why they didn’t choose your platform. 
Their answer is your roadmap.</p><p><strong>Culture Makes or Breaks the Platform</strong></p><p>Ajay described how teams often treat platform issues as technical problems, when they’re usually cultural ones: trust, communication, ownership, and the willingness to collaborate across team boundaries.</p><p>The best platforms grow in environments where experimentation is allowed, feedback loops are short, and teams feel safe saying “this isn’t working.”</p><p>Lesson: A platform is a cultural artifact as much as a technical one.<br/>Action: Start including platform updates in your engineering ceremonies — make it part of the conversation, not an afterthought.</p><p><strong>A final thought from me</strong></p><p>This conversation reminded me that platforms aren’t about abstraction layers or golden paths or YAML templates. They’re about helping people do their best work without tripping over the infrastructure underneath them.</p><p>If you take one thing from this episode: treat platform engineering as a service, not a structure. Talk to your teams, fix the pain that matters, and keep the human side front and center.</p><p>Thanks for listening!</p> <br/><br/>This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit <a href="https://www.hockeystick.show?utm_medium=podcast&#38;utm_campaign=CTA_1">www.hockeystick.show</a>
40 MIN