<p>Most AI companies are racing to build bigger LLMs. Eve Bodnia thinks that's the wrong approach.</p><p>Eve is the founder and CEO of Logical Intelligence, which is developing an alternative to the transformer-based models dominating the industry. Her argument: LLMs’ architecture makes them fundamentally unsuited for some mission-critical tasks. A system that generates output one token at a time, with no ability to inspect its own reasoning mid-process or guarantee its results, shouldn't be trusted to design chips, analyze financial data, or even fly a plane. Her alternative is the energy-based model (EBM), a form of AI rooted in the physics principle of energy minimization, not language prediction. Rather than guessing the next probable word, an EBM maps every possible outcome across a mathematical landscape, where likely states settle into valleys and improbable ones sit on peaks.</p><p><br></p><p>Dan Shipper talked with Bodnia for AI &amp; I about why she believes LLM progress is plateauing, what it means for AI to actually understand data rather than just pattern-match across it, and how her team is building toward formally verified code generated in plain English—no C++ required.</p><p><br></p><p>If you found this episode interesting, please like, subscribe, comment, and share!</p><p><br></p><p>Head to http://granola.ai/every and get 3 months free with the code EVERY</p><p><br></p><p>To hear more from Dan Shipper:</p><p>Subscribe to Every: https://every.to/subscribe</p><p>Follow him on X: https://twitter.com/danshipper</p><p><br></p><p>Timestamps:</p><p>00:00:51 - Introduction</p><p>00:02:09 - Why correctness and verifiability matter in AI</p><p>00:09:33 - What an energy-based model is</p><p>00:14:21 - How EBMs construct energy landscapes to understand data</p><p>00:19:00 - Why modeling intelligence through language alone is a flawed approach</p><p>00:26:54 - What it means for a model to "understand" data</p><p>00:37:21 - How EBMs solve the vibe coding problem and enable formally verified code</p><p>00:43:21 - Why LLM progress is plateauing</p><p>00:49:54 - Mission-critical industries haven't adopted LLMs, and how EBMs could fill that gap</p>

AI & I

Dan Shipper

The AI Model Built for What LLMs Can't Do

APR 15, 2026 · 53 MIN