Future Around & Find Out

Dan Blumberg


Details

Winner of the 2026 Webby Award for Best Technology Podcast

Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next. Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction.

On Tuesdays, we interview the builders changing how we work, live, and play. On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange. You'll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more.

Across it all, you'll hear sharp takes on what comes next and what builders need to know now. So let's Future Around & Find Out together!

https://www.FutureAround.com

Recent Episodes

How AI Can Make You a Better Writer: Stop Letting It Write; Start Letting It Ask. | Jay Dixit (Socratic AI)
MAY 12, 2026
Jay Dixit helps writers improve their writing with AI. He doesn't recommend that AI write for you — he hates that — but he says it can be a great partner to pull ideas out and to be there for you when you get stuck and just wanna doomscroll. Jay headed OpenAI's Writing Community and is the founder of Socratic AI. He's a writer and a journalist, and we sat down at South by Southwest to future around and find out. Jay says, "We need to be using AI to unlock our humanity — to do the things that we're scared to do."

Chapters:
(00:30) - Stop asking AI to write for you
(02:15) - Flip the script and let AI interview you
(04:30) - Why the defaults push you toward lazy thinking
(06:30) - Using AI at every phase of the writing process
(08:00) - Give the AI your criteria, then ask for feedback
(09:30) - The dark night of the soul and the 1 a.m. problem
(13:15) - The double-edged sword of always-on AI
(16:00) - What's catching Jay's eye at SXSW 2026
(17:00) - Why Wikipedia photos are so bad — and how Jay is fixing it
(20:30) - AI as a photography coach
(23:30) - How to stand out in a sea of AI slop
(26:56) - What George Carlin would make of this moment
(28:56) - The text Jay was avoiding sending his dad
(31:26) - Using AI to unlock your humanity

Support Future Around & Find Out
Follow Dan on LinkedIn
Get the free newsletter
Become a paid subscriber and help future-proof FAFO!
---
Music by Jonathan Zalben
33 MIN
Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. | Catie Cuan (Founder & CEO, ART Lab)
MAY 5, 2026
Catie Cuan's dad was in the hospital, surrounded by machines that were supposed to help him. Instead they made him feel alienated and afraid. Catie, a dancer-turned-roboticist, realized it's not enough for a machine to do its job — it has to be relatable, too. Today she's the founder and CEO of ART Lab, focused on what she calls the "interaction gap" between what a robot can do and how it makes us feel.

Catie danced at the Metropolitan Opera Ballet and ran her own dance company before getting her PhD at Stanford and becoming an artist-in-residence at Google X, where she worked on the Everyday Robots moonshot — including teaching office robots that it's rude to cut between two people having a conversation. Now ART Lab is building a home robot that won't look anything like a robot, plus a new kind of AI model that conditions success on how the human in the room responds, not just whether the task got done. Listen for the case against humanoids, why the future of AI shouldn't live inside your phone, and a sneak peek at what our life with robots might look like.

Chapters:
(02:11) - "There will be billions of robots" – from dishwashers to elder care
(04:45) - Why robots can be capable and still feel unsettling
(08:00) - How robots could read your reactions and respond in real time
(11:45) - What shape should robots take?
(15:30) - The case against humanoids
(19:00) - A nine-foot robot hand and the wild future robot design could take
(23:15) - What it's like to dance with robots
(28:30) - "The robot just died" – when a live failure changed the whole performance
(32:45) - Friendship, loneliness, and home robots (and why builders need to be clear about the future they are creating)
(37:11) - Why the home may become robotics' biggest use case (and what ART Lab is building)
(40:06) - Robot tutors, homework help, and why teachers still matter most
(43:51) - "We have a tremendous amount of agency" – choosing the future we build now
(46:16) - Why inequality and access worry Catie most (and who gets left behind)
(48:56) - Why builders need to get outside their own bubble

Support Future Around & Find Out
Follow Dan on LinkedIn
Get the free newsletter
Become a paid subscriber and help future-proof FAFO!
51 MIN
The Goblin in the Machine | FAFO Friday
MAY 2, 2026
I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant." Apparently goblins and mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and the mythical creatures have just proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking?

That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off of Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans. Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat AI token buffet, as the "millennial lifestyle subsidy" for AI is ending; tokenmaxxing; the growing (and bipartisan!) data center backlash; and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.

Links:
Where the goblins came from (OpenAI blog post)
My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out)
GitHub Copilot is moving to usage-based billing (GitHub announcement)
'The Most Bipartisan Issue Since Beer': Opposition to Data Centers (NYTimes, gift link)
Meta inks deal for solar power at night, beamed from space (TechCrunch)

Support Future Around & Find Out
Follow Dan on LinkedIn
Get the free newsletter
Become a paid subscriber and help future-proof FAFO!
35 MIN
AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"
APR 28, 2026
Rumman Chowdhury wants to remind you that "AI isn't doing anything." We do things. AI is not to blame for layoffs or if you're denied medical coverage. People are. Eight years ago, Rumman coined the term "moral outsourcing" to describe this excuse where we blame tech for decisions that people make. Why do the semantics matter? Because, Rumman says:

In world one, where "AI did X," it's very scary. It's like, "oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job." [But if we center on people, then we have agency and accountability and we can say] "no, you built a thing that was broken and flawed."

Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

In this conversation:
Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
How to avoid — or at least mitigate — creating AI that's biased
Red teaming AI and creating bias bounties
The "grandma hack" and other ways regular people accidentally jailbreak AI models
How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
Why the benchmarks you see when a new model drops are "basically spelling tests"
AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

Chapters:
(00:00) - "The thing I believe in the most is human agency"
(02:14) - Why builders have more agency than they realize
(04:00) - What is a bias bounty?
(06:41) - What 2,000 hackers at DEF CON found
(09:40) - The grandma hack
(11:30) - Why guardrails fall apart
(14:54) - Anthropic's new bug-finding model and the cat-and-mouse game
(19:10) - Why most evals are "basically spelling tests"
(21:30) - How to actually evaluate an AI agent
(27:16) - "Moral outsourcing" and the AI layoff lie
(29:41) - Inside Rumman's tenure as U.S. AI Science Envoy
(33:06) - The legal loophole AI companies use to dodge liability
(36:31) - AI psychosis and the cold emails Rumman gets
(39:36) - Why Google's AI overview is quietly dangerous
(45:31) - The problem with "AI literacy"
(49:01) - Can we trust anything we see anymore?
(51:11) - What builders can do right now to take back agency

Support Future Around & Find Out
Follow Dan on LinkedIn
Get the free newsletter
Become a paid subscriber and help future-proof FAFO!
55 MIN