
The Tech Girl

Gertrude Chilufya

How to center accessibility & human experience in AI product design with Torin Ellis

APR 9, 2026 · 57 MIN

Description

<p>What if the biggest problem in AI product design isn’t the model but the data?</p><p>On this episode of The Tech Girl Podcast, host Gertrude Chilufya sits down with diversity strategist and Ngoma founder Torin Ellis to explore why building trustworthy AI starts with centering the lived experiences of people with disabilities.</p><p>Torin shares how his background in DEI and HR analytics led him to a powerful realization: AI trust is a data problem before it’s a model problem. Together, they unpack how exclusion from training data creates blind spots in AI systems, and why the disability community is often overlooked despite being one of the most marginalized groups in the workforce.</p><p>From LLM hallucinations and abstention failures to the risks AI poses in hiring, healthcare, finance, and autonomous vehicles, this conversation challenges organizations to rethink how they design and deploy technology.</p><p>Torin breaks down why synthesized data isn’t a silver bullet, how prompt specificity and chain-of-thought guidance impact outcomes, and why companies must ask a simple but critical question: “Who’s missing?”</p><p>This episode reframes inclusion not just as a moral imperative, but also as fiduciary, legal, and reputational risk management in the age of AI.</p><p>Connect with Torin Ellis:</p><p>Website: www.ngoma.io</p><p>LinkedIn: https://www.linkedin.com/in/torinellis/</p><p>Podcast Produced by: LeTroy Gardner &amp; Mario Washington | Fourcast Media | pushplaypods.com</p><p>#TechGirlPodcast #ArtificialIntelligence #AIethics #DisabilityInTech #AccessibilityMatters #ResponsibleAI #FutureOfWork #DiversityInTech #AITrust #TechPodcast</p>