AI is rapidly becoming part of how patients navigate complex illness, but most people are using it without understanding how it actually works, what it's designed to do, or where it can fail.

In this episode of Long Covid, MD, Dr. Zeest Khan is joined by two experts:

- Dr. Leeda Rashid (former FDA physician, digital health)
- Dr. Jennifer Curtin (CEO of RTHM Health)

The discussion breaks down:

- What AI is already doing inside healthcare (and why that's different from what you use at home)
- Why tools like ChatGPT, Claude, and Gemini are not medical devices
- The risks of trusting AI-generated answers for diagnosis and treatment
- How specialized tools like RTHM Intelligence attempt to address these gaps
- The privacy trade-offs when sharing your medical data with AI
- How to use AI safely and effectively as a patient

This episode is a practical framework for using AI as a thinking partner, not a decision maker.

⏱️ Chapter Markers

00:00 – Why patients are turning to AI
03:15 – How AI is actually used in healthcare (FDA perspective)
07:30 – Why ChatGPT and similar tools aren't medical devices
15:30 – Specialized AI tools vs. general AI (RTHM example)
24:50 – Risks: hallucinations, bad advice, and doctor tension
26:30 – Privacy and HIPAA: what happens to your data
38:25 – How to use AI safely in your care

Support the show

Subscribe for free written summaries of each episode, resources, and more: LongCovidMD.substack.com/subscribe

Support by donating at BuyMeACoffee