What Do Privacy and Transparency Mean, Anyway?
This week's Health Data Ethics podcast continues our series on the Joint Commission and CHAI guidance on the responsible use of health AI. In this episode we're digging into privacy and transparency. The guidance itself is reasonable. What I spent most of the episode on is how you actually implement it, because that's where things get interesting.

Adding AI language to the Notice of Privacy Practices is a good first step, and a lot of health systems are doing it. But I think the most-told lie in modern life is still "I have read and agreed to the terms and conditions." Broad disclosure is honest, and it matters, but it's not going to carry the whole weight of a transparent relationship with your patients.

The piece I really wanted to dig into is opt-outs. If you offer patients the ability to opt out of something you can't actually turn off, you've built opt-out theater, and that erodes trust faster than just being honest about the limitation would. An ambulatory scribe is a real opt-out. Inpatient sepsis prediction is not technically feasible to opt out of, and we probably shouldn't pretend it is.

I also spend some time on the clinician side, which I think gets short shrift in a lot of these conversations. Operational training on a tool is not the same thing as understanding how the model behaves, where it fails, and which patients it might be wrong for. Clinicians are the ones carrying accountability for human-in-the-loop judgment, and they need real explainability to do that well.