Toronto Star Article On Patients Using “Dr ChatGPT” and What Doctors Need To Know
This article, released today, discusses the Ontario Medical Association lead's view that AI's role in the patient journey should come after the physician encounter, or should be limited to supporting the physician with note-taking during the encounter.
Our take: This isn’t realistic or fair, and as a physician community we need to accept and understand that patients will use AI more, not less. There are multiple studies, including this one, showing that physicians arrive at an accurate diagnosis 75% of the time, physicians using ChatGPT are correct 76% of the time, and ChatGPT without a doctor is accurate up to 90% of the time. If physicians discredit or ignore this, we risk losing our patients' trust.
Instead, when patients present with a ChatGPT diagnosis, let’s engage: ask them about the prompts they used and the knowledge base behind the model. If the patient isn’t an AI/ML professional or prompt engineer working with a well-tuned model, there’s an opportunity to remind them that the difference between the human and the AI model is that the human brings experience, understanding, and the ability to ask dynamic questions.
Remember that patients will Google and use AI no matter what they are told or how we respond. The trust in the physician-patient relationship comes from the physician understanding the patient and asking the right questions. AI doesn’t have your experience and can’t always ask the right questions or gather the context needed the way a physician can.