
Artificial intelligence has changed the landscape of healthcare.

While some uses of AI are halting and tentative, these tools, in the hands of patients who are seeking information and willing to take a trial-and-error approach, have had remarkable successes, giving rise to new and unforeseen approaches. Picture this:

  • One man couldn’t get a doctor’s appointment for his aging father, who had a nonstop itching rash. He used AI to research the symptoms and found a suggested diagnosis and a simple, low-risk approach to treatment — and his father’s symptoms eased before they could see the doctor.
  • One woman’s son had endured three years of visits to 17 doctors for chronic pain. She finally fed his symptoms and history into ChatGPT and had insights into his ailment within seconds — so she joined an online support group and then took her notes to a new doctor, who confirmed the diagnosis of a rare syndrome.
  • One woman had suffered days of pain and an ineffective emergency room visit. She used ChatGPT to document her history (a recent teeth cleaning) and ask for ideas. ChatGPT correctly diagnosed her as having Bell’s palsy, with the cleaning causing a reactivation of the shingles virus. She used that knowledge at a second emergency room visit to get the correct treatment, within the window when the condition was still treatable.

Momentous changes

The changes are momentous, as described by one enthusiastic user of ChatGPT for medical purposes, Dave deBronkart, known as e-Patient Dave, a top patient empowerment advocate and frequent speaker at healthcare conferences.

“Patient empowerment could be the most important long-term impact of AI,” he said in a recent podcast about the leaps forward with AI.

He expanded on this in a phone interview. “A really important thing is happening here, which is a significant shift in how much chance a random patient has of doing something really meaningful by themselves,” deBronkart said. “What is it that keeps you and me from doing what our favorite doctor can do? Part of it is just plain access to boatloads of information — data, medical research. And another part of it is having seen thousands of patients — knowing, how likely is it that symptom X is going to be related to problem J or something?

“Googling gave us access to information. AI gives us access to clinical reasoning. Clinical reasoning is the term that doctors can relate to, because it involves thinking out, ‘what does this probably mean, based on what we know about clinical experience?’ And making a big leap in what people can do by themselves.

“A big, big issue in the U.S., and in other parts of the world, actually, is people who want an appointment and can’t get one,” he added, pointing to one of the most common use-cases we have seen.

Now, he said, a person can say to ChatGPT: “Here are my symptoms. What might this be? What should I do for it while I wait to see the doctor? That is the enormous shift in what is possible. It’s really heartwarming to hear miscellaneous reports from non-scientific nerd people who got a good, reliable answer from ChatGPT and were able to do something about it while they waited to see the doctor.”

How it works

By this time, we all know how it works: A large language model (LLM) like ChatGPT is trained on a large body of content — articles, data, computer code, anything else its creators can feed it. Then, in response to a question (called a prompt) from a user, it sends back an answer by predicting the most likely word or words that would follow.
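The prediction step can be illustrated with a toy sketch. The snippet below is nothing like a real LLM — no neural network, just word-pair frequencies — but it shows the same "predict the most likely next word" idea in miniature. The tiny corpus is invented for illustration:

```python
from collections import Counter, defaultdict

# A miniature "training corpus" standing in for the vast text a real LLM sees.
corpus = (
    "fever chills headache body aches suggest influenza . "
    "fever chills headache body aches suggest influenza . "
    "fever and cough suggest a cold ."
).split()

# Count which word follows each word (a bigram model -- vastly simpler
# than a transformer, but the same next-word-prediction principle).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("suggest"))  # "influenza" follows "suggest" most often here
```

A real LLM does this over billions of learned patterns rather than a handful of counted pairs, which is why its answers can read like reasoning rather than lookup.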

For example, if you typed in “I have fever, chills, a headache and other body aches — what could be wrong with me?” the LLM might answer “influenza.” (There are many other uses as well, of course, but this is the most common patient use.)

We also all know that these tools are prone to “hallucinations” — the name for the sometimes loopy answers, like telling users to put glue on pizza or eat rocks, or making up citations in scientific papers.

But for patients, and especially expert patients and those with rare conditions and diseases, AI has turned out to be a game-changer. This is especially true as the U.S. healthcare system goes through a series of convulsions driven by the Covid pandemic, the increasingly obvious shortage of doctors, and a crisis in public health caused by under-investment and political calculus by the Trump administration.

Patient empowerment

DeBronkart is not new to patient empowerment. In January 2007, he was told suddenly that he had stage 4 kidney cancer, and 24 months to live. On the advice of his doctor, he joined a patient community where he found information that had not yet been published and thus was not available to many doctors. He found advice on the treatment that he took to his doctors, and that would eventually save his life — and on measures to grapple with the side effects of the treatment. From that experience, he began to teach others how to take control of their health when they felt helpless — the patient empowerment that can be so hard to find in the throes of a health crisis.

He described the experience of Hugo Campos, the man who is caregiver to his 90-something father, who had a fierce itching rash that was making him miserable. With the nearest doctor appointment three months out, Campos took pictures of the rash and copy-pasted lab results and history notes into two LLMs — and then did some expert things.

He told the LLMs what role to play (you are a clinician, or you are a dermatology fellow) and described the case. Then he got specific: “First, read through the clinical notes uploaded to your knowledge base. Then look at the photos of the patient that show the skin rash. …”

Then he got even more specific, asking for a “differential diagnosis,” the clinical term for “what are the top three or five things this might be, and why do you think that”:

  • “1. Examine the provided photos uploaded to the knowledge base.
  • “2. Consider the differential_diagnosis provided, but keep an open mind to other possibilities not included here.
  • “3. Then create a 4-column table containing (a) ranking with the most likely diagnosis on top, (b) the diagnosis, (c) the reasons and considerations why you gave it this ranking, (d) recommended course of action the patient can take to mitigate the problem.
  • “4. Finally, walk me through your decision-making process and tests like skin scraping for microscopic exam, a skin biopsy, etc., that can help narrow down the diagnosis.”
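Campos typed his prompts into the LLMs' chat windows, but the same structured approach — a role, the case material, then numbered instructions — can be assembled programmatically before being pasted or sent to a model. The sketch below only builds the prompt text; the role, notes, and wording are illustrative placeholders, not Campos's actual data:

```python
# Assemble a structured medical prompt in the style described above.
# All content here is a hypothetical illustration, not real patient data.

role = "You are a dermatology fellow reviewing a new case."

case_notes = "90-something male, fierce itching rash, chronic kidney disease."

instructions = "\n".join([
    "1. Read through the clinical notes provided.",
    "2. Keep an open mind to possibilities beyond any suggested diagnoses.",
    "3. Create a 4-column table: ranking, diagnosis, reasoning, "
    "and recommended course of action.",
    "4. Walk me through your decision-making process and tests "
    "(skin scraping, biopsy, etc.) that could narrow the diagnosis.",
])

# One prompt with clearly separated sections, ready to paste into a chat.
prompt = f"{role}\n\nClinical notes:\n{case_notes}\n\nTasks:\n{instructions}"
print(prompt)
```

Separating role, context, and tasks this way is the core of the technique: it tells the model who to be, what to look at, and exactly what output format to produce.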

Claude, the AI from Anthropic, said its first diagnosis was xerotic eczema, and it recommended he “optimize skin hydration with gentle cleansers, regular hydration” and consider a high-potency topical steroid and oral antihistamines for itch control. Taken together, the results from several LLMs suggested he “reduce hot showers; tweak the diet for his kidney condition; increase the use of topical creams.” Campos did that, and within 10 days the rash started improving and then disappeared.

What are the lessons?

DeBronkart said Campos “said something that I call ‘Hugo’s Law.’ He said, ‘Dave, I don’t ask it for answers. I use it to help me think.’”

“So you get an answer and you say, ‘Well, I don’t understand this. What about that? Well, what if I did something different instead?’ And never forget, you can come back three months later and say, ‘By the way, I just thought of something. What about that?'”

Another example: DeBronkart said his wife had a bout of shoulder pain she connected with vertebral surgery a few years ago. Her doctor ordered an MRI, with a follow-up visit to discuss it. DeBronkart uploaded her MRI report to ChatGPT and asked what was going on; then he found an X-ray report from two years earlier, uploaded that as well, and asked the LLM to compare the two.

“I asked it to also give us a description of what’s changed in the last two years,” he said. “It did a beautiful job. You could see which vertebrae had worsened, which ones had not. By the time we got into the doctor’s office, first of all, we told him that we used ChatGPT, and so we understood what the situation is, and his face lit up, happily. That meant that our visit was spent on discussing next steps instead of him explaining the case — we moved forward one click on the timeline, because we came in already knowing what the imaging showed.”

Advantages

The AI techniques deBronkart and Campos describe are open to anyone. You can ask anything you want. You can ask again. You can add information.

“Have a conversation with it, talk to it the same way you would to a doctor or nurse, and understand that it has unlimited time for all your questions,” deBronkart said. “And very much like riding a bicycle, in the sense that it can’t be described — well, you’ve gotta see what it feels like.

“I had discussions with two people … where I saw their faces light up when they realized that they could use this, not for medical problems in these cases, but for something that was a constant problem for them. It’s when somebody has a problem, a recurring pain in the neck, that they realize, oh, this could be useful.”

What about hallucinations?

But really, what about the cases when AI makes something up — the hallucination problem? Well, you want to check everything AI tells you. Use common sense. Don’t put glue on the pizza.

“Before you take any important action, you double-check,” he said, often with a doctor if it’s medical information. But in any case, “you can take the conclusion, open up a new chat, and go back in and ask it, is this true? And 99 times out of 100, if it’s a hallucination, it won’t have the same hallucination again. It’ll say, ‘No, that’s not true.'”

You can also phrase the question differently, he said, and you might get a different answer. “Don’t use it to ask for one-shot answer,” he said. “Use it to help yourself. And before taking any important action, get a second opinion from an actual knowledgeable human.”

What about your privacy?

As we all know, anything you say on the internet is never really private.

Most LLMs use their users’ data by default to train the model on real-world questions and answers. This has been true of many tech tools: Think of Facebook selling your data to help advertisers target you. It’s a tech tale as old as time. So if you ask a question about your health, it may live on in the LLM’s memory. Could it be connected with your identity? That is a very good question.

What can you do?

Many of them have a private mode, where you can tell the service that you don’t want your data to be shared or used for training. All of them are different. It is up to you to decide whether you believe they won’t use it, or won’t connect it with your name. There are experts in this, and I am not one. But you need to be conscious of this: If you input medical information, your health data is out there, and someone might well be collecting it.

There is semi-good news: “Over the last 18 months, many LLMs have added more transparent data usage policies, options for users to opt out of data collection, and the ability to disable storage of their chat history,” SectionAI.com writes. “For individual users, ChatGPT and Perplexity offer the most transparent and controllable privacy settings. Both platforms provide explicit toggles to enable or disable AI training on your data. While disabling these features might limit access to certain capabilities (like beta features), you get maximum control over your data usage.”

Many of them also have a function that saves your conversation history, so you can return to a topic and add new information or ask a separate question. If you’re in the private mode, it may not save your history.

Not always compliant

Even when an LLM commits not to share or train on your data, take that with a grain of salt. Tech companies are well known for the “move fast and break things” mode of operation, and they are also known to observe policies and agreements only when it is convenient.

For example, the journalist Karen Hao wrote a remarkable book, “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” a telling narrative about the origins of OpenAI, including details about what its leaders knew and when they knew it, and what they know about their models’ capabilities and the world of AI at large. She also writes about the inherent bias in the tools, rooted in training data dominated by white people, and about the conflicts within the company over ensuring safety on all fronts.

The picture is not a pretty one.

There are ways you can make an LLM run locally on your laptop or desktop, which could obviate some of these concerns.

The short answer: Be careful.

Favorite LLM?

Does deBronkart have a favorite LLM? He uses ChatGPT, he said — the one he started with.

“It’s not a matter of my having evaluated them,” he said. “I got started with ChatGPT, I raced forward with it, and have done great, and I haven’t had any need to go to scoring elsewhere. But that’s not advice. That’s not a recommendation to anyone.”

For me, I have been taking a series of courses from the Online News Association on various AI tools. Right now, I’m paying $20 a month for Claude from Anthropic, which I like because it is good at analyzing data. I also have a free trial subscription to Perplexity Pro, which I like because it always cites its sources. I did start with ChatGPT, but backed off when it wasn’t as accurate as I wanted it to be, yet it was still charging $20 a month.

Are doctors and other clinicians offended by patients using AI as they were with patients using Dr. Google?

DeBronkart said on a recent podcast that some doctors or nurses might be offended, but some will be delighted that a patient is taking initiative. And of course, check any LLM advice with a doctor or other medical professional.

Empowerment steps

Gilles Frydman, another leading patient empowerment figure, placed the patient advances with AI along a continuum of empowerment that stretches back to the Association of Cancer Online Resources (ACOR), the online network he founded in 1995, which grew to a network of over 200 online communities for cancer patients sharing knowledge and resources.

In a post for deBronkart’s PatientsUseAI Substack, Frydman noted the rise of “participatory medicine,” which arrived when engaged patients declined to submit totally to paternalistic, doctor-driven care. The Society for Participatory Medicine has focused on this work. Then, turbocharged by social media, patients took more and more of an active role in their treatment.

The use of AI for patient empowerment is part of that continuum, he wrote in a comment on LinkedIn, putting knowledge and agency in patients’ hands.

Reddit is full of people telling stories about how ChatGPT helped them identify a solution to a vexing medical problem, like one person whose lab results, fed into ChatGPT, helped him identify a problem his doctor had not observed.

Fundamental changes

Campos wrote on Medium last year about the fundamental changes in the doctor-patient relationship that are coming.

“As we’ve seen, the doctor-patient relationship is undergoing significant changes due to technological advancements, systemic pressures, and cultural shifts in the healthcare landscape. However, the most profound transformation is now at our doorstep,” he wrote.

“The rise of AI in healthcare is not just another step in this evolution — it represents a paradigm shift that will entirely redefine how we approach and receive care. This final stage of healthcare evolution will lead us to a future where patients are largely autonomous, interacting entirely with AI systems and guided by their decisions and recommendations — the paths of doctors and patients no longer intertwined.

“I have long championed patient autonomy as the ultimate form of empowerment. AI will continue to enable patient independence by breaking down barriers like information asymmetry, limited access to resources, and health literacy. While this separation between patients and doctors may seem disheartening at first, it ironically presents the opportunity to bring about a future with greater health equity and patient-centeredness.”

What’s next? AI agents, capable of making lists of tasks and then checking their work. So instead of a human prompting the AI, the AI agent prompts the AI.

What you can do

DeBronkart’s PatientsUseAI Substack is the best collection of writing I’ve seen about how patients use AI.

Are there reliable handbooks for learning how to use AI this way? DeBronkart said he had not seen one recently that was meaningful for him, perhaps because he is a fluent user.

In addition to the above suggestions, here are some thoughts from Frydman, who is also an expert in patient use of AI, in a comment series on LinkedIn.

  • “Think of each prompt as an experiment, not a one-shot: Don’t chase the ‘perfect’ prompt — try different ways of asking and change small things on purpose. You’ll see how the AI responds and learn what it’s really good (or bad) at, giving you more control. Tip: Approach prompt writing like you are testing ideas. Every variation teaches you something valuable.
  • “Review AI outputs like a skilled critic: Question everything. Ask: Does this make sense? What proof is there? Could it be unfair? What’s left out? That’s how you turn AI into a real thinking partner — and get smarter yourself along the way. Tip: Treat AI responses as ‘first drafts’ — polish them by questioning, improving, and digging deeper.
  • “Explore widely before you commit narrowly: Ask the AI to show different ways to look at the problem and spot things you might be missing. This helps you avoid jumping to quick answers and leads to smarter, more creative ideas. Tip: Early exploration with AI is like building a map — the clearer your map, the smarter your journey.”

Many people have observed that ChatGPT and other consumer LLMs have a tendency to agree with the person who is asking questions. Look out for this — challenge the LLM if it seems to be agreeing too much. (See Ryan Broderick of Garbage Day talking about his use of ChatGPT for therapy and his discovery of this tendency: “ChatGPT’s default is to agree with you.”)

To get a differential diagnosis, whatever answer you get, as Campos did, you can try this: “What is your differential diagnosis? Please explain your reasoning — what specifically makes you think that? Format your answer in a table, if you like.”

Be careful about where you spread your health information around on the internet. You never know who’s listening or where it might come back to you. Consider using privacy features, and check policies to see if your information is being used to train an LLM.

The experts all recommend that there should be a human in the loop: Any AI results should be checked and cross-checked.

“The AI Revolution in Medicine: GPT-4 and Beyond,” a book by Peter Lee, Carey Goldberg and Isaac Kohane, written as they got early access to GPT-4, is very instructive. It was published in May 2023, and made a lot of predictions about how AI in medicine would develop. Perhaps a better contemporary use of your time: Lee followed up this spring with a Microsoft Research podcast examining how their original assumptions played out.

Karen Hao’s “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI,” mentioned above for its telling narrative about the origins of OpenAI, is also worth your time. Read it.

Lee said in his podcast about AI and medicine: “Hardly a week goes by without a news story about an ordinary person who managed to address their health problems — maybe even save their lives or the lives of their loved ones, including in some cases their pets—through the use of a generative AI system like ChatGPT. And if it’s not doing something as dramatic as getting a second opinion on a severe medical diagnosis, the empowerment that people feel when an AI can help decode an indecipherable medical bill or report or get advice on what to ask a doctor, well, those things are both meaningful and a daily reality in today’s AI world.”

Jeanne Pinder  is the founder and CEO of ClearHealthCosts. She worked at The New York Times for almost 25 years as a reporter, editor and human resources executive, then volunteered for a buyout and founded...