Artificial intelligence notetakers for therapists have been found to insert errors into the therapy notes they produce, creating false narratives and adding references to suicidal thoughts or sexual abuse that never came up in the session, potentially putting both patients and therapists at risk, therapists and experts said.
In an internal forum for therapists who work on the online therapy platform Alma, one therapist wrote that Alma’s AI note program, Note Assist, produced a progress note that “had very incorrect information about the client. The note indicated past child sexual abuse and a medical condition, neither of which this client has ever experienced,” according to a transcript of the forum obtained by ClearHealthCosts.
Another wrote, “the note was inaccurate, portrayed my client as a child abuser, and did not stick to the principle of ‘the minimum necessary.’”
Another wrote, “My notes are coming back with things I havent talked about with my client, its adding that my clients has substance abuse and suicidal issues when they do not and sometimes it says no interventions were reviewed.”
Another wrote, “When I looked over the note, I saw extra information had been added beneath my usual treatment plan notes. This was stuff like ‘goal #1 not addressed in-session, goal #2 not addressed in session,’ and so on for each of the client’s treatment plan goals.”
Increasingly common
All of these were reported by users of Note Assist, the AI-assisted notes service offered by Alma, the therapy platform that signs up therapists, promising to relieve them of some of the bothersome parts of running a business — credentialing with insurers, record-keeping, payment and other support. Headway, BetterHelp, Grow Therapy and SimplePractice are some similar platforms.
In an official response, Alma and Upheal, its AI vendor for Note Assist, said the problem was noted, but that it was small and limited in scope, and that they had fixed it. (Full statement below.)
AI note-takers are increasingly common. In one form or another, they take notes or recordings of sessions and turn them into the documentation that therapists keep for themselves and for insurance companies.
The arguments for and against AI note-takers are many.
In support, many therapists say they find note-writing time-consuming, and some say it takes time away from clients. They also point out that technology marches on: Years ago, there was extensive opposition to the electronic health record as something that would be insecure; of course, most records are now electronic.
Against AI note-takers, some experts argue that the tools are flawed and need to be fact-checked regularly, and that low-tech, non-AI notes are more secure.
This is counterbalanced by a belief, widespread in some circles and encouraged by the AI companies, that AI must be better because it is, well, artificial intelligence, and therefore smarter than humans and infallible.
The widely observed “hallucinations” that AI is prone to — the term for when AI makes things up and blithely reports things that are simply not true — have been seen in AI-generated notes, where they are far more likely to have significant repercussions if they crawl into a patient’s records. That belief in AI’s infallibility leads, in some cases, to the assumption that its output does not need to be fact-checked.
Vendor technology
When Note Assist was announced last year, the Alma C.E.O., Dr. Harry Ritter, said of their AI notes vendor, Upheal: “They’ve been fantastic to work with. After vetting several vendors and soliciting feedback from our own members, it became clear that Upheal’s technology provided the best and highest quality clinical documentation experience,” according to Vator News.
“We want to be a one-stop-shop for providers, giving them everything they need to run thriving, in-network practices that helps them deliver outstanding, quality care. Note Assist plays a critical role in that vision, enabling mental health providers to save time and be more present in the room with their clients,” Ritter said.
Upheal, a Czech startup, raised a $10 million Series A round in late 2024, led by Headline with participation from Credo Ventures and Kaya Ventures, bringing its total funding to $14.35 million.
In addition to serving Alma, Upheal offers free and paid plans for therapists, including secure video calls (without AI), unlimited AI-powered notes, notes generated from session recordings, and dictation-to-note and text-to-note features. Features and prices vary from plan to plan.
Alma staff response
On the forum, according to the transcript, Alma’s staff members responded quickly to the complaints. One wrote, according to the transcript, “Thank you for reporting this issue! We are working with Upheal, our external partner, to understand what may have gone wrong here and will share an update as soon as we know more.”
He later added: “Thank you for your patience while we looked into this. We got an update from Upheal, our external partner, and it appears the issue you encountered was a result of the AI exhibiting a behavior we refer to as ‘hallucination’ — where the note output did not match or correspond with the inputs you and your client discussed live during the session. Essentially, the underlying model produced information that was not fully aligned with the data it was given.
“While the team is actively working to improve their models over time, this kind of issue can occur as they work to refine this emerging technology. We apologize for the inconvenience especially given the concerning nature of the error. Thank you for flagging this to our team as your feedback will help us enhance the overall Note Assist experience, making it better and more reliable over time. Please let me know if you have any follow up questions!”
Company statements
In response to questions from ClearHealthCosts, company spokesmen from Alma and Upheal issued a joint response.
We asked: How widespread is this among Note Assist users? How long has this problem been in existence? What is Alma doing to fix the problem?
“The issue you are referring to occurred in early December 2024, impacting roughly 1% of notes generated by Note Assist since its inception,” the statement said. “Impacted users experienced an issue where one section of generated notes contained inaccuracies about what had been discussed in care. We worked closely with Upheal to quickly resolve the issue in under 2 business days, and the issue has not recurred since. As with any new feature, we have rigorous quality assurance processes in place, including performance monitoring and provider feedback loops to promptly address bugs and continuously improve our platform.”
We asked: What advice do you have for therapists using Note Assist?
“Clinical leaders from Alma and Upheal played a critical role in designing Note Assist to support providers in writing high quality, clinically sound progress notes,” the statement said. “We understand that adopting new tools can come with challenges, especially when unexpected issues arise, and we’re here to support our members every step of the way. We’ve invested in training and education to ensure that providers feel confident using this new technology, and also understand their responsibility as clinical experts to sign off on every note to ensure accuracy.”
We asked: Some therapists have expressed reservations about Note Assist, saying they fear that Alma (or Upheal) is retaining session notes for possible use in training the AI, rather than expunging them. Can you comment on that?
“To ensure patient privacy, session note data on Alma and Upheal’s platform is secure and never used to train AI models. Moreover, data used to create detailed progress notes is automatically deleted once notes are finalized. You can see more information in our press release from last summer. We’ve prioritized protecting and securing providers and their clients’ data and privacy throughout the user experience from the beginning.”
Other AI note-takers
Upheal is not the only AI note-taker. Another big one is Mentalyc, which calls itself “the best tool for creating progress notes,” said Frederic G. Reamer, an expert on the use of AI in healthcare and especially in therapy, who is a professor emeritus at the Graduate School of Social Work at Rhode Island College. Amazon’s AWS Health Scribe also performs this service, he said in a Zoom interview, as does Epic, the giant electronic health records company, which offers an AI notetaker as an option for practices that use its tools. Here’s a list of 23 AI note tools.
How good are they? Reamer has written a book on the ethical and risk-management implications of the use of AI, titled “Artificial Intelligence in the Behavioral Health Professions,” which is out this month from the National Association of Social Workers press. He said he has been trying to find empirical studies assessing these tools.
“My assessment as of today is there are almost no reputable studies on the validity and reliability of the software tools to document clinical encounters accurately. I can’t find stuff, so I think it’s just too early. However, I have a fair amount of anecdotal evidence, which I recognize is not terrific. What I’ve heard anecdotally — and I’ve also reviewed some of these notes — is that this software is really quite good at capturing what goes on in a clinical session. As somebody who does a lot of expert witness work in litigation and in licensing board cases around the country, I know firsthand that the quality of a clinician’s documentation often determines the outcome of a lawsuit and of a licensing board complaint.”
Omissions and commissions
Nevertheless, he said: “With the AI generated notes, I’m seeing errors, and colleagues are telling me they’re seeing some errors. A small percentage, but they’re there. They take two forms, omissions and commissions — stuff that should have been in there that didn’t get in there, and statements of fact or alleged fact that turn out to be inaccurate and need to be edited.
“Here’s my concern: I understand the appeal. It saves time. Documentation is tedious, it’s uncompensated, it’s a pain for most people. Practitioners are drawn to AI because it saves them time. I get that. I’m worried that practitioners who are especially drawn to this shiny new toy may also be the practitioners who are so eager to save time that they’re less likely to proofread the AI-generated note really carefully. And once you lock that note, it’s there. You can’t unring that bell.
“My worry is that I’m going to start seeing problems in litigation and problems in licensing board cases, when the AI-generated notes are subpoenaed. I see lots of notes subpoenaed, and then the lawyer on the other side of the case discovers these errors, and then they use them against the practitioner. That’s what I predict is going to happen.
“Is it going to be rampant? I doubt it. Is it going to be pervasive? I doubt it. But I do think it’s going to happen. A lot of that, again, is based on anecdotal reports where practitioners have said to me, ‘Oops, there was an error. Didn’t catch it.’
“I was actually talking to one client in a workshop of mine. She’s both a practitioner and a client, and she said she gave permission for her clinician to record the session. She reviewed the note and thought it was really good, but it contained errors. Both she and the clinician said this is probably about 95 to 97 percent accurate — pretty darn good, but not perfect.”
Assessing the tools

Dr. Maelisa McCaffrey, a licensed clinical psychologist who counsels practitioners on documentation at QAPrep, did a series of YouTube videos assessing different AI note-takers. In the second one, done in January 2024, she notes that the field is changing rapidly and that this video and others could quickly become out of date. She also points out that the general claims are that AI note-takers will cut time spent on documentation, improve the quality of documentation, and help therapists avoid burnout.
There are generally four models, she said in a Zoom interview:
- The client and therapist join an online session; the platform listens in and records, creates a transcript, and then creates a progress note from the transcript.
- The therapist uploads a recording of the session, and the AI creates a note.
- The therapist logs into the platform and dictates notes, and the AI produces a summary.
- The therapist logs into the platform and types in a summary, and the platform creates a progress note.
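To make the first of these models concrete, here is a minimal, hypothetical sketch in Python of the recording-to-transcript-to-note pipeline. It is not Upheal’s, Alma’s or any other vendor’s actual code; the transcribe_audio and generate_note_from_llm helpers are placeholders standing in for whatever speech-to-text and language-model services a given product uses.

```python
# Hypothetical sketch of the "record -> transcript -> progress note" pipeline
# described above. The helper functions are placeholders, not any real vendor's API.

def transcribe_audio(audio_path: str) -> str:
    """Placeholder for a speech-to-text service that turns a session
    recording into a raw transcript."""
    raise NotImplementedError("Swap in a real transcription service here.")


def generate_note_from_llm(prompt: str) -> str:
    """Placeholder for a large-language-model call that drafts the note."""
    raise NotImplementedError("Swap in a real language-model call here.")


def draft_progress_note(audio_path: str, treatment_goals: list[str]) -> str:
    """Turn a session recording into a draft progress note.

    The output is exactly that: a draft. As the clinicians quoted in this
    article note, the therapist remains responsible for reviewing and editing
    it before signing, because the model can omit material or hallucinate
    details that were never discussed in the session.
    """
    transcript = transcribe_audio(audio_path)
    prompt = (
        "Draft a clinical progress note from this therapy-session transcript. "
        "Tie observations back to these treatment-plan goals: "
        + "; ".join(treatment_goals)
        + "\n\nTranscript:\n"
        + transcript
    )
    return generate_note_from_llm(prompt)
```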

Does AI write notes faster? she asks. Sometimes, but not always. Are the notes of higher quality? Not if they don’t tie the session back to treatment-plan goals, symptoms and diagnosis. And of course the note needs to be edited to catch mistakes and satisfy those requirements, she emphasized.
One current trend, she said: Most of the AI notetakers are not integrated with an electronic health record, but the industry is moving in that direction. That would eliminate a step — copy-pasting the finished note into another system. The therapist would simply turn the feature on in the electronic health record and the process would proceed from there. Some such integration already exists, she said, and it is increasing.
Reviewing the results
She has reviewed several specific note-takers, she said, using a simple methodology: Sign into a platform, create a progress note, then assess the performance. She said Upheal was among the tools she would be least likely to recommend; while it had many strong points, she said in her Upheal review, it was wordy, made some major omissions, and made some eye-popping errors, for example mentioning a pregnancy that was not discussed in the session and recording names incorrectly.
McCaffrey also said she could see how the AI note-takers would be useful in supervision, sending an alert about something that had been missed or noting patterns. But as far as overall use goes, she said, the editing load is substantial, and it’s not clear the tools are a silver bullet. “As someone who trains therapists on mental health documentation, I am not worried that AI is going to negatively impact my business in any way, anytime soon,” she said.
That said, she noted that her clients include a number of therapists who find documentation challenging — people with ADHD, or with technology or time-management challenges. These therapists, she said, are embracing the technology even with its flaws. Older therapists, and those who do not find documentation as challenging, may be less eager to embrace it.
Business associate under HIPAA
Part of the conversation revolves around the agreements on use required by the Health Insurance Portability and Accountability Act of 1996. To comply with the act, companies covered by HIPAA need to guard data, and if “a covered entity engages a business associate to help it carry out its health care activities and functions, the covered entity must have a written business associate contract or other arrangement with the business associate that establishes specifically what the business associate has been engaged to do and requires the business associate to comply with the Rules’ requirements to protect the privacy and security of protected health information,” the C.M.S. HIPAA web page says. With such a business associate agreement in place, once the information is on the associate’s platform, the associate becomes responsible for any confidentiality breaches that occur on its end.
It is worth acknowledging that the “AI-first” movement emanating from Silicon Valley seeks to make AI responsible for managing its own power and development. In truth, a lot of the romanticizing of AI seems to ignore things like the hallucinations observed by the Alma therapists, and to kick the can down the road: “It’s a beta version,” or “Of course it’s not fully functional, but it will be.” As Brian Merchant writes at the tech blog “Blood in the Machine,” AI is in its “Empire era,” in which JD Vance can claim that “Elon Musk and the DOGE boys are at work excitedly gutting the federal government under the auspices of an ‘AI-first strategy.’”
There is a generational divide in AI-notetaking use, Reamer said. He gave a training recently to a group of therapists in the greater Washington area. “The comments clearly indicate this generational phenomenon. I don’t mean to be ageist, but the older therapists said, ‘I didn’t sign up for any of this stuff. I use pen and paper, or maybe I have a Word document on my computer summarizing notes.’ And then some of the younger therapists are like, ‘Hey, this is cool. Yeah, I don’t have any problem with this.'”
What about regulation?
Regulation is nascent, Reamer said, perhaps because the field is so new. The Utah Department of Commerce set up the Office of Artificial Intelligence Policy last May, Reamer said, and it is trying to come up with reasonable regulations pertaining to the use of AI. He said officials there told him they were starting with behavioral health.
The University of Southern California has also launched the U.S.C. Center for AI in Society, he said.
“I don’t want to be glib or sound judgmental, but I’m just worried about well-meaning clinicians who are, to use a phrase I used earlier, drawn to the shiny new toy, and don’t take the time to really understand the limitations, the ethical implications, the risk management implications, because it’s very seductive. So yeah, I think we’re going to see some sort of bad outcomes here.”
Another concern is the storage and ultimate use of the recording, transcript or note. Alma tells therapists it expunges records, but therapists frequently say they suspect that any data the companies collect may be used to train AI for a number of purposes: predictive analytics, or creating an AI chat therapist, for example.
State laws
State laws may also come into play.
A New York lawyer, Bruce Hillowe, wrote in a recent quarterly newsletter: “Using Artificial Intelligence (AI) to create therapy documentation. Many clients have asked about this recent development. AI tools require that a therapist either conduct telehealth sessions or upload audiotapes of sessions including of in-person sessions, and then with that data the tools may generate summary session notes; offer ‘insights’ about patients for private review by therapists, … suggest evidence-based objective assessments and interventions for use by the therapist; and give objective data about sessions such as their duration and amount of time that the therapist and patient spent silent and speaking. Therapists using the tools find that they save time, although they do not eliminate the need for the therapist themselves to review and edit the AI-generated documents. Therapists remain fully and exclusively responsible (and potentially liable) for the adequacy of records, even ones generated by AI. The most popular producers of AI for therapists seem to be Upheal and Mentalyc.
“In its criminal law, New York is a ‘one-party’ consent state, which means that only one party to a conversation must consent to recording it. However, to record healthcare conversations, federal HIPAA regulations require that practitioners obtain written patient authorization. Special consents specifically referencing the use of AI are advisable, including stating the purposes and use of the recording. That consent should also characterize the recordings as ‘psychotherapy notes’ under HIPAA and ‘personal notes and observations’ under NYS PHL 18 that are maintained only temporarily, destroyed at will by the therapist and inaccessible to patients. Some AI programs automatically delete audio files after analyzing them. Other HIPAA requirements are that a Business Associate Agreement be signed with the AI program if private health information (PHI) is to be entered into their platform, and that a security risk assessment be conducted using HIPAA guidelines and documentation.”
What you can do
Reamer said he tells seminars and classes: “Look, you’re grown-ups. So you get to decide whether you want to use this technology or not. I understand some of you probably will. My advice is, proofread this stuff. You pull out a huge magnifying glass, read every word, make sure it’s accurate. Edit, edit, edit, before you click enter. Whether people are going to do that, I don’t know.
“Part of my job as an expert witness is to review subpoena documents. I spend tons of time doing that, and I find lots of problems.”
Barbara Griswold, author of the blog “What Every Therapist Should Know About Insurance” and the book “Navigating the Insurance Maze,” is a therapist who advises other therapists on the business side of running a practice. She wrote in an email: “AI is being touted as THE answer to reduce the burden of documentation. But experts I’ve talked to say it isn’t the magic bullet we think it is. And while I’m not an expert on AI, my general sense is that therapists are rushing to embrace AI without understanding some of the dangers or ethical pitfalls, or even knowing what questions to ask before utilizing AI in their practice.” Here is her article on this topic.
Her article recommends McCaffrey’s series of YouTube videos, which lay out some fundamental principles and resources and review some existing AI note products.
What therapists should do, McCaffrey said:
- Make sure you have a business associate agreement, or that the platform has one.
- Always review the note and any associated output carefully to make sure they are accurate.
- Test different platforms to see what works — not all are created equal.
- Have your criteria for success in mind.
- Keep in mind that it can take time to learn and train the AI — for example, giving directives about what you want your notes to look like, and adding to them if needed.
- She offers a free AI ethical use rubric here. Her YouTube playlist for AI for notes is here.
