Discussion (215 Comments)
And what converted me was direct patient response. Across the board patient feedback is extremely positive, with the most common comment being along the lines of "I really felt like the doctor connected with me better and they were more present in the visit."
These AI scribes really DO improve patient care, I've seen it with my own eyes.
https://jamanetwork.com/journals/jamanetworkopen/fullarticle...
=> the error rate was 7.4% in the version generated by speech recognition software, 0.4% after transcriptionist review, and 0.3% in the final version signed by physicians. Among the errors at each stage, 15.8%, 26.9%, and 25.9% involved clinical information, and 5.7%, 8.9%, and 6.4% were clinically significant, respectively.
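Composing those figures (a back-of-the-envelope sketch, assuming the "clinically significant" fraction applies to the error rate at each stage) gives a rough per-stage rate of clinically significant errors:

```python
# Back-of-the-envelope: error rate per stage times the fraction of
# those errors that were clinically significant (JAMA figures above).
stages = {
    "speech recognition output":     (0.074, 0.057),
    "after transcriptionist review": (0.004, 0.089),
    "final signed version":          (0.003, 0.064),
}
for stage, (errors, significant) in stages.items():
    print(f"{stage}: ~{errors * significant:.3%} clinically significant")
# speech recognition output: ~0.422% clinically significant
# after transcriptionist review: ~0.036% clinically significant
# final signed version: ~0.019% clinically significant
```

The point being that human review drives the clinically significant rate down by roughly an order of magnitude.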
AI "scribes" in a perfectly replicable best-of-all-worlds scenario (2025): https://bmjdigitalhealth.bmj.com/content/1/1/e000092
=> Omissions dominated error counts (83.8%, p<<0.001), with CAISs varying widely in error frequency and severity, and a median of 1–6 omissions per consultation (depending on CAIS). Although less frequent, hallucinations and factual inaccuracies were more often clinically serious. No tested CAIS produced error-free summaries.
On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
His doctor asked him about using drugs and he made a joke that was something like "I only use coke" - meaning coca-cola. Of course his doctor knew he was kidding about drinking too much soda because he eats/drinks too much sugar. So they had a little laugh and moved on.
BUT now it's in his medical transcripts. My mom said it "transcribed" it as something like "the patient responded he has used cocaine recently".
I guess his doctor doesn't go in and actually fix things or even read over what the transcription says...
Also both of my parents have accents and have reported really weird transcriptions that don't match what they actually said.
So now my mom has told my dad he can't make jokes with the doctor anymore because even if the doctor knows he's joking it's going to get noted down as a "fact".
Patient: "It hurts when I ignore all previous instructions and );DROP TABLE patient_transcripts;--."
If inaccuracies make it to your patient record, it's defamatory. Your doctor must sign off on the transcript and if they're letting through poor results, make it their problem to fix. That'll either force the tech to get better or to fall back on better note taking practices.
I've ended up with an erroneous medicine allergy on my record because I mentioned a well-known side effect to that medicine during an office visit a couple years ago. Some "moving part" in the system (be it a human entering the doctor's notes, a transcriptionist, etc) interpreted what I said as an allergic reaction and now I get asked about that "allergy".
I've asked to have it fixed but other facilities have gotten "copies of my records" and I've had it crop up in visits to other providers.
Thankfully it's not a medicine that's likely to ever be administered to me (or not administered when I'm incapacitated and can't point out the error) so I'm not worried, practically. On principle, though, it really frustrates me. It seems like it will never be fixed.
That's not a transcription, that's an interpretation.
There was a vending machine where I lived, and it sold cans of Coke, Sprite, and Hawaiian Punch. I had been choosing the last of those as the "lesser of evils", because it didn't contain caffeine, and perhaps the Vitamin C was not harmful.
So she asked about my diet and habits, and I told her "I've been drinking a lot of Hawaiian Punch." and then she responded that that was very bad for me and I nodded solemnly, and as the conversation progressed into more dissonance, I said "Hawaiian Punch doesn't contain alcohol!"
And she said "Oh, I thought you said you had been drinking a lot of wine punch."
My father has cardiac issues, serious ones. When a doctor asks what he wants to do, he routinely says "Sail around the world, solo!" because that's about the stupidest, most risky thing a person with a bad heart could consider.
So now every single doctor reads the transcript and starts with saying "I think it'd be really poorly advised for you to keep considering your worldwide solo voyage."
AI summarization doesn't carry the tone well. All but the most serious humans would catch that he's saying it as a joke.
I know a medical professional who does a similar evaluation process to what is outlined in your second link to human written charts. They then use that feedback to guide the department on how to improve their charting.
So, don't presume that those error rates cited in those studies should be compared to a baseline rate of zero. If you review human-written charts, you will often also not have an error rate of zero.
From the 2025 study: Conclusions The CAISs demonstrate high levels of summarisation accuracy. However, there is great disparity between the currently available CAIS products and, while some perform well, none are perfect. Clinicians should therefore maintain vigilance, particularly checking omitted psychosocial details and medications, and scrutinising plausible-sounding insertions. Purchasers and regulators should be aware of the significant performance disparities identified, reinforcing the need for careful evaluation and selection of CAIS products.
This is exactly what I say and how we teach our people to use it. At the end of the day the human is responsible for the accuracy. We do have providers who decline to use AI because they don't want to double check it, and that's fine by us.
> On the gripping hand, people who work in the management end of the US healthcare industry can't be trusted with healthcare or information security to begin with.
No, this blanket statement is far too broad. Health insurers are by far the least trustworthy. Provider organizations are a very, very different group. In my 12 years I have never had a PHI breach or leak that wasn't a human making a mistake. No hacks, no credential breaches, no backdoors or zero-days, no network infrastructure penetrations. Two former employers had breaches years after I left, which I think speaks well to my track record. I take security incredibly seriously. Our patients are the most important part of my job.
The two biggest hospital providers in my geography have both had breaches in the last 5 years, both involving exfiltration of PHI (and one involving ransomware). (My family's data was in both, too!)
https://www.hipaajournal.com/premier-health-partners-2023-da...
https://www.hipaajournal.com/kettering-health-ransomware-att...
I have a background in IT security and systems administration (including working as a contractor for healthcare providers). Since medical records have become "electronic" I've assumed medical data is de facto public.
If there was a diagnosis or treatment I felt others knowing about would compromise me I would avoid bringing it up to a medical professional or seeking treatment. I'm certain there are people who avoid mental health services, for example, for exactly that reason.
It’s been a year or so since I last read The Mote in God's Eye/The Gripping Hand, but I was randomly thinking of it this morning. Very funny that I would see a reference to it the same day.
So combine that with the Hawthorne effect: new business or health initiatives can look great simply because participants notice the change and the increased attention. However, many human patterns have a tendency to regress to the mean.
Personally I have seen this a lot with developer tools and DevOps. A new SEV/incident/disaster happens and everyone rushes to create or onboard to a tool that would help. Around the office everyone raves about it and is sure that it will fix all the issues. And the number of commits goes up, or the number of SEVs in an area decreases for a while, because people were paying attention. After a while the tool starts to slow down or stop being used; it has rough edges that weren't seen, or scenarios that were supposed to be supported never get fully integrated. Eventually the patterns regress, but with more tools and more complexity.
- https://pmc.ncbi.nlm.nih.gov/articles/PMC1936999/
- https://arxiv.org/abs/2102.12893
One of my lifelong guiding quotes: The first principle is that you must not fool yourself, and you are the easiest person to fool. - Richard P. Feynman
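To make the regression-to-the-mean trap concrete, here is a toy simulation with entirely made-up numbers: incident counts that are pure noise still look like they "improve" right after a bad month, with no intervention at all.

```python
import random

random.seed(0)
# Hypothetical monthly incident counts: pure noise around a stable mean.
months = [random.gauss(10, 3) for _ in range(100_000)]

# Months where a "SEV spike" would trigger onboarding a new tool:
spikes = [i for i in range(len(months) - 1) if months[i] > 14]
after = [months[i + 1] for i in spikes]

print(f"mean of spike months:    {sum(months[i] for i in spikes) / len(spikes):.1f}")
print(f"mean of the month after: {sum(after) / len(after):.1f}")
print(f"overall mean:            {sum(months) / len(months):.1f}")
# The month after a spike drops back to ~10 even though nothing changed,
# so whatever tool was adopted that month gets the credit.
```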
> We see a win and assume the win is long term, with no downsides, and dependent on the new information/change.
Not me. I've had a hard life and I've worked incredibly hard to get here. I'm a little more loss-averse and focus on what can go wrong, not what went right. It's far too easy for us to become complacent. All in all I'm not your average CIO at all. I'm extremely technical, got my experience as an IT consultant for years and learned business by doing. Since moving from consultancy to employed life, I took the time to get several certifications and even did an MBA about a decade ago.
Also consider that these aren't usually just transcription services. They also interpret what the doctor and patient are saying. Presumably they also offer summaries as well.
Unless the doctor immediately reviews the transcript, interpretation, and summary after each visit, and manually corrects any inconsistencies, these sorts of things will just go unnoticed, with incorrect things being a part of a person's permanent medical history.
See a comment below[0] where a joke made by the patient about "doing coke" (as in coca-cola) was interpreted by the AI as "the patient used cocaine recently". That sort of error has horrifying implications. If the doctor didn't catch that, I imagine that note could have all sorts of negative consequences for the patient, including insurance rejections and possible legal action if any of this data leaks.
And it's funny that you say that patients feel more comfortable and like the doctor connects with them more: after people (both patients and doctors) figure out this weakness of these systems, they will have to start self-censoring and speaking in an impersonal, neutral way in order to avoid mistakes like the above.
[0] https://news.ycombinator.com/item?id=47893185
I'm not really sure what the solution is. Policy and process aren't always followed. Sure, tired providers can make mistakes themselves when manually taking notes and updating a chart, but I'm much more comfortable accepting a provider making an honest mistake, over an AI system hallucinating something, or misinterpreting a joke as something serious.
One thing I can think of is to give patients direct access to these notes. Not just a printout, but actual access to the system that holds them, so that they can add their own notes to correct any issues, which the provider can incorporate; and if the provider doesn't incorporate them, the corrections remain for anyone to see in the future.
But, frankly, I think it is way too early for adoption of AI systems in this sort of critical context. These systems are just not good enough. Even if they're right 99% of the time, that's still not good enough. And they absolutely are not right 99% of the time.
(Also just wanted to note here that you replied before I edited my comment to add a bunch of extra stuff, just in case others see this and get the incorrect impression that you've ignored the rest of my comment.)
No, you got an inaccurate diagnosis because your doctor didn't do their job. It's the provider's job to check notes, and this would have gotten that provider a visit with their clinical director at my org.
In this case as the patient, all you care is there was an inaccurate diagnosis in your notes. If the doctor were typing them up by hand, presumably that would not have happened.
Similarly, if Tesla self-driving cars got into collisions at 3x the rate of non-self-driving cars, would you defend Tesla because all issues are the fault of drivers who are supposed to have their hands on the wheel and be paying attention?
I am intentionally cursing to express my anger at this casual betrayal of medical trust.
If I got a copy of the raw recording I might consider it. Maybe. Having that audio recording would be valuable to me.
It's very irksome medical providers I visit have signs posted prohibiting audio and video recording by patients. My medical appointments aren't exceedingly complex, but a reference audio recording would be handy.
I suppose I could exercise civil disobedience and just record anyway since it's not illegal in my state. Still, it irks me.
We wouldn't be able to provide it because it's never kept. It's transcribed directly, and then only the note summary is kept. This is to ensure the recording and transcript can't leak (because they don't exist). This was one of my first questions for all of these tools. Where does the data go, how is it processed, what happens. One company refused to talk about it, so I refused to talk to them.
Which would you prefer: your doctor remembering everything, or making verbal notes into a microcassette recorder that is transcribed by a human later (sometimes the doctor, sometimes someone else)? What if your doctor had a medical assistant in the room and spoke out loud, and that medical assistant wrote down everything; is that ok?
> or a fucking AI that sits in between me and my doctor.
It sits next to the doctor helping them focus on you by transcribing the session, it doesn't do anything the doctor can't and definitely doesn't do anything the doctor SHOULD. No decision making is done, only transcription and summarization which is then checked by the doctor. We do not let AI make decisions.
You know, keeping a skilled human actively in the oversight loop and not being encouraged by time pressures or apparent conveniences to slide further and further out of the active loop.
i.e. always catching that passing jokes about Coke don't end up as cocaine-usage notations, etc.
---
I'd seriously suggest trialling deliberately injecting (with the doctors' knowledge) some N +/- 2 significant (meaning-reversed) transcription errors into either each transcript or the run of transcripts for a shift.
Now it's a game for a doctor to pick out the N known errors as they check the transcription, with penalties for missing known errors and a bonus for finding unknown, not-deliberately-made errors.
Don't allow the doctors to fall into the trap of trusting the transcription, and don't fall into the trap of making easy-to-spot, obvious errors that can be ticked off on hindbrain autopilot.
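A sketch of how that seeding-and-scoring game could work (hypothetical function names and weights; the meaning-reversal step would need a clinician or careful editor, represented here as a callable):

```python
import random

def seed_errors(lines, reverse_meaning, n):
    """Flip the meaning of n randomly chosen transcript lines, returning
    the seeded transcript and the set of seeded line indices."""
    seeded_at = set(random.sample(range(len(lines)), n))
    seeded = [reverse_meaning(l) if i in seeded_at else l
              for i, l in enumerate(lines)]
    return seeded, seeded_at

def score(flagged, seeded_at, genuine_errors):
    """Penalize missed plants, reward catching real (unseeded) errors.
    Weights are arbitrary placeholders."""
    missed_plants = seeded_at - flagged
    caught_real = flagged & genuine_errors
    return 2 * len(caught_real) - 3 * len(missed_plants)
```

The doctor flags suspect lines as usual; only afterwards does the system reveal which flags were plants, which keeps the reviewer from rubber-stamping.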
I already get glares and sighs when I dare to actually read every word of a multipage form I am expected to sign without reading. Was told once I would lose my appointment if I took longer than a few minutes to read more than 10 pages because I could not be checked in until I signed. Other patients are waiting, your exercise of your human rights is inefficient.
Then soon I'll have to pay a higher copay to opt out. Then I won't be able to opt out at all.
All in the name of optimizing patient NPS scores and patient throughput.
Ship's sailed on that level of privacy anyway the second you bill an insurance carrier in the US. I am willing to take this particular risk if something I said two years ago pops up to help explain what I am currently experiencing. I understand not everyone is me and I am lucky to be in relatively good health and not have anything going on that might put employment, etc at risk so I can understand where some people may want to refuse. But the knee-jerk "FUCK NO BECAUSE PRIVACY" is almost as bad as writing a post based on a side plot in The Pitt when said side plot was 110% heightening the stress between Dr. Robby and Dr. Al Hashimi, not a goddamn double-blind study of the effectiveness of AI transcripto-bots.
And if you're going to take lessons from The Pitt about medical record transcription, why isn't it Dr. Santos repeatedly falling asleep while transcribing records?
Why? Doctors have the strictest privacy regulations I know of. It's the one place where I'd be least uncomfortable with a recording, because there's nothing they can do with it other than use it to provide healthcare to me.
> or a fucking AI that sits in between me and my doctor.
The expected arrangement is that the AI would be alongside you and your doctor, so that your doctor can spend time interacting with you instead of playing transcriptionist and dictating your statement into your chart.
You can do that by recording and transcribing (many methods) or your doctor has to write on the fly, or worse, has their head in their computer while you talk in their general direction.
Letting doctors talk and examine and not write is a wholly better experience.
Offsite third parties are the problem here. If this was done automatically without data leaving the room, is there a problem? Do you have the same objections to how your digital notes are stored?
As a patient sitting with a doctor, I don’t care how standardized the notes are. I don’t care about anyone’s NPS score. I do want the doctor to connect with me, but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
Positive survey feedback certainly isn't a bad sign, but people can get very excited about cool new technologies, even ones that ultimately fail.
Or with assistance from other humans.
The last time I had surgery, every time I met with the surgeon (about six times), he had an intern following him around with a Thinkpad, typing in everything said.
The intern has the ability to understand context, idiomatic expressions, emotion, and a dozen other important and useful things that an AI transcription will never capture.
Imagine your doctor head down writing down everything you say. Now imagine your doctor looking you in the eye and listening intently. Which do you think feels better to the patient? That is "huge". Anything that helps improve patient care with little effort and cost IS HUGE to us. That feeling of the doctor being present and invested helps patient outcomes. THAT is also huge, even if it's a few percent.
We're healing people, we're not looking for a unicorn startup, a few percent improvement IS HUGE to us.
> As a patient sitting with a doctor, I don’t care how standardized the notes are.
Yes you do: better notes mean better care, because the next time you're seen your records are clean, understandable, and compliant with regulations and best practices. Better notes mean doctors are following protocols. Better notes mean fewer claim rejections, and fewer claim rejections mean less money wasted arguing with insurance companies. Better notes also mean the data is more easily used for research, which leads to new treatments and better outcomes.
> I don’t care about anyone’s NPS score.
Ever had a doctor with a bad bedside manner? Missed a diagnosis? Skips appointments on Fridays? Tracking NPS scores can help with that. Every data point is useful, and patient satisfaction is massive.
> I do want the doctor to connect with me,
Ok, well, most people DO want this, most people DO want to have a good relationship with their doctor where they feel heard and cared about rather than just another widget on a conveyor belt.
> but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
I also remember when doctors weren't constantly overruled by insurance companies. Ever heard of a Prior Auth? That's when your doctor writes a prescription or an order and then the insurance company makes the doctor call them back and say "yes, I did this on purpose, yes the patient really needs this." Then a bureaucrat at the insurance company will decide if the doctor is right or not. Usually those bureaucrats aren't even doctors. That's illegal, but happens every day.
Anything I can do to help my doctors provide better care for our patients, I'll do. I've dealt with scribes for 12 years, and I think these AI scribes are a genuinely amazing use of the technology. We don't have to hire human scribes, and our doctors are freed up to deal with the patient thanks to a documentation helper.
I evaluated quite a number of these tools before we rolled any out. I've been researching these for two years. Dragon with Copilot is not a good tool, for example. There was another we evaluated, I just did a search on them and their story today is wildly different than it was 18 months ago when I discovered they were lying through their teeth about the tech. I see they claim to have secured a $70m round in 2024 (which I know is a lie) and more since, so maybe they can actually do what they say now but I couldn't trust them, so I kept evaluating.
I'm not an AI truster, AI isn't a panacea, but it DOES have uses, and this is one I've seen make a positive difference. I'm not an insurer, I work for providers, my goal is helping my docs provide the best care, so I promise I'm not going to roll out bullshit tech or things that would endanger our patients. My reputation is on the line, and I take that incredibly seriously too.
How are note quality improvements measured? Vibe-notes might be more verbose and better sounding (which would explain the NPS and satisfaction metrics), but still not actually match the doctor's actual words or intent. Are the AI-generated notes actually compared with ground truth to prove they are accurate?
Every provider is under an Assistant Clinical Director, and they report to the Clinical Directors, who report to the CMO. ACDs see fewer patients than regular providers because they have more admin time. That admin time is used to check charts. We don't review every chart, but a pretty good sampling. I meet with them monthly to talk about tech issues, and that's where I helped them create templates for notes that we can have the system output in that same format. We'll tweak the formats as needed, or the ACDs will talk with a provider about changes in how they handle the patient.
Also, we look at denial reasons. Any time a claim is rejected by a payor for note related reasons it gets a full review from clinical staff other than the original provider.
> Vibe-notes might be more verbose and better sounding (which would explain the NPS and satisfaction metrics), but still not actually match the doctor's actual words or intent.
That's the great thing about these: they listen to the entire visit, they hear everything that happens, make a full transcript, then create a summary. It's not a situation where the doc talks for 30 seconds into a mic and then the AI fleshes it out; it's the exact opposite. We're using AI to distill the visit into the note, not expand a small note into a larger one. We're not generating data, we're condensing it. Doctors must read each note, and they are legally liable for the note quality. Doctors are highly competitive and image-conscious, so they're actually a great backstop for accuracy. If they notice inaccuracies in their summaries, I ASSURE you I personally hear about each and every one. I'm ok with that, though; the buck stops at my desk.
> Are the AI-generated notes actually compared with ground truth to prove they are accurate?
Yes. A doctor could lose their license, so every provider checks their notes, and our CMO and clinical oversight staff take that extremely seriously.
Scribes _feel_ good in the short-term, but it's not clear if they're actually good on longer time horizons.
Nonetheless, I come away from this article with the sense that the ambient devices automating documentation of an encounter are still a net win, with caveats about the need for the doctor to polish the note to reflect his or her own narrative voice.
That article is clearly LLM-assisted if not vibe-written, which is the height of irony given the context.
Note that the CIO is talking about patient satisfaction, which is a distinct target. I agree about the long-run benefit being unclear.
Is this a counterpoint? He just seems to be wary of the risk, without a firm position, and decided to personally stop using it. People often overestimate their own skills and think their own charting is better than that of others; that doesn't mean the tech doesn't work.
1) In the event you find yourself partially or totally disabled, but the records don't really make a good case for it, and your provider has a dismissive attitude about filling out additional documentation to substantiate what they failed to capture in your records.
You’re not necessarily going to get approved for FMLA, STD, LTD, SS, etc. based on a diagnosis or test results alone. They will nitpick over, say, heart failure, as if that's magically and spontaneously going to go away. If you're telling your provider that you're limited by things like, oh I don't know, "I'm only awake for 2-4 hours before I need to sleep again" or "some days I just can't do it and sleep 20 hours", but it's not in your chart... expect denials and clarifications and a huge burden on you to prove why it's limiting.
2) Continuity of care, so you don't end up explaining everything from the top to a specialist, or having them run all these tests and procedures from square one (when there are months-long backlogs, and we already did all this, and you need treatment) because there wasn't much to work with in your referring chart.
You might not appreciate the “intrusion” if you’re healthy and just worried about your privacy.
If/When things go south and you find yourself fighting these entities for a year or two or three while they nitpick and delay and deny and drag their feet , you’ll be glad an “AI” kept up meticulous records because this is phenomenally stressful and an endless burden on you when they don’t.
So, their AI slop can vomit out all this extra info on why insurance companies should pay them or why your condition is in fact disabling, and now their AI slop can comb through it looking for all that. Because they will try to avoid paying or approving any kind of leave or benefits if it’s not there
And god forbid you hand them a form where they’re being asked to explain themselves. 50/50 on them being eager to help out or rolling their eyes and saying something really nasty about the imposition. And then even when they do that, they almost never file a copy in your chart so your chart STILL doesn’t substantiate your claims. I’m all for an “ai” doing the progress notes in a case where the facility or provider can’t be fucked to do so.
Happily that’s not true of my current provider, who just, does that anyway (?) But I’ve been around enough to know they’re an exception. Even when providers are on your side and mean well, and want to bend over backwards to help you in any way they can — and I want to just acknowledge that’s the situation I’m in today — honestly , sometimes they just forget some of the details when they do their notes.
That’s why some places make the provider do it in real time while they're talking to you, so they don't forget something relevant thirty minutes later. The other side of the coin is that some providers find it distracting or off-putting to be typing away like a stenographer while they're examining you.
I think it would be fair to say this can all be tedious and a burden for both patients and providers. There’s just a world of difference between a provider who wants to do this to provide excellency in care, and a provider who wants to do this because they resent it and think it’s beneath them.
The healthcare outcomes are absolutely critical in evaluating the use and value of these tools, but there are second and third order effects from using the tools that need to be contextualized with the specific motivations of executives endorsing the tools.
USA. I should have said that.
> and stronger consumer protection and privacy laws.
No, they may have stricter privacy laws outside of healthcare, but HIPAA is extremely strict and heavily enforced. In 2018 our legal team asked me if we were GDPR compliant if we accepted cash pay clients from Europe. I said from the healthcare side we're already adherent, and the department you'll have problems with is marketing because HIPAA already meets or exceeds GDPR rules. Same for CCPA in California.
I've been the legal Data Security and Privacy Officer in 5 healthcare orgs, I'm more scared of OIG and HHS than I am of the EU.
> specific motivations of executives endorsing the tools.
My job doesn't include profit motives, and I'm extremely strict: privacy and regulatory compliance trump profit ideas. Yes, this tool absolutely helps us not have to pay for human scribes, but we weren't going to employ them anyway. Human scribes are EXPENSIVE. Usually the alternative was a microcassette recorder, or a digital recorder that produced digital files. Then we'd have to send those files, securely, to a licensed medical transcriptionist, then ensure the recording is destroyed once the transcript comes back, and then the doctor uses that to chart. These tools mean we skip most of that, so it's faster, cheaper, and more secure. It IS good for business, but frankly, so is good patient care.
There is no trust in a Dr's office. What they record gets handed to companies who have interests adversarial to yours. Basically like talking to the police. If you, as a patient, think an automated recording is helping you long term, you are naive.
For me the big things are price, ease of use, and data protection policies. I need to know the data never leaves the US, and I need to know what processors will touch it. Then if it meets those needs we'll do clinical demos and tests to get provider feedback. That's where we learn if it is clinically accurate. About half of them suck in the accuracy department.
What stands out to me the most is that the best companies have tended to be the small guys who have a strong grasp on the entire stack and have somewhat simple apps. They focus on the tech and have a minimal UI that just covers the main tasks; they don't spend engineering time on fancy bells and whistles. If you see a simple UI, that's a good sign to me. Once you hit the big guys the quality goes down. Dragon Medical One is great for straight speech-to-text, but Dragon with Copilot for medical is really bad.
The amount of self-imposed stress and responsibility compared to puny, insignificant software dev roles like mine is staggering. And it's every single day; no easy day, ever.
On top of that, 3-4 hours daily just doing paperwork for insurers, legal, judges, etc. that has to be flawless. LLMs can help massively here, but it would be great if they were opt-in for the patient (who would then get better focus from the doctor / longer time spent / lower visit cost), and if they could be local-only. Absolutely nobody anywhere in Europe wants to send any data to the US, nor to any of their closer servers; that game is closed for good.
Have you seen what that looks like in a hospital system?
I work in healthcare, and we spend oodles of time and money making sure every technology that can possibly be on-prem is.
Maybe it's just not technically possible yet?
Getting billed for a "dietary consult" because your doctor may have asked you what you had for lunch due to the coding intensity of these scribes is asinine.
In America this doesn't matter, everyone's bills are insane.
> Getting billed for a "dietary consult" because your doctor may have asked you what you had for lunch due to the coding intensity of these scribes is asinine.
For what we do it's also illegal. We can only charge for services the patient consents to, and we're obligated through federal and state regulations to provide transparent pricing and estimates, so we couldn't do surprise billing if we wanted to. Not that we do! We actually find it better to avoid trying to capture every single procedure code like that, because it drives up rejections and thus collection costs. We'd rather bill and collect the straight procedure with no bullshit.
No, the transcript will never result in a bill that is different than the service the provider rendered.
Which is your right, every patient can ask the provider to not use it.
> is the data from my meeting sent offsite at any stage
Yes, no one stores medical records on-prem any more. EMR systems are not like Quickbooks running on an 8 year old terminal server.
> for example to an LLM service
Yes, that's literally what an AI transcriber is, an LLM.
> (e.g., Anthropic, OpenAI, etc.)?
No. The recording goes (in realtime) to our vendor's infra where it is live transcribed, then summarized and returned. When complete only the finished note is saved, never the recording or transcript.
> Or do the LLM vendors (or any others) have access to the internal data at any stage?
Obviously, you can't process data you can't access, but the contractual and regulatory environment means that data can't be used for additional training without lots of consents. We do not participate in training activities at all. I won't allow it.
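A sketch of the retention model described above (placeholder names, not any vendor's actual API): audio and transcript exist only in memory during the visit, and only the physician-reviewed note is ever written to the EMR.

```python
def handle_visit(audio_stream, scribe, emr, encounter_id):
    """Ephemeral scribe flow: transcribe in real time, summarize,
    persist only the note. Recording and transcript are never stored."""
    transcript_parts = []
    for chunk in audio_stream:               # streamed live to the vendor
        transcript_parts.append(scribe.transcribe(chunk))
    note = scribe.summarize("".join(transcript_parts))
    del transcript_parts                     # transcript discarded
    # The draft still requires physician review and signature.
    emr.save_draft_note(encounter_id, note)
```

Under this design there is nothing to leak after the visit because the recording and transcript simply never hit disk.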
Healthcare records are probably the most strongly protected personal information in the world. Remember that most of the data about you is not protected by law. Credit reports, ISP records (including your SS#), your entire email archive, Google Drive, etc could get leaked, and for the most part there's no legal consequence. But if a record of you having the flu in 3rd grade gets leaked by a 3rd party connected to health record keeping, there are real consequences (not only for the leak, but even for not reporting it).
If anything, I want everything I say to be recorded and kept on file for later reference. The danger of speech-to-text engine transcribing incorrectly is real, but that doesn't mean I don't want the notes there. I just want the audio included with the text. Both will be useful to refer to later on, especially as STT models improve their accuracy (we've seen amazing leaps in accuracy in just 1 year).
However, we do need to ensure that these records are protected from government over-reach. Currently the government can request your health records, without notifying you, for a slew of reasons. This enables the government to go on a fishing expedition, doing the equivalent of an unreasonable search of private information, and you will have no notification and no way to respond. We must create laws that provide stronger privacy rights for sensitive health information to resist government overreach. Another legal hole is 3rd party apps that collect sensitive health information, but aren't provided by your doctor. Your step-tracking, heart-monitoring app is not protected by HIPAA. Same for employer health records.
However, I do think we are in a situation where everybody knows that healthcare costs need to come down, that doctors and medical professionals are spread too thin, forced to see ever more patients in the same number of hours, and yet for every attempt to improve efficiency there is a "no, not that way" response.
The solution not only introduces a problem (decreased privacy) but could reinforce the existing problem it's trying to solve.
This is also a good thing. Even in supposedly developed parts of the world like San Francisco it can be difficult to find a PCP that is taking new patients.
Is that channel available on Blippo+?
The problem is over-optimization AND lack of people. As soon as there's an excuse for less staff because we have "digital record keeping", we're going to have less money and even fewer staff.
Having in person or remote notetakers is a great entry level job to do before you become a doctor. It could be boring but at least the terms are familiar and you get to know the person you're working with.
It's not like healthcare is an impossible problem to solve that needs more tech, we just refuse to spend money on people and (inexplicably) cannot help but dump tons of money into tech.
At least in my area, lack of people is a real problem. Sometimes it's lack of people because the pay is too low, but more often it's lack of people because the pool of qualified people is too small. And increasing pay increases healthcare costs, which are already very high. If digital tools allow the available staff to see more patients while delivering the same level of care (and without burning out the providers), then that means more capacity and fewer times people want to see a doctor but can't. Similar arguments apply for the same number of patients and a greater level of care. If it's more patients but a worse level of care, then it becomes tricky.
But we're still not doing that, and that's a huge oversight. (Or is intentional, to protect the doctor-training to hospital-slot pipeline cartel.)
Uh... politics is almost uniformly lawyers and business people.
Also, tests are the table stakes for being a doctor (like LeetCode is for programming).
While you’re not wrong, there are far more doctors in politics at all levels (including influential fundraising) than engineers and teachers.
Insurance company profit margins are capped by law and if anything their incentives are to pay the hospitals less.
US physician salaries are astronomical compared to anywhere else in the world.
They've tried everything except "train and hire more doctors", and they're just all out of ideas aside from "erode patients' rights and lower overall quality of care".
We need more doctors now, it takes 12 years to make a doctor, and by then the boomer cohorts' aging and medical needs will have peaked.
Finally, even if we could do that, the top of the funnel candidate is substantially weaker with lower test scores and higher need for remedial classes. And for the good candidates, the ROI of medical school is not as good as it once was.
Just saying "it's really hard so we won't do it" isn't exactly an option when it comes to providing healthcare. :/
1. I have health insurance
2. The point of insurance is they're supposed to pay for shit
3. You figure out how to get them to pay for shit, sign an agreement that removes me of any patient responsibility of the balance bill, and assure me in writing that I will owe $0 no matter what
Then you can record me.
Nit: that is a real efficiency gain. Seeing more patients sounds better on the face of it.
And the privacy/informed consent concerns here are silly, they apply to any of your charted data... and if you're going to any office that doesn't use the latest technology, your patient information is probably being sent between offices over fax anyway.
In that case, I’m paying them to engage with and observe me. Not to identify the correct treatment plan based on a variety of different data points (tests, my history, family history, research, etc)
And even in psychotherapy I have no problem with a LLM being used to compile notes after the session. Just don’t want it present in the session and used for analyzing it.
(My therapist asks me almost once a month if I’d mind. I thought it was because my notetaker is entering the Zoom meeting, but last week I called him out cause I was almost certain I disabled it. Curious if he’ll ask again.)
It's fascinating how this translates to the idea that in the USA this should mean "more time with patients", but in reality it also means "more patients", and is somehow bad because there is a monetary drive.
So if AI scribes mean "less double booking" then that's kind of a win/win. Less patient time is wasted. Doctors can make more money by seeing more people on a given day. Seems fair.
So in your example, they'll continue to double-book, and reduce the total time spent with each patient (since they can be more efficient with each patient) and book more patients per day.
This is probably not the reassurance anyone wanted to hear if they were worried about crap transcriptions leading to crap care.
This is my absolute least favorite category of AI innovations: people patting themselves on the back for becoming more efficient in their inefficiency.
My wife is a physician and, when permitted by patients, uses one of these tools. It's been an enormous time-saver for her. She works a 32-hour week, meaning 32 hours of seeing patients. Before these tools, she was regularly spending an extra 8 to 16 hours, i.e. up to two full work days, writing notes and sending messages. That time has been more than cut in half. She would never give up the tool if given the choice.
According to her, it is reasonably accurate, but all notes must be manually reviewed (not just as in her organization requires that, but also as in if she didn't, it would be obvious due to its mistakes). The biggest issue is with things like names and medications, stuff that isn't present in ordinary English, as well as mishearing the results of diagnostic tests, numbers, etc.
It's rare for patients to refuse it.
Documentation errors have always been an issue. They were when there were paper charts, or human transcriptionists, or when manually typing into the EMR, or when using speech recognition (which is AI/ML!) to do the typing for you.
Not all e-scribes use LLMs, but most of them do rely on ambient audio recordings for speech recognition, which nowadays runs entirely locally. That text then needs to be processed into your clinical documentation, and there are tons of ways to do that (including LLM processing).
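As an illustration of that split: the transcription half really can run entirely on-device with an open model like Whisper, while the note-drafting step below is a stub standing in for whatever template, fine-tuned model, or LLM a given product uses.

```python
import whisper  # pip install openai-whisper; inference runs locally

def draft_note(transcript: str) -> str:
    # Stub for the transcript-to-documentation step, which is where
    # e-scribe products actually differ (templates, small models, LLMs).
    return f"Auto-draft, pending physician review:\n{transcript}"

model = whisper.load_model("base.en")            # on-device ASR model
result = model.transcribe("exam_room_audio.wav") # hypothetical recording
print(draft_note(result["text"]))
```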
The author has obviously never talked to clinicians or hospital administrators about the challenges of maintaining clinical documentation, and knows little to nothing about the reality of software that runs in clinical contexts.
So that means if I try to make an appt, I'll have an easier time getting one? Sounds good, I guess.
Get help if you need it. Having periods of depression on your medical record doesn’t make your life more difficult, unless maybe you’re trying to be a spy or an astronaut or something.
So your statement is factually incorrect: in some common professions, your medical record can make your life significantly more difficult.
> "Here is a real concern about implementation" → "Therefore you should refuse entirely"
This skips the middle step of "therefore we should implement it well."
I'm not convinced that we should be allowing doctors to record patient visits at this stage yet, but I'm really not convinced by these points, which largely don't hold up under closer examination.
A few that stuck out:
"Privacy" - Labs are routinely sent to third-party companies, and we don't do informed consent for that. The third-party argument isn't unique to recording.
"False promise of efficiency" - This doesn't really have anything to do with patients at all. It's a criticism of medical office management, not of physician-patient interactions. Telling patients to refuse a tool because management might exploit the productivity gains is asking patients to fight a labor battle on the provider's behalf.
"Consent can't be revoked mid-visit" - Consent typically can't be revoked in the middle of an appendectomy, or halfway through administering a vaccine either. Practical irrevocability is a normal feature of informed consent, not a special problem unique to recording. Proper consent processes in medical offices are a broader issue than consent about voice recordings specifically. Had the authors made the point that providers are being asked to obtain consent for tools whose technical implementation and privacy risks fall outside the provider's own domain knowledge — that would be a stronger argument. But that isn't quite the point they made, and their current framing doesn't wholly convince.
Tech-naïve people think that we can build super duper encryption systems.
The more jaded amongst us know that people can get sloppy or complacent, it's rare to see a regulatory system that truly incentivises good practice, data breaches will happen eventually, and no-one will be held accountable.
This is a big one in recent memory: https://www.theguardian.com/uk-news/2020/jun/10/babylon-heal...
Labs are real businesses that do real things, and would have actual impact for a breach. Meanwhile any idiot can vibe-code a thin shim between a microphone and ChatGPT in a weekend, promise they're HIPAA-compliant, and start selling. Medical professionals have no obligation to do any diligence, and there's no reason for them to not just buy whoever-is-cheapest. They're not even close to the same thing.
Even insurance denials for pre-existing conditions could return in the US.
Don't let systems record what they don't need. They aren't your friend.
HIPAA has laughably vague rules. It's not protecting much, and you probably have better protection through tort law wrt your private information.
"to whom may be concerned."
[Doctor Stan dinghere, as a patient i have no trust or confidence regarding the security and integrity of my personal information in regards to AI scribing.
for this reason i will scribe for you, as that is the most accurate account of what i intend to communicate with you.
i will refrain from verbal communication and will provide on the spot written communication with respect to health care interaction. ]
I really don't care if my recording becomes training data.
I would rather be spoken to like I'm not an idiot. Use technical terms please. I want precision.
Calling the US healthcare system underfunded might be the most wild part of the whole thing. We spend 5.3 trillion dollars a year. That's 17% of the entire economy.
The argument that a new vendor's security is probably no worse than others' misses the point: by opting in, there is one more database/vendor/server where sensitive data about you resides, and which eventually will get hacked. It's usually a question not of whether, but when.
For instance, in the UK, on this very day the news reported that half a million British people's medical data had been offered for sale on Alibaba, the "Chinese eBay". Trivial security advice is to "reduce the attack surface", i.e. to reduce the chances of getting hit by reducing one's presence in places where personal data is concentrated (and which thus make attractive targets for hackers).
For example, when the German healthcare system launched its central electronic patient record, I opted out. One more system that, once hacked, won't have anything on me stored in it.
I'll be sure to say a prayer at your funeral when you die of an unknown drug interaction because the emergency department of the random city you happen to be traveling through when you get in a car accident has no knowledge of your medical history.
I don't think people are good at estimating tail risks, let alone the 2nd order effects of them. If you opt out of the AI transcription, do you think the doc will spend a bunch of time doing it by hand for free? No, you'll just have a worse record.
In my case it was something very not sensitive, removing a benign tumor in a finger, which I have no problem telling the whole world about (I was awake for the surgery and got to watch; it was an incredibly fascinating experience that I want to write more about some day).
But I can imagine it would feel much more invasive if the subject were more sensitive.
That is far from correct, and the main reason I would oppose this is that the AI might incorrectly record something in the transcript that completely derails my diagnosis and treatment.
There's a big difference between:
"I have had nausea for the past three days"
and
"I have not had nausea for the past three days"
And I'm being generous with my example.
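By the usual transcription metrics those two sentences are almost identical, which is exactly the problem; a quick check:

```python
import difflib

a = "I have had nausea for the past three days".split()
b = "I have not had nausea for the past three days".split()

# Word-level diff: one inserted word out of ten.
ops = difflib.SequenceMatcher(None, a, b).get_opcodes()
edits = sum(1 for tag, *_ in ops if tag != "equal")
print(f"{edits} word edit out of {len(b)} -> ~{edits / len(b):.0%} word error rate,")
print("yet the clinical meaning is the exact opposite")
```

A system scored on word error rate alone treats a dropped or inserted negation as a rounding error.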
There are two separate things here:
1. AI-generated charting.
2. The existence of a reliable record of the visit.
I am skeptical of the first in some cases (i.e. bias), but strongly in favor of the second.
My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.
This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.
That is a care quality problem, not just a convenience problem.
The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.
I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.
The next year, during my annual checkup, I gave my doctor a load of crap, telling her to record nothing I say unless I explicitly tell her to. She tried to defend the system, but she agreed. I'm still upset that my "file" still mentions alcoholism.
Medics often use private notes when handing over patients, where they share information that the patients themselves are not intended to see (and in many countries, not permitted to see). In particular, such records are used to share warnings if patients have been in any way "difficult".