Google removes AI health summaries
The real source of high medical costs is the entity that sets the hospital bill in the first place.
The explanation is much simpler than people want to admit, but emotionally uncomfortable: doctors and hospitals are paid more than the free market would otherwise justify. We hesitate to say this because they save lives, and we instinctively conflate moral worth with economic compensation. But markets don’t work that way.
Economics does not reward people based on what they “deserve.” It rewards scarcity. And physician labor is artificially scarce.
The supply of doctors is deliberately constrained. We are not operating in a free market here. Entry into the profession is made far more restrictive than is strictly necessary, not purely for safety, but to protect incumbents. This is classic supply-side restriction behavior, bordering on cartel dynamics.
See, for example: https://petrieflom.law.harvard.edu/2022/03/15/ama-scope-of-p...
We see similar behavior in law, but medicine is more insidious. Because medical practice genuinely requires guardrails to prevent harm and quackery, credentialing is non-negotiable. That necessity makes it uniquely easy to smuggle in protectionism under the banner of “safety.”
The result is predictable: restricted supply, elevated wages, and persistently high medical costs. The problem isn’t mysterious, and it isn’t insurance companies. It’s a supply bottleneck created and defended by the profession itself.
Insurance companies aren't innocent angels in this scenario either. When the hospital bill fucks them over, they don't blink twice before turning around and fucking over the patient to bail themselves out. But make no mistake: insurance is a side effect; the profession itself is the core problem.
I tell you this with certainty as a 3rd-year medical student: if physician wages go down and tuition stays as is, no one will pursue this career. Intrinsic motivation to help people evaporates as soon as you see how enshittified healthcare in the US has become.
I do agree that medical school is far too restrictive to get into (for MD schools, at least). However, if you want to make medical school easier to get into: where will all those students rotate for their clinical years? There aren't enough spots in hospitals to jam students into.
Stop taking aim at the people that sacrifice so much to help you. Take aim at the real drivers of healthcare expenditures: administrative bloat.
Where will all those students rotate for their clinical years? There aren’t enough hospital slots.
This is a policy fiction. Residency slots are capped by federal law, not by hospital capacity. The Balanced Budget Act of 1997 froze Medicare-funded residency positions, and despite modest expansions decades later, the cap remains largely intact. Teaching hospitals routinely report excess clinical volume relative to trainee supply. The bottleneck is artificial and regulatory, not logistical.
Stop taking aim at people who sacrifice so much to help you. The real cost driver is administrative bloat.
This framing collapses under scrutiny. Administrative bloat is real and well-documented, but pretending physician incentives are irrelevant requires willful blindness. Numerous studies show that U.S. physicians earn multiples of their OECD peers while delivering no commensurate advantage in outcomes. Many doctors are motivated by altruism, but many are also motivated by status, income, and professional gatekeeping—normal human incentives in a high-prestige, high-pay profession.
Further, high patient throughput is not an accident. Fee-for-service reimbursement structurally rewards volume over care quality. Seeing 20–30 patients a day is not a moral failure of individual doctors, but it does predictably lead to burnout, emotional detachment, and assembly-line medicine. Incentives shape behavior. Ignoring that is not compassion, it’s denial.
Physician reimbursement is only ~9% of national healthcare spending.
That statistic is repeatedly used as a rhetorical shield, and it shouldn’t be. Cost systems do not fail because of a single oversized line item; they fail because multiple protected constituencies simultaneously extract rents while deflecting blame. Administrative overhead, defensive medicine, pharmaceutical pricing, hospital consolidation, reimbursement incentives, and physician compensation are jointly optimized for revenue, not outcomes.
Nine percent of a multi-trillion-dollar system is not trivial. More importantly, physician compensation is not isolated—it drives downstream costs through referral patterns, test ordering, procedure rates, and resistance to scope-of-practice reform. Treating physicians as a sacred class exempt from economic critique is precisely how you end up with a system that is unaffordable, unaccountable, and structurally resistant to reform.
If the argument is “9% is too small to question,” then by that logic no component is ever large enough to examine in isolation, which is how dysfunctional systems persist indefinitely. Real reform requires abandoning moralized narratives and admitting the obvious: healthcare costs are the product of aligned incentives across many actors, and physicians are not magically outside that system simply because the story is uncomfortable.
We have yet to see major, nationwide physician strikes. If that is what it will take for society to realize the value provided, so be it. Without physicians, there is very little healing going on. You can't say the same for so many other roles.
This argument doesn’t make sense to me. Insurance companies are structurally incentivized to minimize payouts across the board. They want hospital bills lower, physician compensation lower, and patient payouts as small as possible. If insurers had unilateral power, total medical spending would collapse, not explode.
They absolutely do not.
Their profit is effectively capped by law and regulation: the ACA's medical loss ratio rules require insurers to spend at least 80-85% of premiums on care, leaving at most 15-20% for administration and profit combined. That means if the insurer wants more absolute dollars of profit, prices must go up.
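To make that arithmetic concrete (purely illustrative numbers, using the commenter's 15% figure as an assumed cap on the non-care share of premiums): if profit is bounded by a fixed percentage of premium revenue, absolute profit can only grow when premiums grow.

```python
# Illustrative only: a fixed cap on profit as a share of premium revenue
# means the maximum absolute profit scales linearly with premiums.
CAP = 0.15  # assumed cap on the non-care share of premiums

def max_profit(premium_revenue):
    """Largest possible profit under a percentage-of-revenue cap."""
    return premium_revenue * CAP

for premiums in (1_000, 2_000, 4_000):  # premium revenue, $ millions
    print(f"premiums ${premiums}M -> max profit ${max_profit(premiums):.0f}M")
```

Under such a cap, an insurer that wants to double its absolute profit has no lever other than doubling the premiums flowing through it.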
It also means that if they push prices down they necessarily have less funding to administer those plans, even if the needs are the same (same number of belly buttons, same patient demographics and state of health).
As you note, there are also other variables, but this claim: "Insurance companies are structurally incentivized to minimize payouts across the board" is absolutely and categorically not so.
E.g., your argument would predict that healthcare price inflation is less severe in areas with less insurance coverage: dental work (which is less often covered, as far as I can tell), (vanity) plastic surgery, or we can even check healthcare price inflation for vet care for pets.
Pets typically don't have medical insurance, and any insurance that does exist there has a radically different regulatory regime than for humans.
Since 1980 for the US:
CPI has gone up by 3.16% on average per year (x4.17 in total). Human healthcare costs by 4.9% per year (x8.96 in total). And pet healthcare costs by 6.49% (or x17.87 in total).
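Those multipliers are just compound growth. Assuming a 1980-2026 window (46 years, my assumption; the comment doesn't state an end year), a quick sketch roughly reproduces the quoted totals:

```python
# Compound the quoted average annual rates over an assumed 46-year window
# (1980-2026) and compare with the quoted cumulative multipliers.
YEARS = 46

series = [
    ("CPI", 0.0316, 4.17),
    ("Human healthcare", 0.0490, 8.96),
    ("Pet healthcare", 0.0649, 17.87),
]

for name, annual_rate, quoted_total in series:
    total = (1 + annual_rate) ** YEARS
    print(f"{name}: {total:.2f}x computed vs {quoted_total}x quoted")
```

The computed totals land within a couple of percent of the quoted ones, so the per-year and cumulative figures are mutually consistent.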
It's similar to how the AI data-center buildout race is raising prices for consumer electronics in 2026 and beyond. The suppliers have no incentive to sell lower-cost products to a tiny niche.
But dental and vanity cosmetic surgery have gone up by that metric too. Dental is less covered by insurance for most people, and vanity cosmetic surgery is covered by insurance for almost no one.
Vet care for pets has gone up a lot more than healthcare for humans.
Profit isn't even a big part of the overall revenue.
Mandate at least decent minimal coverage standards
I assume you want higher coverage standards than what currently exists? Independently of whether that would be the morally right thing to do (or not), it would definitely increase prices.
and large insurance pools that must span age groups and risk groups.
Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do when you don't have good models (or when regulation forces you).
Why does your insurance need a pool? An actuary can tell you the risk, and you can price according to that. No need for any pooling. Pooling is just something you do when you don't have good models (or when regulation forces you).
Wuh? The more diverse the pool, the lower the risk. Your way of thinking will very quickly lead to "LiveCheap: the health insurance for fit, healthy under 30s only" for dollars a month, and "SucksToBeYou: the health insurance for the geriatric and chronically disabled" for the low low cost of "everything you have to give".
There's insurance which allows you to convert an uncertain danger into a known payment. And then there's welfare and redistribution.
By all means, please run some means testing and give the poor and sick or disabled extra money. Or even just outright pay their insurance premiums.
But please finance that from general taxation, which is already progressive. Instead of effectively slapping an arbitrary tax on healthy people, whether they be rich or poor. And please don't give rich people extra stealth welfare, just because they are in less than ideal health, either.
Just charge people insurance premiums in line with their expected health outcomes, and help poor people with the premiums using funds from general taxation. (Where poor here means: take their income and make an adjustment for disability etc.)
We _want_ the guy who loses 5kg and gives up smoking to get lower insurance premiums. That's how you set incentives right.
The more diverse the pool, the lower the risk.
No. The diversification comes from the insurance company running lots of uncorrelated contracts at the same time and having a big balance sheet. For that, it doesn't matter whether it's a pool of similar insurance contracts, or a mix of your insurance contract, bets on the price of rice in China, and playing the bookie on some sports outcomes, etc. In fact, the more diversified they are, the better (in principle).
But that diversification is completely independent of the pricing of your individual insurance contract.
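A quick simulation illustrates the law-of-large-numbers point (claim probability and payout are hypothetical, not real actuarial figures): the fair per-contract premium is the same at any pool size, while the insurer's relative payout volatility shrinks roughly as 1/sqrt(n).

```python
import random

random.seed(42)

P_CLAIM = 0.05     # assumed annual claim probability per contract
PAYOUT = 100_000   # assumed payout per claim
TRIALS = 1_000     # simulated years

def relative_spread(n_contracts):
    """Std-dev of total payouts divided by expected total payouts."""
    expected = n_contracts * P_CLAIM * PAYOUT
    totals = []
    for _ in range(TRIALS):
        claims = sum(random.random() < P_CLAIM for _ in range(n_contracts))
        totals.append(claims * PAYOUT)
    mean = sum(totals) / TRIALS
    var = sum((t - mean) ** 2 for t in totals) / TRIALS
    return var ** 0.5 / expected

# The fair premium per contract is P_CLAIM * PAYOUT regardless of pool size;
# only the insurer's relative risk changes as the book grows.
for n in (10, 100, 2_500):
    print(f"{n:>5} contracts: relative spread {relative_spread(n):.3f}")
```

The relative spread drops by roughly a factor of sqrt(10) for each tenfold increase in contracts, which is the balance-sheet diversification described above, and it happens without ever changing the price of any individual contract.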
Have a look at Warren Buffett's "March Madness" challenge, where he famously offered a billion dollars to anyone who could correctly predict the outcome of every game in the NCAA tournament bracket. Warren Buffett ain't no fool: he doesn't need a pool; he can price the risk of someone winning this one-off challenge.
More generally, have a look at Prize indemnity insurance https://en.wikipedia.org/wiki/Prize_indemnity_insurance which helps insure many one-off events.
In any case, what you are saying is only true if you buy your health insurance second to second on the spot market.
Insurance companies are more than happy to enter long running contracts, where you both agree today on (the algorithm for) the premiums for the next twenty years or even until the rest of your life. That's pretty common with life insurance and disability insurance already.
The above already exists, but if you allow some speculation: you could even envision people buying insurance for their kids before conceiving them. That way you don't have to worry about pre-existing conditions.
(Well, if the parents already have heritable conditions that would make the kids more likely to have expensive medical problems, those would push up their premiums. But then: perhaps these people should think twice about burdening a potential kid with these issues.
Compare how in Cyprus, where thalassemia is prevalent, even the Orthodox church demands you get screened before it will marry you.)
If you really want specialised in-kind welfare, you can get people a voucher for the catastrophic version of 'unconceived baby insurance'.
Basically, you can buy insurance against insurance premiums being expensive.
large insurance pools that must span age groups and risk groups.
What you describe (community rating) has been tried, and it works. But it requires that a lot of young, healthy people enroll, while seniors receive most of the care. With the inverted demographic pyramid most Western economies have, this is a ticking time bomb, so costs will continue to rise.
Mandate at least decent minimal coverage standards
I think a better solution is to let the government use its leverage to negotiate prices with pharmaceutical companies, as Canada does; it greatly reduces rent-seeking behavior by pharmaceutical companies while allowing them to continue earning profits and innovating. (I understand a lot of the complaints against big pharma, but it is actually one of the few sectors of the economy that doesn't park its wealth and actually uses it for substantive R&D, despite what the media will tell you, and countless lives have been saved because of pharma company profits.)
Essentially the gist of what I'm saying, as someone who has been involved with and studied this industry for the better part of five years, is that it's much more complex than what meets the eye.
Even SpaceX's vaunted "disruption" is just clever resource allocation; despite their iterative approach to building rockets being truly novel they're not market disruptors in the same way SV usually talks about them. And their approach has some very obvious flaws relative to more traditional companies like BO, which as of now has a lower failure-to-success ratio.
I don't think you'll find many providers clamoring for an AI-assisted app that hallucinates nonexistent diseases; plenty of those are already out there, drawing the ire of many physicians. Where the industry needs to innovate is the insurance space, which is responsible for the majority of costs. Its captive market and cartel behavior make this a policy and government issue, not something that can be solved with rote Silicon Valley-style startup disruption; I'd predict that would quickly turn into dysfunction and eventual failure.
Enshittification has done a lot of damage to the concept of "disrupting" markets. It's DOA in risk-averse fields.
If you don't have a medical issue and an AI system tells you this then you save yourself a trip to a specialist and the associated diagnostic tests. Again, this saves a bit of money but is nowhere near the bulk of medical expenses. And it has to be able to do this without any diagnostic testing, just based off of your reported symptoms.
Even if AI diagnosis works flawlessly we save a bit of money but absolutely do not revolutionize the cost of the industry.
It'll be great at first while in development. But when profits need to be generated, seeing a specialist will get harder. There will be less wiggle room. I predict we will see more GP utilization.
It's the self-driving cars debate all over again.
Before medical school, I was not so sure of the quality of your average doc. Now having spent a year in clinical practice across various settings, I am extremely reassured. I can say with certainty that a US trained doctor is miles ahead of AI right now. The system sucks really bad though and forces physicians to churn patients, giving the impression that physicians don't pay attention/don't care/etc.
If we could get healthcare to that level, it would be great.
For a less extreme example: Wal-Mart and Amazon have made plenty of people very rich, and they charge customers for their goods; but their entrance into these markets has arguably brought down prices.
And why do customers come back to shop there?
Customers continue shopping there because human beings are typically incapable of accepting a short-term loss (higher price) for a long-term gain (product lasts more than three uses).
We know that from observing evidence such as how much the government pays out in welfare to Wal-Mart employees.
That's a weird metric. If tomorrow Wal-Mart laid off all employees and replaced them with robots, they would surely be worse off, but by your metric Wal-Mart would look less evil?
Customers continue shopping there because human beings are typically incapable of accepting a short-term loss (higher price) for a long-term gain (product lasts more than three uses).
Groceries typically only last one use.
Likewise, I would not use my flippant 3 times metric regarding durability to cover the quality of produce.
You have to look at the counterfactual of what these people would do, if Wal-Mart weren't around. You seem to implicitly assume that they'd be getting higher paying jobs somewhere else (so they wouldn't have to rely on welfare)? If so, what's stopping those people from switching to these better jobs right now, even while Wal-Mart is still around?
And sure, let's disregard how many times you can eat your groceries. That was a cheap shot. However I think quality vs price trade-off is something customers have to make for themselves anyway. Who am I to judge their choices?
It should be a regulated utility like electricity or railroads, we should have a public alternative like the post office is to UPS, or it should be nationalized.
I agree that electricity and railroads should be regulated like Google Search.
It's really weird that snail mail in the US is a government monopoly, when even social-democratic Germany managed to privatise theirs.
The situation gets more dire when you consider their browser monopoly.
Don't a lot of people in the US use iPhones? They don't ship with Chrome as the default browser, do they?
(And yes, Safari's WebKit and Chrome's Blink share the same open-source ancestry; Blink was forked from WebKit. But you can hardly call building on related open-source projects a 'monopoly'. Literally anyone can fork them.)
There's also plenty of other browsers available.
A public mail service is required by our constitution. It's cheaper than the private options and often the only option for many rural areas. It's not a monopoly.
A public mail service is required by our constitution.
Where does it say so in your constitution? All I can find is the postal clause which Wikipedia summarises as follows, but whose full text isn't much longer:
Article I, Section 8, Clause 7, of the United States Constitution, the Postal Clause, authorizes the establishment of "post offices and post roads"[1] by the country's legislature, the Congress.
https://en.wikipedia.org/wiki/Postal_Clause
The Postal Clause certainly allows the government to run a public postal service, but I don't see how the constitution _requires_ it. It doesn't even require the federal government to regulate postal services, it merely allows it.
Perhaps I missed something?
It's cheaper than the private options and often the only option for many rural areas.
If you want to subsidise rural areas, I would suggest doing so openly, transparently, and from general taxation. At least general taxation is progressive etc. Instead of just making urban folks pay more for their mail, whether they be rich or poor.
I would also suggest only subsidising poor rural areas. Rich rural areas don't need our help.
It's not a monopoly.
Compare and contrast what USPS has to say https://about.usps.com/universal-postal-service/universal-se...
Google's founders can buy all the yachts they could possibly eat, yet Google Searches are offered for free.
Google searches cost many billions of dollars: your confusion is because the customer isn’t the person searching but the advertisers paying to influence them. Healthcare can’t work like that not just because the real costs are both much higher and resistant to economies of scale but, critically, there aren’t people with deep pockets lining up to pay for you to be healthy. That’s why every other developed country sees better results for less money: keeping people healthy is a social good, and political forces work for that better than raw economic incentives.
And many single payer systems around the world only appear to work as well as they do because the US effectively subsidizes medical costs through its own out of control prices.
Yeah, because we saw what a great job the tech bros did making government more efficient.
Lack of providers isn’t what’s driving up costs.
9% might also seem pretty big to me if it's out of all spending and doesn't include other provider compensation? What if overall healthcare costs went down, but physician compensation stayed the same? Would that then be a problem because it was an increased proportion of the total costs — fat left to be trimmed, so to speak?
There are many problems that don't have anything to do with providers per se, but I also don't think you can glean much by extrapolating to more of the same, especially compensation per se.
We don’t have to extrapolate from physician compensation though. We know that providers per capita have increased, but costs have continued to skyrocket. Therefore a lack of providers is not the immediate cause of the increase.
In addition to increasing the number of providers, the scope of practice for non-physician providers has almost universally increased.
All of this doesn’t prove that increasing the number of physicians wouldn’t lower costs some amount, but it does show that the increases over the last 20-30 years requires some other explanation.
https://www.fda.gov/medical-devices/digital-health-center-ex...
How do you suggest to deal with Gemini?
Don't. I do not ask my mechanic for medical advice, why would I ask a random output machine?
This is really not that far off from the argument that "well, people make mistakes a lot, too, so really, LLMs are just like people, and they're probably conscious too!"
Yes, doctors make mistakes. Yes, some doctors make a lot of mistakes. Yes, some patients get misdiagnosed a bunch (because they have something unusual, or because they are a member of a group—like women, people of color, overweight people, or some combination—that American doctors have a tendency to disbelieve).
None of that means that it's a good idea to replace those human doctors with LLMs that can make up brand-new diseases that don't exist occasionally.
- Bihar teen dies after ‘fake doctor’ conducts surgery using YouTube tutorial: Report - https://www.hindustantimes.com/india-news/bihar-teen-dies-af...
- Surgery performed while watching YouTube video leaves woman dead - https://www.tribuneindia.com/news/uttar-pradesh/surgery-perf...
- Woman dies after quack delivers her baby while watching YouTube videos - https://www.thehindu.com/news/national/bihar/in-bihar-woman-...
Educating a user about their illness and treatment is a legitimate use case for AI, but acting on its advice to treat yourself or self-medicate would be plain stupidity. (Thankfully, self-medicating isn't that easy, because most medications require a prescription. However, so-called "alternative" medicines are often a grey area, even with regulations (for example, in India).)
This "random output machine" is already in large use in medicine so why exactly not?
Where does "large use" of LLMs in medicine exist? I'd like to stay far away from those places.
I hope you're not referring to machine learning in general, as there are worlds of differences between LLMs and other "classical" ML use cases.
But your focus on the existence of this course as your only piece of evidence is evidence enough for me.
This "random output machine" is already in large use in medicine
By doctors. It's like handling dangerous chemicals. If you know what you're doing you get some good results, otherwise you just melt your face off.
Should I trust the young doctor fresh out of the Uni
You trust the process that got the doctor there. The knowledge they absorbed, the checks they passed. The doctor doesn't operate in a vacuum, there's a structure in place to validate critical decisions. Anyway you won't blindly trust one young doctor, if it's important you get a second opinion from another qualified doctor.
In the fields I know a lot about, LLMs fail spectacularly so, so often. Having that experience and knowing how badly they fail, I have no reason to trust them in any critical field where I cannot personally verify the output. A medical AI could enhance a trained doctor, or give false confidence to an inexperienced one, but on its own it's just dangerous.
In principle we can just let anyone use LLM for medical advice provided that they should know LLMs are not reliable. But LLMs are engineered to sound reliable, and people often just believe its output. And cases showed that this can have severe consequences...
- Yes. All doctors advice should be taken cautiously, and every doctor recommends you get a second opinion for that exact reason.
Why are these products being put out there for these kinds of things with no attempt to quantify accuracy?
In many areas AI has become this toy that we use because it looks real enough.
It sometimes works for some things in math and science because we test its output, but overall you don't go to Gemini and have it say "there's an 80% chance this is correct." At least then you could evaluate that claim.
There's a kind of task LLMs aren't well suited to because there's no intrinsic empirical verifiability, for lack of a better way of putting it.
"Whether we like it or not" is LLM inevitabilism.
>Argument By Adding -ism To The End Of A Word
Counterpoint: LLMs are inevitable. Can't put that genie back in the bottle, no matter how much the powers that be may wish. Such is the nature of (technological) genies.
The only way to 'stop' LLMs is to invent something better.
Realistically, sign a EULA waiving your rights because their AI confabulates medical advice
How do you suggest to deal with Gemini?
With robust fines based on a percentage of revenue whenever it breaks the law, would be my preference. I'm not here to attempt solutions to Google's self-inflicted business-model challenges.
I have the capacity to know when it is wrong, but I teach this at university level. What worries me are the people on the starting end of the Dunning-Kruger curve who take that wrong advice and start "fixing" things, in spaces where this might become a danger to human life.
No information is superior to wrong information presented in a convincing way.
I'm pretty good at reading the original sources. But what I don't have in a lot of cases is a gut that tells me what's available. I'll search for some vague idea (like, "someone must have done this before") with the wrong jargon and unclear explanation. And the AI will... sort of figure it out and point me at a bunch of people talking about exactly the idea I just had.
Now, sometimes they're loons and the idea is wrong, but the search will tell me who the players are, what jargon they're using to talk about it, what the relevant controversies around the ideas are, etc... And I can take it from there. But without the AI it's actually a long road between "I bet this exists" and "Here's someone who did it right already".
In this case, all that matters is that the outputs aren't complete hallucination. Once you know the magic jargon, everything opens up easily with traditional search.
I’ve had it ‘dream’ up entire fake products, APIs, and even libraries before.
A couple hours later I decided to ask an LLM if it could tell me. It quickly answered, giving the same reason that I had guessed in my HN comment.
I then clicked the two links it cited as sources. One was completely irrelevant. The other was a link to my HN comment.
Now that AI summaries exist, I have to scroll past half a page of results and nonsense about a Turkish oil company before I find the item I'm looking for.
I hate it. It's such a minor inconvenience, but it's just so annoying. Like a sore tooth.
Well, some redditor had posted a comparison of a much later book in the series, and drawn all sorts of parallels and foreshadowing and references between this quite early book I was looking for and the much later one. It was an interesting post so it had been very popular.
The AI summary completely confused the two books because of this single reddit post, so the summary I got was hopelessly poisoned with plot points and characters that wouldn't show up until nearly the conclusion. It simply couldn't tell which book was which. It wasn't quite as ridiculous as having, say, Anakin Skywalker face Kylo Ren in a lightsaber duel, but it was definitely along those same lines of confusion.
Fortunately, I finished the later book recently enough to remember it, but it was like reading a fever dream.
Sadly, the resource didn't actually exist. It would have been perfect if it did, though!
At some point, when asked how many Kurdish people live in Poland, Google's AI would say several million, which was true, but only in a fantasy world conjured by a certain LARP group who put a wiki on fandom.com.
And are you sure it's giving you good info? "AI" is famously subject to hallucinations, so you may not be getting the "good info" you think you're getting. Be careful with "AI", it's not an all-seeing-all-knowing infallible oracle.
-Med student
in a way, all overconfident guessing is a better match for the result than hallucination or fabrication would be
"confabulation", though, seems perfect:
“Confabulation is distinguished from lying as there is no intent to deceive and the person is unaware the information is false. Although individuals can present blatantly false information, confabulation can also seem to be coherent, internally consistent, and relatively normal.”
https://en.wikipedia.org/wiki/Confabulation
* insofar as “guess” conveys an attempt to be probably in the zone
I wonder how accurate it is.
One rule of thumb I heard: if you make it to 80, you have a 50% chance of making it to 90. If you make it to 90, you have a 50% chance of making it to 95. From 95 to 97.5, again a 50% chance. That's for the general population in a first-world country, though, not any individual.
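Taking those numbers at face value (they're a remembered rule of thumb, not actuarial data), the conditional probabilities chain by simple multiplication:

```python
# Chain the quoted 50% conditional survival steps, starting from age 80.
steps = [90, 95, 97.5]  # each step quoted as a coin flip
p = 1.0
for age in steps:
    p *= 0.5
    print(f"P(reach {age} | alive at 80) = {p:.3f}")
# Under this rule, an 80-year-old reaches 97.5 with probability 0.125.
```

So the halving rule implies an 80-year-old has a 25% chance of seeing 95 and a 12.5% chance of seeing 97.5.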
Your genome is very complex and we don’t have a model of how every gene interacts with every other and how they’re affected by your environment. Geneticists are working on it, but it’s not here yet.
And remember that 23andMe, Ancestry, and most other services only sequence around 1% of your genome.
Part of genetics is pattern matching, and last time I checked I still couldn't find a model that can correctly solve hard Sudokus (well, unless you pick a coding model that writes a Sudoku solver; maybe some of them approach genetics by writing correct algorithms), which is a trivial job for a program designed to do it.
I have a whole genome and nothing Google has built has been able to do anything useful with it, medically speaking. I could use DeepVariant to re-map all the raw reads, it would only slightly increase the accuracy of the estimate of my genome sequence. When I met with genetic counselors, they analyzed my genome and told me I had no known markers for any disease (and they also told me they Google all the unique variants that show up in the report).
(for what it's worth, I literally went to work at Google to improve their medical/health/genomics research, and after working on it a few years I concluded that the entire genomics field is about 90% fantasy. If you want actionable data, there are a small number of well-regulated tests that can help in a limited set of circumstances, but those aren't whole genome tests).
So interesting to see the vastly different approaches to AI safety from all the frontier labs.
Aren't they both searching various online sources for relevant information and feeding that into the LLM?
-Med student
Also try "health benefits of circumcision"...
Going off topic: the "health benefits of circumcision" bogus claim has existed for decades. The search engines return bogus information because the topic is mostly relevant for its social and religious implications.
I have personal experience with the topic, and the discussion resembles politics: most people don't care and stay quiet, while a very aggressive group sells it as a panacea.
Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.
Notice how little this sentence says about whether anything is any good.
Say you've had a few days of -3°C; today it goes up to +5°C, and the "AI Weather report" tells you it's going to be a chilly day or something.
I never saw this feature provide any useful information whatsoever.
There are a ton of misses. Especially on imaging. LLMs are not ready for consumer-facing health information yet. My guess is ~ 3-5 years. Right now, I see systems implementing note writing with LLMs, which is hit or miss (but will rapidly improve). Physicians want 1:1 customization. Have someone sit with them and talk through how they like their notes/set it up so the LLMs produce notes like that. Notes need to be customized at the physician level.
Also, the electronic health records any AI is trained on are loaded to the brim with borderline-fraudulent copy-paste notes. That's going to have to be reconciled. Do we have the LLMs add "Cranial Nerves II-X intact" even though the physician did not actually assess that? The physician would have documented that... No? But then you open the physician up to liability, which is a no-go for adopting software.
Building a SaaS MVP that's 80% of the way there? Sure. But medicine is not an MVP you cram into a pitch deck for a VC. 80% of the way there does not cut it here, especially if we're talking about consumer facing applications. Remember, the average American reads at a 6th grade reading level. Pause and let that sink in. You're probably surrounded by college educated people like yourself. It was a big shock when I started seeing patients, even though I am the first in my family to go to college. Any consumer-facing health AI tool needs to be bulletproof!!
Big Tech will not deliver us a healthcare utopia. Do not buy into their propaganda. They are leveraging post-pandemic increases in mistrust towards physicians as a springboard for half-baked solutions. Want to make $$$ doing the same thing? Do it in a different industry.
‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk: https://www.theguardian.com/technology/2026/jan/11/google-ai...
Oh, and also, the Ars article itself still contains the word "Some" (in my A/B test). It's the headline on HN that left it out. So your complaint is entirely invalid: "Google removes some AI health summaries after investigation finds “dangerous” flaws"
But alas, infinite growth or nothing is the name of the game now.
[1] Well, not entirely thanks to people investigating.
It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.