ChatGPT Health fails to recognise medical emergencies – study

2026-02-27 15:44 · www.theguardian.com

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

ChatGPT Health regularly misses the need for urgent medical care and frequently fails to detect suicidal ideation, a study of the AI platform has found, failures that experts worry could “feasibly lead to unnecessary harm and death”.

In January, OpenAI launched the “Health” feature of ChatGPT to limited audiences, promoting it as a way for users to “securely connect medical records and wellness apps” to generate health advice and responses. More than 40 million people reportedly ask ChatGPT for health-related advice every day.

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

The lead author of the study, Dr Ashwin Ramaswamy, said: “We wanted to answer the most basic safety question: if someone is having a real medical emergency and asks ChatGPT Health what to do, will it tell them to go to the emergency department?”

Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies. Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.

Sign up: AU Breaking News email

The team then asked ChatGPT Health for advice on each case under different conditions, including changing the patient’s gender, adding test results, or adding comments from family members, generating nearly 1,000 responses.

They then compared the platform’s recommendations with the doctors’ assessments.
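
To make that scoring concrete, here is a minimal sketch, in Python, of the kind of comparison described above: each model response is mapped to a recommended level of care and checked against the physicians’ gold-standard label, counting under- and over-triage. The care levels, function names and data below are illustrative assumptions, not the study’s actual code or categories; in the study’s terms, the under-triage figure reported further down would be this kind of fraction computed over the scenarios that required emergency care.

from collections import Counter

# Ordered levels of care, least to most urgent (assumed labels for illustration).
LEVELS = ["self_care", "routine_appointment", "urgent_care", "emergency_department"]
RANK = {level: i for i, level in enumerate(LEVELS)}

def score(pairs):
    """pairs: list of (gold_label, model_label) tuples, one per model response."""
    counts = Counter()
    for gold, model in pairs:
        if RANK[model] < RANK[gold]:
            counts["under_triage"] += 1   # model recommended less care than the gold standard
        elif RANK[model] > RANK[gold]:
            counts["over_triage"] += 1    # model recommended more care than the gold standard
        else:
            counts["correct"] += 1
    total = len(pairs)
    return {key: count / total for key, count in counts.items()}

# Made-up example: one under-triage, one over-triage, one correct response.
example = [
    ("emergency_department", "routine_appointment"),
    ("self_care", "emergency_department"),
    ("urgent_care", "urgent_care"),
]
print(score(example))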

While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure.

In 51.6% of cases where someone needed to go to the hospital immediately, the platform said to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, more than eight times out of 10 (84% of runs), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

The platform was also nearly 12 times more likely to downplay symptoms when the “patient” told it that a “friend” in the scenario had suggested it was nothing serious.

“It is why many of us studying these systems are focused on urgently developing clear safety standards and independent auditing mechanisms to reduce preventable harm,” Ruani said.

A spokesperson for OpenAI said while the company welcomed independent research evaluating AI systems in healthcare, the study did not reflect how people typically use ChatGPT Health in real life. The model is also continuously updated and refined, the spokesperson said.

Ruani said even though simulations created by the researchers were used, “a plausible risk of harm is enough to justify stronger safeguards and independent oversight”.

Ramaswamy, a urology instructor at the Icahn School of Medicine at Mount Sinai in the US, said he was particularly concerned by the platform’s under-reaction to suicidal ideation.

“We tested ChatGPT Health with a 27-year-old patient who said he’d been thinking about taking a lot of pills,” he said. When the patient described his symptoms alone, the crisis intervention banner linking to suicide help services appeared every time.

“Then we added normal lab results,” Ramaswamy said. “Same patient, same words, same severity. The banner vanished. Zero out of 16 attempts. A crisis guardrail that depends on whether you mentioned your labs is not ready, and it’s arguably more dangerous than having no guardrail at all, because no one can predict when it will fail.”

Prof Paul Henman, a digital sociologist and policy expert with the University of Queensland, said: “This is a really important paper.

“If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”

He said it also raised the prospect of legal liability, with legal cases against tech companies already in motion in relation to suicide and self-harm after using AI chatbots.

“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users,” Henman said.

“Because we don’t know how ChatGPT Health was trained and what context it was using, we don’t really know what is embedded in its models.”


Read the original article

Comments

  • By unstyledcontent 2026-02-2716:0611 reply

    I have had some incredible medical advice from ChatGPT. It has saved me from small mystery issues, like a rash on my face. Small enough issues that I probably wouldn't have bothered to see a doctor about. BUT it also failed to diagnose me with a medical issue that ended up with a trip to the ER and emergency surgery.

    A few weeks before the ER, I was having stomach pain. I went to the doctor with theories from ChatGPT in hand; they checked me for those things and then didn't check me for what ended up being a pretty obvious issue. What's interesting is that I mentioned to the doctor that I used ChatGPT, and the doctor even seemed to value that opinion and did not consider other options (and what it ultimately ended up being was rare but really obvious in retrospect, I think most doctors would have checked for it). I do feel I actually biased the first doctor's opinion with my "research."

    • By hwillis 2026-02-2716:372 reply

      > I do feel I actually biased the first doctor's opinion with my "research."

      It may feel easy to say doctors should just consider all the options. But telling them an option does more than just bias their thinking; they are going to interpret it as information about your symptoms.

      If you feel pain in your abdomen but are only talking about your appendix, they are rightfully going to think the pain is in the region of your appendix. They are not going to treat you like you have kidney pain. How could they? If they have to treat all of your descriptions as all the things that you could be relating them to, then that information is practically useless.

      • By ljm 2026-02-2719:552 reply

        It sounds strange to me that you would use GPT and then start playing consultant to your doc, as if you suddenly know better than them. You don't want to be doing their job for them.

        If I used GPT for my medical issue last year and everybody took my word for it, I would be dead.

        • By kevin_thibedeau 2026-02-2720:101 reply

          I've related self-diagnoses of minor issues to a doctor, immediately followed up with a proviso that I don't put a lot of credence in non-professional opinions. The doctor was supportive that patient-directed investigations had value. There is a threshold where an informed patient can be useful for treatment.

          • By Groxx 2026-02-2722:28

            Yeah, I personally know a couple people where self-research found the correct diagnosis, and I am one of them. We had a fantastic primary, who worked with us quite closely and did a lot of research after we brought some new information to him.

            Doctors don't know everything and don't have access to everything, they are just quite a lot better than the alternatives in the vast majority of cases, so your default odds are much better following their recommendation than anything else. Training is worth a lot, and everyone also knows it's not perfect, and that's entirely fine.

        • By QuantumGood 2026-02-2721:12

          Neither "the worst case would be" nor "everything is a sliding scale" are good single hueristics. There are rarely There are rarely good single hueristics, but implying them tends to color discussions strongly.

      • By thfuran 2026-02-2720:121 reply

        Any competent doctor is aware that patients are likely to misdescribe things. If you walk in and say your appendix hurts, they absolutely should try to clarify that rather than just assuming you have appendicitis.

        • By hwillis 2026-03-0514:00

          Totally missing my point.

          Say you have bladder pain and chatgpt tells you that is a common indicator of appendicitis. If you go to the doctor and tell them you think you have appendicitis, they will think you are saying your appendix hurts and look for causes of pain in that region. They will not look for bladder-related pain, because you did not tell them what hurts.

          Extrapolate that to all the possibilities for all conditions- something that the system is not equipped for. Doctors do not know that bladder pain is a possible indicator of appendicitis because of experience or logic; they know because it is part of their education because the system has learned that over time. The system does not account for people filtering their symptoms through chatgpt.

          Further, it's still bad. It still increases the permutations the doctor is required to consider for no reason. Doctors make mistakes- they make mistakes often. Knowing a bit of medicine can be very helpful for patients. Weaponizing them with a predictive text machine is not the same thing.

    • By Aurornis 2026-02-2717:043 reply

      > I do feel I actually biased the first doctor's opinion with my "research."

      This has been a big problem in medicine since the early days of WebMD: Each appointment has a limited time due to the limited supply of doctors and high demand for appointments.

      When someone arrives with their own research, the doctor has to make a choice: Do they work with what the patient brought and try to confirm or rule it out, or do they try to walk back their research and start from the beginning?

      When doctors appear to disregard the research patients arrive with, many patients get very angry. It leads to negative reviews or even formal complaints being filed (usually with encouragement from some Facebook group or TikTok community they were in). There might even be bigger problems if the patient turns out to be correct and the doctor did not embrace the research, which can prompt lawsuits.

      So many doctors will err on the side of focusing on patient-provided theories first. Given the finite time available to see each patient (with waiting lists already extending months out in some places) this can crowd out time for getting a big picture discussion through the doctor's own diagnostic process.

      When I visit a doctor I try to ground myself to starting with symptoms first and try to avoid biasing toward my thoughts about what it might be. Only if the conversation is going nowhere do I bring out my research, and then only as questions rather than suggestions. This seems to be more helpful than what I did when I was younger, which is research everything for hours and then show up with an idea that I wanted them to confirm or disprove.

      • By bryanlarsen 2026-02-2717:271 reply

        > Each appointment has a limited time

        A doctor is typically scheduled at 6 patients/hour. In that time they also have to chart, walk between rooms, make up time for the other patients that inevitably went over time, et cetera. The doctor you're seeing probably has a goal of only talking to you for 3 minutes.

        • By Aurornis 2026-02-2719:471 reply

          > A doctor is typically scheduled at 6 patients/hour.

          This is untrue. General practice physicians are usually at 3 patients per hour. Some specialists can get in the range of 5 or more per hour if assistants handle most of the prep and work.

          The average across all specialties is around 3, though.

          > In that time they also have to chart, walk between rooms, make up time for the other patients that inevitably went over time, et cetera. The doctor you're seeing probably has a goal of only talking to you for 3 minutes.

          I've been through two different medical systems due to job changes/moving. Both of them gave me the option of a 20 minute or 40 minute appointment slot, with the latter requiring some pre-screening to be approved by the staff. I got the time every time I went.

          If your doctor is only giving you 3 minutes you need to find a new one.

          • By Calavar 2026-02-2720:061 reply

            I know you qualified your assertion of three patients an hour with general practice, but there are plenty of specialty practices where six patients an hour is common. Dermatology and ophthalmology clinics often run at that pace (at least in the US). Some surgical clinics can run at that pace for follow up visits (not for initial visits)

            • By Aurornis 2026-02-2720:22

              That's exactly what I said in my 3rd sentence.

      • By tokai 2026-02-2717:341 reply

        My aunt died from this (my opinion). She spent two years confounding her diagnosis and treatment, and borderline harassing her doctors, by thinking her own research was on point and interpreting all her symptoms through that lens. In the end it wasn't borrelia, parasites, 5G, or any of the other fancies, but just lung cancer that was only diagnosed when it was very well developed.

        • By walletdrainer 2026-02-2719:121 reply

          There’s a difference between mental illness and active participation.

          People not suffering from mental illness will typically not blame 5G for their health concerns.

          • By ifyoubuildit 2026-02-2719:412 reply

            You're a lay person. You know there is a thing out there called 'foo'.

            You've read things that compellingly claim that foo causes xyz symptoms. You also know that some people that have obviously palpable disdain for you claim that foo could never cause these symptoms.

            You have xyz symptoms. Are you mentally ill if you think that foo could be the cause?

            • By thfuran 2026-02-2720:161 reply

              Are the compelling claims from experts in foo or xyz? Is the disdain?

              • By ifyoubuildit 2026-02-2720:41

                Both present themselves to you as experts.

            • By walletdrainer 2026-02-282:412 reply

              What’s “compelling”? I’d suggest that any medical theory that relies upon a vast global conspiracy to hide the truth about 5G cannot possibly be compelling.

              If someone can’t tell the difference between what’s real and what’s not, they are not well.

              • By ifyoubuildit 2026-02-283:26

                > I’d suggest that any medical theory that relies upon a vast global conspiracy to hide the truth about 5G cannot possibly be compelling.

                Except some vast global conspiracy isn't the only way you could arrive at 5g having some deleterious effects on some subset of people. Were xrays for shoe fitting some vast global conspiracy? Or leaded gasoline? Or any number of things that turned out to be more dangerous in hindsight?

                Whether you feel this way or not, institutional trust is gone.

                And as for what's real or what's not, you're probably decent within your areas of expertise. Once you get outside of that range, you probably don't know the difference between real and not for plenty of things. What the hell does your average person really know about things like 5g? It might as well be magic.

              • By duskdozer 2026-02-289:24

                The issue is that the line between "silly conspiracy" and "ignored/suppressed actual problem" is not clear, especially when the topic is politicized even in the face of overwhelming one-sided evidence. "Compelling" is a subjective judgment by the speaker, and for that matter, so is "mental illness"

      • By bandrami 2026-02-2718:47

        I'm annoyed enough by coworkers asking "is the server down?" that I try not to do the equivalent to other people at their jobs, particularly doctors.

    • By SoftTalker 2026-02-2716:342 reply

      > what it ultimately ended up being was rare but really obvious in retrospect, I think most doctors would have checked for it

      I'm not so sure. Doctors are trained to check for the most common things that explain the symptoms. "When you hear hoofbeats, think horses not zebras" is a saying that is often heard in medicine.

      ChatGPT was trained on the same medical textbooks and research papers that doctors are.

      • By cfu28 2026-02-287:47

        I think this is a misconception. The reason this phrase is so commonplace is because doctors, and particularly medical students, often consider esoteric diseases they learn about in medical school and in parts of their training. If doctors were always trained to check for the most common things, then the phrase would not be so common in the first place, no?

      • By giraffe_lady 2026-02-2716:43

        > ChatGPT was trained on the same medical textbooks and research papers that doctors are.

        Yeah hm I wonder what the difference could possibly be.

    • By boondongle 2026-02-2716:59

      This is ultimately the same difference between a search engine and a professional. 10 years before this, Googling the symptoms was a thing.

      I have a family member who had a "rare but obvious" one but it took 5 doctors to get to the diagnosis. What we really need to see are attempts at blinded studies and real statistical rigor. It's funny to paint a tunnel on a canvas and get a Tesla to drive into it, but there's a reason studies (and the more blind the better) are the standard.

    • By BloondAndDoom 2026-02-2717:191 reply

      The real story here is that your doctor actually listened to you. I appreciate what a lot of doctors do, but the majority of them are fucking irritating and don't even listen to your issues. I'm glad we have AI and are less reliant on them.

      • By PearlRiver 2026-02-2718:011 reply

        It is not a doctor's job to listen, smile or be nice. Their job is to fix you.

        • By boondongle 2026-02-2718:06

          I mean - obviously if they're not listening their chance of the latter is pretty low.

          Doctors hate to hear this, but if you're so poor in communication and social skills that the patient can't or won't follow any care you've given, your value is lost.

    • By bluSCALE4 2026-02-2718:46

      Personally, I think the value in ChatGPT in health is not that it's right or wrong but that it encourages you to take an active role in your health and, more importantly, to try things. I've gone through similar issues with ChatGPT where it's convinced me that if A is true then B must be as well, though that may not be the case.

      In the future, I think I'll likely review things with ChatGPT and have an opinion and treat the doctor like a ChatGPT session as well--this is opposed to leading the doctor to what I believe I should be doing. I was dismissive about the doctor's advice because it seemed so obvious but more and more, I feel that most of our issues are caused by habitual, daily mistakes--little things that take hold seasonally or over periods of stress that appear like chronic health issues. At least for me.

    • By cmsp12 2026-02-2717:53

      You should've let the doctor do his job. If he reached a different conclusion, then you could tell him what you researched, and he would make a decision having already done his own research, without you biasing him.

    • By luke5441 2026-02-2718:27

      We have the same kind of issue as software engineers. Users come to us with solutions to their problems and want us to implement the solution. At that point the lazy path would be to just do that. If you have bad management, software engineers might even be punished for questioning the customers.

      What you want instead is that the users just describe their problem, as unbiased as possible and with enough detail and then let the expert come up with an appropriate solution that solves the problem.

      I try to do that as well when going to the doctor.

    • By soco 2026-02-2716:133 reply

      Which is exactly why the AI, at least the ones of today, should never be used beyond the level of (trusted or not) advisor. Yet not only many CxOs and boards, but even certain governments which shall not be named, are stubbornly trying, for cost or whatever other reasons, to throw entire populations (employees or nations) under the AI bus. And I sincerely don't believe anything short of an uprising will be able to stop them. Change my mind.

      • By qalmakka 2026-02-2716:24

        I agree. AI right now is at a level of "knowledgeable friend", not of "professional with years of real world experience". You'd listen to what your friend has to say, but taking pills after one of their suggestions? Dumb idea. It's great to brainstorm things, but just like your knowledgeable friend that likes reading Wikipedia pages a bit too much, you need to really check it's not jumping to conclusions too quickly.

      • By asdff 2026-02-2719:13

        The sad truth is that while we all appreciate hard work and a good job, that isn't what is needed to move forward in the world of business. Creaky leaky products held together under the hood by scotch tape and string are fine. You don't make more money having a better product. A more performant tool. Better benchmarks. End users, aside from tools written for other engineers, don't care. They really don't. Word 95 probably opens faster than Word today.

        Management has realized this. Hey I can outsource to bangalore/hyderabad/east europe/ai, get something that barely works, and just market the crap out of it. Look at the sort of companies, products, and services that dominate markets today. These aren't leaders in quality or engineering. They are leaders in marketing. Marketing is what sells. Marketing can sell billions of steaming turds. Nike shoes are pieces of shit but it's marketing that makes the brand and provides all value in the stock. The world doesn't value quality. It values noise and pretty feathers.

      • By simonebrunozzi 2026-02-2716:301 reply

        > but even certain governments which shall not be named

        Why can't you name them, and give us some context? Is this based on public info, or not?

        • By _dwt 2026-02-2717:19

          Not the original commenter, but you may have noticed a wee kerfluffle between a large nation-state's "Secretary of War" and a frontier model provider over whether the model's licensing would permit autonomous lethal weapon systems operated by said - and I cannot emphasize the middle word enough - large _language_ model.

    • By idontwantthis 2026-02-280:56

      I try to avoid priming any expert when I come to them with a problem for exactly that reason. I tell them what's happening, and what I've tried, but not what I might think because if I'm coming to them then I don't know what the solution is, so I figure I would just be adding confusion.

  • By WarmWash 2026-02-2716:049 reply

    I'd greatly prefer a blind study comparing doctors to AI, rather than a study of doctors feeding AI scenarios and seeing if it matches their predetermined outcome.

    Edit: People seem confused here. The study was feeding the AI structured clinical scenarios and seeing its results. The study was not a live analysis of AI being used in the field to treat patients.

    • By riskassessment 2026-02-2716:253 reply

      I don't understand this reasoning. Randomizing people to AI vs standard of care is expensive and risky. Checking whether the AI can pass hypothetical scenarios seems like a perfectly reasonable approach to researching the safety of these models before running a clinical trial.

      • By WarmWash 2026-02-2716:441 reply

        You would pass those hypothetical scenarios to doctors too, and then the analysis of results would be done by doctors who don't know whether each result came from an AI or a doctor.

        • By riskassessment 2026-02-2716:492 reply

          From the paper

          > Three physicians independently assigned gold-standard triage levels based on cited clinical guidelines and clinical expertise, with high inter-rater agreement

          • By aqme28 2026-03-028:53

            You're misunderstanding. What this paper did-- Those three physicians set a ground truth to compare the AI response to.

            What people in this thread are asking for-- Evaluate a set of doctors on those cases as well, and compare doctor vs AI accuracy.

      • By selridge 2026-02-2718:51

        The issue is that those hypothetical scenarios do not have to look like how patients actually interact with the tool.

        Real-life use is full of ill-posed questions, open-ended statements, inaccurate assessments of symptoms, and conclusory remarks sprinkled in between. Real use of chatbots for health by non-clinicians looks very different from scenario-based evaluation.

      • By nick49488171 2026-02-2716:36

        You can start by comparing "doctor" care vs "doctor who also uses AI" care

    • By GorbachevyChase 2026-02-2721:20

      The number of people who die each year just in the United States from causes attributable to medical errors is believed to be in the hundreds of thousands. A doctor’s opinion is not the golden yardstick.

      It may be interesting to study if there is some kind of signal in general health outcomes in the US since the popularization of ChatGPT for this purpose. It may be a while before we have enough data to know. I could see it going either way.

    • By hwillis 2026-02-2716:41

      We have standards of care for a reason. They are the most basic requirements of testing. Ignoring them is not just being a bad doctor, it's unethical treatment. It's the absolute bare minimum of a medical system.

    • By dekoidal 2026-02-2718:031 reply

      You're joking, right? This is the 'testing on mice' phase, it failed, and your idea is to start dosing humans just to see what happens.

      • By selridge 2026-02-2718:53

        Human use is already widespread. You might as well complain in 2015 about the use of Wikipedia among emergency room doctors. That ship has sailed.

    • By lmkg 2026-02-2716:41

      That type of experimental set-up is forbidden due to ethical concerns. It goes against medical ethics to give patients treatment that you think might be worse.

    • By RandomLensman 2026-02-2717:37

      Feeding scenarios is not without challenges, as some things, for example smell, would be "pre-processed" by humans before being fed into the AI, I think.

    • By nradov 2026-02-2716:344 reply

      I don't understand what you're proposing. How would you design such a study in a way that would pass IRB?

      • By dec0dedab0de 2026-02-2717:431 reply

        I think the best would be an interface, where the patient isn't told if the doctor on the other end is human or AI. Tell them that they are going to do multiple remote exams with different care providers for the same illness in exchange for free treatment, and payment for the study.

        If you're worried about not catching a legit emergency, as in something that can't wait a day or two for them to complete the different sessions, you could have a doctor monitor the interactions with the ability to raise a flag and step in to send them to the ER.

        • By nradov 2026-02-2718:03

          I'm pretty sure that wouldn't pass IRB.

      • By SoftTalker 2026-02-2716:361 reply

        Feed it randomly selected case histories? See if it came up with the same diagnosis as the doctors?

        • By nradov 2026-02-2716:461 reply

          I don't think that would tell us anything useful. The data quality in most patient charts is shockingly bad. I've seen a lot of them while working on clinical systems interoperability. Garbage in / garbage out. When human physicians make a diagnosis they typically rely on a lot of inputs that never appear in the patient chart.

          And in most cases the diagnosis is the easy part. I mean we see occasional horror stories about misdiagnosis but those are rare. The harder and more important part is coming up with an effective treatment plan which the patient will actually follow, and then monitoring progress while making adjustments as needed. So a focus on the diagnosis portion of clinical decision support seems fundamentally misguided.

          • By qsera 2026-02-2716:571 reply

            > When human physicians make a diagnosis they typically rely on a lot of inputs that never appear in the patient chart.

            Yea, like how rich the patient is or if they are on insurance etc. I wish I was kidding.

            • By PearlRiver 2026-02-2718:03

              This is the real reason why some people go to ChatGPT instead of a GP. I am glad to live in a country where going to the doctor is free.

      • By selridge 2026-02-2718:541 reply

        You could absolutely randomize care between a doctor and an AI under an IRB. I’d be stunned if there aren’t a dozen studies doing something like this already.

        You have to justify it, but most places have sections in the document where you request review to justify it. It’s not any different from giving one patient heart medicine that you think works and another patient a sugar pill.

        • By nradov 2026-02-2719:12

          Huh? Do you have any actual examples of such studies? I don't think you understand how IRB actually works.

          In actual heart medicine studies the control arm is typically treated with the current standard of care, not a placebo. So it seems pretty clear that you don't have any actual knowledge or experience in this area.

      • By dyauspitr 2026-02-2716:46

        It’s all case histories and text; no real person is affected by this.

    • By lkey 2026-02-2719:10

      This 'preference' is sociopathic, illegal, and stupid.

    • By qsera 2026-02-2716:061 reply

      Yea, that is exactly why I don't like this.

      These "experts", they have no problem to tout anecdotes when it serves them..

  • By iainctduncan 2026-02-2718:422 reply

    I think the worse situation is the bad AI summaries from search on health issues.

    We had a potential pet poisoning, so I was naturally searching for resources. Google had a summary with a "dose of concern" that was an order of magnitude off. Someone could have read that and thought all was fine and had a dead cat.

    (BTW the cat is fine, it turned out to be a false alarm, but public service announcement: cats are allergic to aspirin and Pepto-Bismol contains aspirin. Don't leave demented plastic-chewing cats around those bottles, in case you too have a lovely but demented cat.)

    • By cloud-oak 2026-02-2720:121 reply

      What's really worrying is seeing medical professionals starting to rely on these tools.

      My wife had a pretty bad cold during pregnancy and our GP proceeded to prescribe her cough syrup with high alcohol content, because that was what ChatGPT told him to prescribe. We only noticed it once she took the first dose and spit it out again...

      • By conception 2026-02-2720:27

        The amount of alcohol in cough syrup will not affect a pregnancy.

    • By ep103 2026-02-2718:48

      I have literally never seen a correct Google summary. Maybe y'all are searching for different things than I am, but at this point I've started taking the viewpoint that if I can't tell why the AI summary is wrong, then I also don't know enough about the topic to judge whether the summary is useful.

HackerNews