An article that appeared in The Harvard Gazette in November 2020 stated, in part: “The excitement over AI [artificial intelligence] these days isn’t because the concept is new. It’s owing to rapid progress in a branch called machine learning, which takes advantage of recent advances in computer processing power and in big data that have made compiling and handling massive data sets routine. Machine learning algorithms — sets of instructions for how a program operates — have become sophisticated enough that they can learn as they go, improving performance without human intervention … Before being used, however, the algorithm has to be trained using a known data set. In medical imaging, a field where experts say AI holds the most promise soonest, the process begins with a review of thousands of images — of potential lung cancer, for example — that have been viewed and coded by experts. Using that feedback, the algorithm analyzes an image, checks the answer, and moves on, developing its own expertise. In recent years, increasing numbers of studies show machine-learning algorithms equal and, in some cases, surpass human experts in performance.”

A more recent opinion article (February 2025) stated, in part: “AI systems can analyze medical images, pathology slides, and patient histories with unprecedented precision and comprehensiveness. They detect patterns invisible to the human eye, offering real-time insights that guide clinicians to faster and more accurate conclusions. In radiology, for example, AI algorithms can pinpoint abnormalities in medical imaging with greater accuracy than even the most seasoned radiologists. In pathology, AI aids in detecting cancerous cells that human eyes might miss. AI systems can review the complete medical chart of a patient instantaneously so that no important clinical information or concerning trends get missed to help ensure that the patient gets a correct diagnosis.”

In light of the burgeoning use of artificial intelligence (AI) in medicine, which holds out the hope of faster, more accurate diagnoses and more effective treatments, we are grateful for these powerful tools that may lead to better medical decisions and outcomes. But we must also acknowledge the dark side of AI in medicine: over-reliance on the technology, failure to confirm AI results, failure to exercise independent medical knowledge, training, and experience in making medical decisions, and the “mistakes” that AI may make.

One downside to the use of AI is what is referred to as “hallucinations,” which IBM defines as “a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate. Generally, if a user makes a request of a generative AI tool, they desire an output that appropriately addresses the prompt (that is, a correct answer to a question). However, sometimes AI algorithms produce outputs that are not based on training data, are incorrectly decoded by the transformer or do not follow any identifiable pattern. In other words, it ‘hallucinates’ the response.”

We have seen the ugly results of AI hallucinations in the legal field when lawyers over-relied on AI tools to write briefs or other legal documents in which the AI tool provided made-up citations to nonexistent cases, and the lawyers failed to independently confirm the AI-generated results. In one case, a federal judge issued an order stating “Let Steven Schwartz show cause at the hearing of June 8, 2023 why he ought not be sanctioned pursuant to (1) Rule 11(b)(2) & (c), Fed. R. Civ. P., (2) 28 U.S.C. § 1927, and (3) the inherent power of the Court and referred to the Attorney Grievance Committee of the Appellate Division, First Department, and/or the Committee on Grievances of this District for aiding and causing (A) the citation of non-existent cases to the Court in the Affirmation in Opposition filed on March 1, 2023; (B) the submission to the Court of copies of non-existent judicial opinions annexed to the Affidavit filed on April 25, 2023; and (C) the use of a false and fraudulent notarization in the affidavit filed on April 25, 2023.” In another, more recent case (February 24, 2025), a federal district court sanctioned three lawyers from the personal injury law firm Morgan & Morgan for citing AI-generated fake cases in motions in limine. Of the nine cases cited in the motions, eight were non-existent.

But what if a medical provider had AI tools available but failed to use them, and the patient suffered harm as a result? Is that medical malpractice?

It depends; it may be too early to tell in this nascent field of AI-powered medical care. “Medical malpractice” is often defined as a breach of the standard of care that causes harm to the patient, and the “standard of care” is often defined as what a reasonably competent medical provider would do under the same or similar circumstances. It may therefore still be too early to determine the particular circumstances under which reasonably competent medical providers would employ specific AI tools, and whether the failure to do so would be considered medical negligence. For instance, if a radiologist reading a scan fails to detect an abnormality that later develops into cancer, and an AI tool later reads the same scan and detects the abnormality, was it medical malpractice for the radiologist not to have used the AI tool if it was available? We anticipate that as the use of AI tools in medicine becomes more common and more specialized, the parameters under which AI could, should, or must be used will become clearer and more definitive.

Personally, if I had a scan and an AI tool detected an abnormality that the radiologist did not see, I would expect the radiologist to use their expertise, experience, training, and education to explain the discrepancy and why they relied on their own interpretation rather than the AI tool’s result. I would also expect the radiologist to advise me or my medical provider of the discrepancy and the basis of their opinion, so that I can make an informed decision and my medical provider can formulate an appropriate treatment plan.

Nonetheless, if you or a loved one may have suffered harm (or worse) because AI was used in your medical diagnosis or treatment, or because available AI was not used in your medical care, you should promptly find a local medical malpractice lawyer in your U.S. state who may investigate your medical negligence claim for you and represent you or your loved one in a medical malpractice case, if appropriate.

Visit our website or call us toll-free in the United States at 800-295-3959 to find local medical malpractice attorneys who may help you.

Turn to us when you don’t know where to turn.