A recent class action complaint filed against UnitedHealth Group, Inc. and its subsidiaries in the U.S. District Court for the District of Minnesota alleges that the company is improperly using an artificial intelligence system called the nH Predict AI Model to wrongfully deny Medicare Advantage patients access to needed post-acute care under their insurance policies.
Although there are no causes of action specific to artificial intelligence in the Amended Complaint (“Complaint”) filed by the Plaintiffs on April 5, 2024, the Defendants’ use of the nH Predict AI Model is nevertheless mentioned in each cause of action and is at the core of the Plaintiffs’ allegations.
Accordingly, this article examines the relationship between artificial intelligence and the causes of action alleged by the Plaintiffs. It is important to note that the analysis is limited to the specific circumstances and use case of the AI model as described by the Plaintiffs and should not be construed as a broader commentary on the role of AI in healthcare insurance decisions. The article aims to provide insight into how the alleged misuse of this particular AI system relates to the legal claims asserted, based on the information provided in the Complaint.
ABOUT THE NH PREDICT MODEL
The nH Predict AI Model was developed by Defendant naviHealth, Inc., a wholly owned subsidiary of Defendant UnitedHealth Group, Inc. According to the Complaint, the creator of the model “specifically intended for it to save insurance companies money in the post-acute setting, which had previously been a highly unprofitable aspect of Medicare Services.”
According to the Complaint, the model has allegedly been deployed in a manner in which it “attempts to predict the amount of post-acute care a patient ‘should’ require, pinpointing the precise moment when Defendants will cut off payment for a patient’s treatment.” It is said to do so by purporting to “compare a patient’s diagnosis, age, living situation, and physical function to similar patients in a database of six million patients it compiled over the years of working with providers to predict patient’s medical needs, estimated length of stay, and target discharge date.”
The model is also said to apply “rigid criteria” and “unrealistic predictions” to the approval of post-acute care claims, and to be highly inaccurate: over 90 percent of patient claim denials are allegedly reversed through an internal appeal process or by a federal Administrative Law Judge, and over 80 percent of prior authorization request denials are reversed on appeal.
BREACH OF DUTIES SAID TO BE OWED TO PLAINTIFFS
The Existence of Duties
In examining the relationship between artificial intelligence and the causes of action, it is important to first recognize that the Defendants allegedly owed various pre-existing obligations to their policyholders stemming from their health insurance contracts with the Plaintiffs. These obligations are said to have included contractual duties and a fiduciary duty to act in the interests of policyholders, which together form the first cause of action, and an implied covenant of good faith and fair dealing, which forms the second cause of action.
In relation to the first, the Complaint sought to establish that valid agreements were entered into that included offer, acceptance and consideration and that Plaintiffs paid premiums in exchange for Defendants’ issuance of health insurance policies. The Complaint also notes “Defendants’ duty to exercise [their] fiduciary duties to policyholders … and adequately review and inform policyholders prior to claim denial,” under each agreement.
Each insurance agreement is also said to include a provision concerning UnitedHealthcare, Inc., a Co-Defendant in the matter and subsidiary of UnitedHealth Group, stating that “UnitedHealthcare’s Clinical Services Staff and Physicians make decisions on the health care services you receive based on the appropriateness of care and service and existence of coverage.” This language suggests that the Defendants were contractually obligated to have their own medical professionals make individualized coverage determinations based on the specific needs of each patient and the terms of their policy.
The Complaint also states that pursuant to the contracts, “Defendants implied and covenanted that they would act in good faith and follow the law and the contracts with respect to the prompt and fair payment of Plaintiffs’ and Class members’ claims.”
Delegation of Duties to AI as the Alleged Cause of Breach
The Plaintiffs contend that the Defendants breached the above duties by improperly delegating coverage determinations to the nH Predict AI Model. Physicians and other human reviewers were allegedly excluded from the claims assessment process, with the Complaint stating that “[e]mployees who deviate from the nH Predict AI Model projections are disciplined and terminated, regardless of whether a patient requires more care” (paragraph 42).
As a specific example, the Complaint alleges that the Defendants, “intentionally limit their employees’ discretion to deviate from the nH Predict AI Model prediction by setting up targets to keep stays at skilled nursing facilities within 1% of the days projected by the AI Model” (paragraph 7).
It must be noted that in this matter, as stated above, the delegation of duties was specifically to an AI model allegedly designed and deployed based on financial motivations, rather than in a good-faith effort to fulfill the Defendants’ duties. Again, this is said to include the following:
- The model was specifically intended to save insurance companies money in the post-acute setting;
- The model seeks to pinpoint the precise moment when Defendants will cut off payment for a patient’s treatment;
- The model applies rigid criteria and unrealistic predictions to the approval of post-acute care claims; and
- The model is said to be highly inaccurate and known by Defendants to be so.
By constraining the role of physicians and other health care professionals in the claims review process in favor of the nH Predict Model allegedly developed and designed around financial motivations, the Defendants are said to have failed to provide the individualized, expert-driven assessment required by their contractual and fiduciary duties as well as their implied duties of good faith and fair dealing.
Human Oversight as a Requirement for the Use of AI Models
While it is alleged that human review was purposefully removed from the claims review process in the matter at hand, various authorities often require or recommend human oversight of artificial intelligence models.
As a recent example, a March 28, 2024 White House Office of Management and Budget Memorandum concerning Federal agency use of artificial intelligence requires agencies to “ensure adequate human training and assessment” for AI operators to effectively interpret AI outputs and manage risks, as well as to “provide additional human oversight, intervention, and accountability” for AI-driven decisions or actions that could significantly impact rights or safety.
Similarly, a World Health Organization Guidance concerning the ethics and governance of large multi-modal AI models published on January 18, 2024, stresses that “humans should remain in control of health-care systems and medical decisions” and that “regulatory principles [should be] applied upstream and downstream of the algorithm by establishing points of human supervision.”
Again, in contrast, the Complaint alleges that the Defendants failed to provide adequate human oversight in their use of the nH Predict AI Model, instead allowing the AI to make coverage determinations without sufficient individualized review by medical professionals.
Whether the nH Predict Model Could Have Fulfilled the Fiduciary Duties Is Unanswered by the Complaint
Given the Plaintiffs’ assertions that the nH Predict Model was both designed and deployed around financial motivations rather than the fulfillment of the Defendants’ duties, the Complaint does not fully explore or answer the broader question of whether an AI model could be capable of properly fulfilling the Defendants’ contractual, fiduciary, and implied duties.
This leaves open the broader question of the appropriate role, if any, of artificial intelligence in the health insurance claims review process. Resolving this question requires further legal and policy analysis beyond the scope of the specific allegations found in the Complaint.
CAUSES OF ACTION BASED ON LEGAL STANDARDS APPLICABLE TO THE INSURANCE INDUSTRY
In addition to the causes of action grounded in duties said to be owed to the Plaintiffs under the insurance contracts, the Plaintiffs also assert causes of action based on legal standards applicable to the insurance industry, including:
- “Insurance Bad Faith” under multiple states’ insurance laws;
- Negligence Per Se under Oregon’s Unfair Claim Settlement Practices Act;
- Unfair and Deceptive Insurance Practices under Minnesota Law; and
- Certain aspects of California’s Unfair Competition Law.
These causes of action have some similarities to those imposing duties on the Defendants towards the Plaintiffs, but they also reach further, addressing intentional actions that harm those protected under the underlying laws.
According to the Complaint, the states with Bad Faith Insurance Claim laws “prohibit using bad faith or unreasonable means to make coverage determinations under an insurance policy.” Similarly, a violation of the Oregon Unfair Claim Settlement Practices Act includes “misrepresenting facts or policy provisions in settling claims,” and the Unfair Competition Law claim under California law has a “fraudulent prong” to it.
Notably, under these causes of action, the Plaintiffs expressly frame the alleged intentional misuse of the nH Predict Model described above as warranting its own causes of action, with the Complaint stating that “Defendants’ denial of Plaintiffs’ and Class member’s claims was based on the malicious implementation of the nH Predict AI Model intended to enable Defendants to deny as many claims as possible and to pay out as little as possible.”
This was said to contribute to the failure to meet insurance industry standards because, depending on the applicable state, the following would apply:
- The use of the nH Predict AI Model to make coverage determinations constituted a process involving bad faith that did not provide a reasonable basis for the Defendants to deny the claims;
- The AI model was substituted for reasonable investigations based on all available information, resulting in refusals to pay claims, and such behavior was intentional or reckless and in bad faith;
- The Defendants knew or had reason to know that the nH Predict system was an inadequate method for deciding to deny claims and knew or had reason to know that the nH Predict AI Model was not a reasonable basis to deny claims; and,
- The Defendants’ omissions and misrepresentations regarding the Medicare Advantage policies and Plaintiffs’ and Class Members’ rights under their policies, including by using an algorithm to make coverage determinations and denying claims on sham pretenses, were likely to deceive a reasonable consumer.
According to the Plaintiffs, once a claim was denied, the Defendants also failed to “promptly provide reasonable explanations for [such] denials,” further contributing to the failure to follow laws applicable to the insurance industry.
These causes of action and their related allegations sharpen the distinction between the current model, said to have been intentionally designed and deployed in a bad faith manner, and a hypothetical AI model designed and deployed in a good faith effort to meet the Defendants’ duties and/or industry standards, once again pointing to the need for a broader policy examination.
DECLARATORY AND INJUNCTIVE RELIEF
Finally, in addition to duty-related and industry standards-based causes of action, the Plaintiffs are seeking declaratory and injunctive relief that would enjoin the Defendants from continuing “improper and unlawful claim handling practices” as set forth in the Complaint.
According to an Order entered by the Court, the Defendants’ Answers to the Amended Complaint are due on or before May 20, 2024. As the case proceeds, it will be important to follow not only the resolution of the specific claims but also the court’s analysis of the appropriate guardrails for AI tools in this context.