
The possibilities and perils of AI in the health insurance industry: An explainer and research roundup

U.S. states are starting to form policy rules for the use of AI among health insurers. We’ve created this guide to help journalists understand the nascent regulatory landscape.

AI in health insurance. (Gerd Altmann from Pixabay)

As artificial intelligence infiltrates virtually every aspect of life, more states in the U.S. are seeking to regulate (or at least monitor) its use. Many are passing legislation, issuing policy rules or forming committees to inform those decisions. In some cases, that includes health insurance, where AI holds great promise to speed and improve administration but also brings potential for peril, including racial bias and omissions inherent in formulas used to determine coverage approvals.

Meanwhile, major health insurers Humana, Cigna, and UnitedHealth all face lawsuits alleging that the companies improperly developed algorithms that guided AI programs to deny health care. The suit against Cigna followed a ProPublica story revealing “how Cigna doctors reject patients’ claims without opening their files.” The class-action suits against UnitedHealth and Humana followed an investigative series by STAT, in which reporters revealed that multiple major health insurers had used secret internal rules and flawed algorithms to deny care.

Journalists should pay attention to the guardrails governments are seeking to erect to prevent problematic uses of AI — and whether those guardrails will ultimately work as intended. Both federal and state governments say they are working to prevent discrimination, a broad concern as AI systems grow more sophisticated and help administrators make decisions, including what a policy covers. Proposed state legislation and regulatory guidelines aim to require health insurance companies to be more transparent about how their systems were built, which specific data sets feed those systems and how the algorithms that drive a program’s decision-making are designed.

We’ve created this guide to help journalists understand the nascent regulatory landscape, including proposed state laws; which regulators are compiling and issuing guidelines; and what researchers have learned so far.

Government efforts to regulate AI use among health insurers

Who regulates health insurers, and how, depends largely on the type of health insurance itself. Congress and the Biden administration are stepping up efforts to form a blueprint for AI use, including in health insurance.

For Medicaid, a government program that serves as the largest source of health coverage in the U.S., each state, the District of Columbia and the U.S. territories operate their own programs within federal guidelines.

The Centers for Medicare and Medicaid Services has helpful overview summaries of each program.

Federal Medicaid guidelines are broad, allowing states, territories and Washington, D.C., flexibility to adapt. State reports to CMS about their Medicaid programs are a good source of story ideas. CMS’ State Waiver Lists website posts many documents of interest.

In January 2024, for example, CMS issued a final rule that includes requirements for the management tools used in prior authorization for federal programs, an area where AI use is of increasing concern.

Prior authorization is a process requiring a patient or health care provider to get approval from a health insurer before receiving or providing service. (This 2021 guide to prior authorization from The Journalist’s Resource helps explain the process.)

While CMS notes in the body of the final rule that it does not directly address the use of AI to implement prior authorization policies, the rule states that “we encourage innovation that is secure; includes medical professional judgment for coverage decisions being considered; reduces unnecessary administrative burden for patients, providers, and payers; and involves oversight by an overarching governance structure for responsible use, including transparency, evaluation, and ongoing monitoring.”

CMS also issued a memo in February 2024 tied to AI and insurer-run Medicare Advantage, a type of federal health plan offered by private insurance companies that contract with Medicare.

AI tools can be used to help in making coverage decisions, but the insurer is responsible for making sure coverage decisions comply with CMS rules, including those designed to prevent discrimination, the memo notes.

In the U.S., individual states regulate many commercial health plans as well as set a large portion of the rules for their federal Medicaid programs.

About two-thirds of Americans are covered by commercial plans through their employers or private insurance, according to the U.S. Census Bureau.

State-level resolutions and legislation

For local journalists, this complex landscape provides an avenue rich with potential reporting opportunities.

According to the National Conference of State Legislatures, at least 40 states had introduced or passed legislation aimed at regulating AI in the 2024 legislative session as of March 17, with at least half a dozen of those actions tied to health care. Six states, Puerto Rico and the Virgin Islands adopted resolutions or enacted new laws.

That’s on top of the 18 states and Puerto Rico that adopted resolutions or enacted legislation tied to AI in 2023, according to NCSL data. Many states are modeling their regulations on guidance the National Association of Insurance Commissioners (NAIC) issued in December 2023.

The Colorado Division of Insurance, for example, is mulling how to apply a law the state legislature passed in 2021, which is designed to give consumers a check on AI-generated decisions. Colorado was the first state to target AI use in insurance, according to Bloomberg.

Colorado’s insurance commissioners have so far issued guidance for auto and life insurers under the statute. In recent months, the division held hearings and called for written comments to help shape its approach to applying the new rules to health insurers, according to materials on the agency’s website.

Colorado’s legislation seeks to hold “insurers accountable for testing their big data systems – including external consumer data and information sources, algorithms, and predictive models – to ensure they are not unfairly discriminating against consumers on the basis of a protected class.” In Colorado, protected classes include race, color, religion, national origin/ancestry, sex, pregnancy, disability, sexual orientation including transgender status, age, marital status and familial status, according to the state’s Civil Rights Division.

There isn’t yet a firm timeline for finalizing these rules for health insurance because the agency is still early in the process as it also works on life insurance, Vincent Plymell, the assistant commissioner for communications and outreach at the Colorado Division of Insurance, told The Journalist’s Resource.

In California, one bill sponsored by the California Medical Association would “require algorithms, artificial intelligence, and other software tools used for utilization review or utilization management decisions” be “fairly and equitably applied.” Earlier language that would have mandated a licensed physician supervise AI use for decisions to “approve, modify, or deny requests by providers” was struck from the bill.

In Georgia, a bill would require that coverage decisions using AI be “meaningfully reviewed” by someone with authority to override them. Illinois, New York, Pennsylvania and Oklahoma are also among the states that have introduced legislation tied to health care, AI and insurance.

Several states, including Maryland, New York, Vermont and Washington, have issued guidance bulletins for insurers modeled after language crafted by the NAIC. The model bulletin, issued in December 2023, aims to set “clear expectations” for insurers when it comes to AI. The bulletin also includes standard definitions for AI-related terms, such as machine learning and large language models.

A group of NAIC members is also developing a survey of health insurers on the issue.

One concern insurers have is that rules may differ across states, Avi Gesser, a data security partner at the law firm Debevoise & Plimpton LLP, told Bloomberg Law.

“It would be a problem for some insurers if they had to do different testing for their algorithm state-by-state,” Gesser said in a November 2023 article. “Some insurers may say, ‘Well, maybe it’s not worth it—maybe we won’t use external data, or maybe we won’t use AI.’”

It’s useful for journalists to read published research to learn how artificial intelligence researchers, insurers and health experts are approaching the issue technically, politically and legally. To help, we’ve curated and summarized several studies and scholarly articles on the topic.

Research roundup

Responsible Artificial Intelligence in Healthcare: Predicting and Preventing Insurance Claim Denials for Economic and Social Wellbeing
Marina Johnson, Abdullah Albizri and Antoine Harfouche. Information Systems Frontiers, April 2021.

The study: The authors examine AI models that could help hospitals identify and prevent denials of patient insurance claims, aiming to cut the costs of appeals and reduce patients’ emotional distress. They compare six kinds of algorithms to find the best model for predicting claim rejections, then test it in a hospital. The authors use “white box” and “glass box” models, which reveal more of an AI program’s data and inner workings than “black box” models do, to develop what they label a Responsible Artificial Intelligence solution to this problem.

In developing the proposed solution, the authors take into account five principles: transparency, justice, a no-harm approach, accountability and privacy.

To develop their proposal, the researchers used a dataset of 57,458 claims that a single hospital submitted to various insurance companies, and they caution that results drawn from one hospital’s data may not generalize.
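
Here is a minimal, hypothetical sketch of the “glass box” idea, in the spirit of the study but not the authors’ actual model. It trains a shallow decision tree on synthetic claim flags (the feature names and denial logic are invented for illustration) and prints the learned rules so a human reviewer can audit them:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical claim-level flags; a real system would derive features
# from coded billing and clinical data.
coding_error = rng.random(n) < 0.10        # e.g., mistyped procedure code
diagnosis_mismatch = rng.random(n) < 0.08  # service doesn't fit the diagnosis
missing_auth = rng.random(n) < 0.05        # no prior authorization on file

# Synthetic ground truth: flagged claims are usually, not always, denied.
denied = (coding_error | diagnosis_mismatch | missing_auth) & (rng.random(n) < 0.9)

X = np.column_stack([coding_error, diagnosis_mismatch, missing_auth]).astype(int)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, denied)

# The "glass box" property: the learned rules print as human-readable
# if/else statements a billing reviewer can audit.
print(export_text(model, feature_names=[
    "coding_error", "diagnosis_mismatch", "missing_auth"]))

# Flag high-risk claims so errors can be fixed before submission.
risk = model.predict_proba(X)[:, 1]
print(f"claims flagged for pre-submission review: {(risk > 0.5).sum()} of {n}")
```

The printed rule list is the point: unlike a black-box score, every flagged claim traces back to a specific, human-readable reason that billing staff can correct before the claim is submitted.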

The findings: The solution the authors propose seeks to identify, in part, errors in coding and billing, questions of medical necessity, and mismatches between the codes for services and procedures and a patient’s diagnosis. Once flagged by the system, an error can be fixed before the claim is submitted to an insurance company. That may spare the insured patient from going through the appeals process. The technical solution proposed by the authors “delivers a high accuracy rate” of about 83%, they write.

They recommend future research use data from insurance companies in which “many providers submit claims, providing more generalizable results.”

The authors write: “Insured patients suffering from a medical condition are overburdened if they have to deal with an appeal process for a denied claim. An AI solution similar to the one proposed in this study can prevent patients from dealing with the appeal process.”

Fair Regression for Health Care Spending
Anna Zink and Sherri Rose. Biometrics, September 2020.

The study: The authors examine and suggest alternative methods for predicting spending in health insurance markets, so that insurers can provide fair benefits to enrollees while more accurately gauging their own financial risk. The authors focus on “undercompensated” groups, such as people with mental illness or substance use disorders, for whom standard payment formulas underpredict spending. They then suggest new tools and formulas for treating these groups fairly in the regression analyses used to calculate benefits and payments. Regression analysis is a statistical method for estimating how strongly each variable in a data set drives an outcome; here, it underlies the formulas used to predict spending risk and set fair benefits and coverage. A sketch of the idea follows below.
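
Here is a minimal, hypothetical sketch of that idea, not the authors’ estimator: it fits a spending model on synthetic data twice, once by ordinary least squares and once with an added penalty whenever the model underpredicts spending for a flagged group. All names and numbers are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical enrollees: ~14% carry a flag (mirroring the study's sample
# share of mental health and substance use disorder diagnoses) whose true
# spending runs higher than the model's features can explain.
group = rng.random(n) < 0.14
x = rng.normal(size=n)                     # a generic risk-adjustment feature
X = np.column_stack([np.ones(n), x])       # intercept + feature
y = 1.0 + 2.0 * x + 3.0 * group + rng.normal(scale=2.0, size=n)

def loss(beta, lam):
    resid = y - X @ beta
    under = max(resid[group].mean(), 0.0)  # net underprediction for the group
    return np.mean(resid ** 2) + lam * under ** 2

beta_std = minimize(loss, np.zeros(2), args=(0.0,), method="Nelder-Mead").x
beta_fair = minimize(loss, np.zeros(2), args=(200.0,), method="Nelder-Mead").x

for name, b in [("standard", beta_std), ("fair", beta_fair)]:
    r = y - X @ b
    print(f"{name:8s} overall MSE={np.mean(r ** 2):6.2f} "
          f"group mean residual={r[group].mean():+6.2f}")
```

Raising the penalty weight lam pushes the group’s average underprediction toward zero at the cost of a small rise in overall error, mirroring the fairness-versus-fit tradeoff the authors ask policymakers to weigh.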

The findings: In their analysis, the authors use a random sample of 100,000 enrollees from the IBM MarketScan Research database in 2015 to predict total annual expenditures for 2016. Almost 14% of the sample were coded with mental health and substance use disorder diagnoses. When insurance companies “underpredict” spending for groups like these, “there is evidence that insurers adjust the prescription drugs, services, and providers they cover” and alter a plan’s benefit design “to make health plans less attractive for enrollees in undercompensated groups.”

The authors propose technical changes to the formulas used to calculate these risks, producing what they find are fairer results for underrepresented groups, in this case enrollees categorized as having mental health and substance use disorders. One of their suggested changes yielded a 98% reduction in the risk that insurers would be undercompensated for this group, likely leading to improved coverage. It increased insurer risk tied to predicting costs for enrollees without mental health and substance use disorders by only about 4%, or 0.5 percentage points. The results could lead to “massive improvements in group fairness.”

The authors write: “For many estimators, particularly in our data analysis, improvements in fairness were larger than the subsequent decreases in overall fit. This suggests that if we allow for a slight drop in overall fit, we could greatly increase compensation for mental health and substance use disorders. Policymakers need to consider whether they are willing to sacrifice small reductions in global fit for large improvements in fairness.”

Additional reading: The authors outline this and two other studies on the topic in a November 2022 policy brief for the Stanford Institute for Human-Centered Artificial Intelligence.

The Imperative for Regulatory Oversight of Large Language Models (or Generative AI) in Healthcare
Bertalan Meskó and Eric J. Topol. NPJ Digital Medicine, July 2023.

The article: The authors argue that a new regulatory category should be created specifically for large language models in health care, because LLMs differ from previous artificial intelligence systems in scale, capability and impact. LLMs can also adapt their responses in real time, they note. The authors outline categories regulators could create to harness, and help control, LLMs.

By creating specific prescriptions for managing LLMs, regulators can help gain the trust of patients, physicians and administrators, they argue.

The findings: The authors write that safeguards should include ensuring:
• Patient data used for training LLMs are “fully anonymized and protected” from breaches, a “significant regulatory challenge” because violations could run afoul of privacy laws like the Health Insurance Portability and Accountability Act (HIPAA).
• Interpretability and transparency for AI-made decisions, a “particularly challenging” task for “black box” models that rely on hidden and complex algorithms.
• Fairness and safeguards against bias, which can creep into LLMs such as GPT-4 during model training on patient data, leading to “disparities in healthcare outcomes.”
• Data ownership is clearly established, something that’s hard to define and regulate.
• Users don’t become over-reliant on AI models, as some models can “hallucinate” and yield errors.

The authors write: “LLMs offer tremendous promise for the future of healthcare, but their use also entails risks and ethical challenges. By taking a proactive approach to regulation, it is possible to harness the potential of AI-driven technologies like LLMs while minimizing potential harm and preserving the trust of patients and healthcare providers alike.”

Denial—Artificial Intelligence Tools and Health Insurance Coverage Decisions
Michelle M. Mello and Sherri Rose. JAMA Health Forum, March 2024.

The article: The authors, both professors of health policy, call for national policy guardrails for insurers’ use of AI and algorithms. They note that investigative journalism helped bring to light incidents tied to Medicare Advantage, which were followed by congressional hearings and class-action lawsuits against major health insurance companies.

The authors highlight and describe class-action suits against UnitedHealthcare and Humana that allege the companies pressured managers to discharge patients prematurely based on the results of an AI algorithm. They also note that Cigna, another insurer, faces a class-action suit alleging it used a different kind of algorithm to deny claims in an average of 1.2 seconds each.

Algorithms can now be trained “at an unprecedented scale” using datasets such as Epic’s Cosmos, which represents some 238 million patients, the authors note. But even developers may not know how, or why, an AI algorithm arrives at a recommendation.

The authors write: “The increased transparency that the CMS, journalists, and litigators have driven about how insurers use algorithms may help improve practices and attenuate biases. Transparency should also inspire recognition that although some uses of algorithms by insurers may be ethically unacceptable, others might be ethically obligatory. For example, eliminating the use of (imperfect) algorithms in health plan payment risk adjustment would undermine equity because adjusting payments for health status diminishes insurers’ incentive to avoid sicker enrollees. As the national conversation about algorithmic governance and health intensifies, insurance-related issues and concerns should remain in the foreground.”

Additional resources for journalists

• This research review and tip sheet from The Journalist’s Resource offers a primer, definitions and foundational research on racial bias in AI in health care.

• The Association of Health Care Journalists features on its website a guide to how health insurance works in each state. Created by Georgetown University’s Center on Health Insurance Reforms and supported by the Robert Wood Johnson Foundation, the guide provides useful statistics and resources that give journalists an overview of each state’s health insurance landscape, including a breakdown of the different kinds of insurance serving the population in each place and how many people are covered by Medicare, Medicaid and employer-backed insurance. The guide can help inform journalists’ questions about the local health insurance landscape, says Joe Burns, the beat leader for health policy (including insurance) at AHCJ.

• The National Association of Insurance Commissioners has a map of which states have adopted its model bulletin, as well as a page documenting the work of its Big Data and Artificial Intelligence working group.

• Congress.gov features a clickable map of state legislature websites.

• The National Conference of State Legislatures tracks bills on AI for legislative sessions in each state. The list is current as of March 2024.

• The National Center for State Courts links to the websites of state-level courts.

• Here’s the Office of the National Coordinator for Health Information Technology’s final rule and fact sheets under the 21st Century Cures Act.

• Here’s a video and transcripts of testimony from a Feb. 8, 2024, U.S. Senate Finance Committee hearing titled “Artificial Intelligence and Health Care: Promises and Pitfalls.”

• KFF, formerly known as the Kaiser Family Foundation, maintains a map showing what percentage of each state’s population is covered by health insurance.
