Algorithms Are Already Real Threats: Three Cases of AI's Devastating Impact

Forget the sci-fi scenarios of rogue AI and killer robots. The true threat of artificial intelligence isn’t some distant future; it’s the reality of today. AI systems are actively making critical decisions impacting healthcare, housing, and fundamental human rights, frequently with dire consequences.

This article will explore three real-world instances where AI systems have negatively impacted countless lives. Disturbingly, evidence suggests the harm was inflicted with the awareness, and allegedly the intent, of those deploying these systems. Lawsuits are underway in two of these cases, while the third resulted in an entire government’s resignation.

If you’re concerned about a hypothetical future dominated by AI gone wild, it’s time to recognize that this future is already here.

Case #1: UnitedHealth, Humana, and the nH Predict System

The focus here is on the algorithm behind these companies’ high claim denial rates: nH Predict.

Inside the algorithmic assembly line :bar_chart:

In November 2023, STAT, a health and medicine journalism outlet, published a report exposing nH Predict as an “algorithmic assembly line” for elderly patients. NaviHealth, which UnitedHealth acquired for $2.5 billion in 2020, designed the system, drawing inspiration from Toyota’s car manufacturing principles. The premise was that the principles of mass-producing vehicles could be effectively applied to patient care.

The algorithm analyzes patient data using factors like:

  • Diagnosis and age.

  • Physical function scores.

  • Living situation.

  • A database of six million previous patients.

Managers set strict targets, requiring that patient stays fall within 1% of the algorithm’s predictions. Medical professionals faced repercussions for questioning these decisions.
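
To make the mechanics concrete, here is a minimal, purely hypothetical sketch, not NaviHealth’s actual model or code, of what a length-of-stay prediction combined with a rigid 1% target check might look like:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    diagnosis: str
    physical_function_score: float  # e.g., a standardized mobility/ADL score
    lives_alone: bool

# Hypothetical stand-in for a model trained on millions of historical cases.
def predict_covered_days(p: Patient) -> int:
    base = {"stroke": 20, "hip fracture": 18, "heart failure": 14}.get(p.diagnosis, 12)
    if p.physical_function_score < 40:
        base += 4   # weaker physical function -> somewhat longer stay
    if p.age > 80:
        base += 2
    if p.lives_alone:
        base += 2   # no caregiver at home
    return base

def within_target(actual_days: int, predicted_days: int, tolerance: float = 0.01) -> bool:
    # The reported management target: actual stays within 1% of the prediction.
    # For a ~20-day prediction, 1% is a fraction of a day, leaving essentially
    # no room for clinical judgment.
    return abs(actual_days - predicted_days) <= tolerance * predicted_days

patient = Patient(age=84, diagnosis="stroke", physical_function_score=35, lives_alone=True)
predicted = predict_covered_days(patient)
print(predicted)                                 # 28
print(within_target(predicted + 3, predicted))   # False: three extra days blow the target
```

Even this toy version shows the core problem: a 1% tolerance gives clinicians no meaningful slack around whatever the model predicts.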

The consequences?

A recent 54-page U.S. Senate report revealed that UnitedHealth’s post-acute care denial rate increased from 8.7% to 22.7% between 2019 and 2022. Over the same period, its skilled nursing facility denial rate increased ninefold. This profits-over-people model has had devastating effects.

Real-world impact :comet:

The report paints a grim picture of algorithmic cruelty. Consider the elderly woman found by her grandson after suffering a stroke. The algorithm granted her only 20 days of rehab, less than half the average time for severely impaired stroke patients. Another example is the 78-year-old legally blind man with failing heart and kidneys who was granted only 16 days of care after falling in the nursing home.

One patient nearing discharge after knee surgery was expected to learn how to “butt bump” up and down stairs because the algorithm deemed his time was up. Case managers who advocated for patients risked losing their jobs.

The lawsuits unfold :balance_scale:

In November 2023, a class action lawsuit was filed in Minnesota against UnitedHealth Group, UnitedHealthcare, and NaviHealth on behalf of the estates of Gene B. Lokken and Dale Henry Tetzloff.

Humana also adopted nH Predict, implementing the same system. This resulted in a separate class action lawsuit filed against them in December 2023.

The lawsuits allege that these insurance companies knowingly deployed an AI system with a high error rate (the complaints claim that roughly 90% of nH Predict denials are reversed when appealed), banking on the fact that few patients would appeal. As of January 2025, neither company has confirmed whether it has stopped using nH Predict.

Humana stated they use AI tools but emphasized that humans are always involved in decision-making.

Sidebar: Implementing Human Oversight in AI Systems

One of the most effective ways to address the issues raised by AI systems in healthcare, housing, and governance is to implement a robust framework for human oversight. Such a framework should include the following steps (a short code sketch of an override-and-audit mechanism follows the list):

Step 1: Establish Independent Review Boards

  • Create boards consisting of ethicists, domain experts, and community representatives.

Step 2: Conduct Regular Audits

  • Perform regular audits of AI algorithms, datasets, and decision-making processes.

Step 3: Develop Override Mechanisms

  • Implement clear and accessible mechanisms for human experts to override AI decisions.

Step 4: Ensure Transparency

  • Maintain transparency in AI systems, including their goals, data sources, algorithms, and decision-making logic.

Step 5: Provide Training and Education

  • Offer comprehensive training and education to personnel involved in AI systems.
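
As a rough illustration of Steps 2 and 3, here is a hypothetical sketch (not any vendor’s actual system) of an override mechanism: every automated decision is wrapped in a record that a human reviewer can overturn, with the reason captured in an audit trail that later reviews can examine.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    case_id: str
    ai_recommendation: str                 # e.g., "deny_coverage"
    final_decision: str                    # starts out equal to the AI recommendation
    overridden_by: Optional[str] = None
    override_reason: Optional[str] = None
    audit_log: list = field(default_factory=list)

    def log(self, message: str) -> None:
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {message}")

def override(decision: Decision, reviewer: str, new_decision: str, reason: str) -> None:
    # A human reviewer replaces the automated outcome and must record a reason,
    # so regular audits (Step 2) can see how often and why the model was overruled.
    decision.overridden_by = reviewer
    decision.override_reason = reason
    decision.final_decision = new_decision
    decision.log(f"override by {reviewer}: {new_decision} ({reason})")

d = Decision(case_id="A-1042", ai_recommendation="deny_coverage", final_decision="deny_coverage")
override(d, reviewer="rn_smith", new_decision="approve_coverage",
         reason="patient not yet safe for discharge per PT evaluation")
print(d.final_decision)   # approve_coverage
print(d.audit_log)
```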

Case #2: RealPage YieldStar Software

AI algorithms are increasingly used to set apartment rent prices.

YieldStar allegedly enables property managers to coordinate rental rates through shared data. Critics argue that the software provides a mechanism for algorithmic price coordination that has significantly impacted housing affordability.

How the YieldStar algorithm works :bar_chart:

RealPage’s YieldStar software gathers nonpublic data from property managers about their rental transactions, including effective rents, occupancy rates, and lease terms.

In 2023, the system aggregated data from millions of units. Each night, the algorithm analyzes the pooled competitor data to generate new price recommendations.

The software determines these recommendations through:

  • Calculating price elasticity using competitor data.

  • Establishing rent boundaries based on competitor pricing.

  • Recommending that some units be left vacant to maintain higher prices.

  • Discouraging individual price negotiations.

RealPage claims this helps landlords react to market conditions. However, plaintiffs and regulators allege that the system lets property managers coordinate pricing and push rents beyond what independent competition would support.
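
A deliberately oversimplified, hypothetical sketch (not RealPage’s actual algorithm) of how pooling nonpublic competitor data can become a shared price signal:

```python
from statistics import mean

# Hypothetical nonpublic data pooled from competing property managers:
# effective rents recently achieved for comparable one-bedroom units.
pooled_competitor_rents = {
    "property_a": [1510, 1525, 1540],
    "property_b": [1480, 1495],
    "property_c": [1550, 1565, 1570],
}

def recommend_rent(pooled: dict, push_ratio: float = 1.03) -> float:
    # Average effective rent across the pooled competitor data.
    market = mean(rent for rents in pooled.values() for rent in rents)
    # Recommend a price a few percent above the pooled average. Because every
    # participant prices off the same shared signal, rents tend to move up
    # together rather than being bid down by head-to-head competition.
    return market * push_ratio

print(round(recommend_rent(pooled_competitor_rents), 2))   # 1575.26, vs. a ~1529 pooled average
```

The bullets above, rent boundaries, withheld vacancies, and discouraged negotiations, would sit on top of a signal like this; the key point is that every participant steers by the same nonpublic data.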

Real-world impact :comet:

A 2022 ProPublica investigation found that buildings using RealPage’s software showed higher rent increases.

One RealPage-managed property’s rent for a one-bedroom apartment jumped 33% in a single year, while a nearby building not using the software raised rent only 3.9%.

RealPage claims its software helps landlords “outperform the market by 3 to 7 percent.” One company learned it could make more profit by operating at a lower occupancy rate with higher rents.
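
To see how that can pencil out, using illustrative numbers rather than figures from the filings: a 100-unit building at 97% occupancy and $1,500 average rent grosses about $145,500 a month, while the same building at 94% occupancy and $1,570 grosses about $147,580. Revenue goes up even though three more units sit empty.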

Critics have linked this systematic push for higher rents to record-breaking levels of homelessness.

The Biden administration published a post about the RealPage issue and its harmful effects.

The lawsuits unfold :balance_scale:

The U.S. Department of Justice (DOJ) began scrutinizing RealPage’s practices in late 2022. Multiple lawsuits alleging antitrust violations were also filed.

RealPage has stated that the DOJ closed its criminal investigation, but the DOJ has not publicly confirmed this. The company still faces civil litigation, including:

  • A federal class-action suit in Tennessee.

  • State-level lawsuits.

  • The DOJ’s civil antitrust lawsuit.

RealPage has launched a dedicated website to present their side. They argue their software benefits renters and that acceptance rates of their pricing recommendations are lower than alleged.

The DOJ has rejected those arguments.

Biden’s Deputy Attorney General Lisa Monaco stated, “Training a machine to break the law is still breaking the law.”

Case #3: The Dutch Tax Authority Algorithm

In 2013, the Dutch tax authority implemented an algorithm to create risk profiles for detecting fraud in childcare benefit applications.

The algorithm backstory :bar_chart:

The algorithm was developed in response to a €4 million benefits scam.

The algorithm:

  • Assigned each application a risk score based on a set of selection criteria.

  • Included a discriminatory “nationality flag” targeting people with foreign citizenship.

  • Applied the Pareto principle, assuming 80% of investigated cases were fraudulent.

  • Integrated with a secret blacklist system.

The system created a discriminatory feedback loop, associating minority status with fraud risk.
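
A deliberately simplified, hypothetical sketch of how such a feedback loop can arise (not the tax authority’s actual model): flagged cases get investigated, investigations are presumed to confirm fraud at the 80% rate, and whatever features drove the flag, including a nationality indicator, accumulate more weight with every round.

```python
import random

random.seed(0)

# Initial feature weights; the discriminatory "nationality flag" is one of them.
weights = {"foreign_nationality": 0.5, "low_income": 0.3, "missing_paperwork": 0.2}

def risk_score(applicant: dict) -> float:
    # Weighted sum over crude binary features.
    return sum(weights[k] * applicant[k] for k in weights)

def simulate_round(applicants: list, presumed_fraud_rate: float = 0.8):
    flagged = [a for a in applicants if risk_score(a) > 0.5]
    # The 80/20 presumption: most investigated cases are treated as fraud,
    # regardless of what an impartial review would have found.
    confirmed = [a for a in flagged if random.random() < presumed_fraud_rate]
    # Naive "retraining": features common among the presumed-fraud cases gain
    # weight, so the nationality flag becomes ever more predictive of "fraud".
    for k in weights:
        share = sum(a[k] for a in confirmed) / max(len(confirmed), 1)
        weights[k] += 0.1 * share
    return len(flagged), len(confirmed)

applicants = [{"foreign_nationality": random.randint(0, 1),
               "low_income": random.randint(0, 1),
               "missing_paperwork": random.randint(0, 1)} for _ in range(1000)]

for round_no in range(1, 4):
    flagged, confirmed = simulate_round(applicants)
    print(round_no, flagged, confirmed, {k: round(v, 2) for k, v in weights.items()})
```

Each pass flags more people carrying the nationality indicator than the last, which is exactly the dynamic the investigations described.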

Real-world impact :comet:

The algorithm’s decision-making devastated lives across the Netherlands. Chermaine Leysner was falsely accused of fraud and ordered to repay years of benefits, which led to financial hardship and depression.

Tens of thousands of families were wrongly accused of fraud, with errors falling disproportionately on minority communities. More than a thousand children were placed in foster care, and some victims died by suicide.

Legal and regulatory aftermath :balance_scale:

In 2020, investigations revealed bias in the system, and in January 2021 the entire Dutch cabinet resigned. The Dutch Data Protection Authority later fined the tax administration €2.75 million for the unlawful, discriminatory processing of applicants’ nationality data.

The government has:

  • Promised €30,000 compensation per affected family.

  • Created a new algorithm oversight authority.

  • Admitted to institutional racism.

  • Implemented new safeguards for AI systems.

A 2024 report found that authorities continued to employ discriminatory algorithms throughout 2023.

What are we to make of all this? :thinking:

There’s a lot of focus on the future threat of AI, from job displacement to the destruction of humanity.

Meanwhile, AI systems are already causing damage at scale, from healthcare denials to inflated rental prices.

We need to make better decisions within our organizations and re-think our relationship with technology. If you’re considering adopting a new AI tool, ask yourself how it will impact other humans and whether you’d want to be on the receiving end.


It’s time to shift our attention to the present and address the harm AI systems are already causing, with ethical considerations and transparency at the center of how we develop and deploy them.