Partner With Synergy – Free Your Firm To Focus On What It Does Best™

AI Is Coming for Your Client’s Recovery. Here Is How to Fight Back!

Insurance carriers are spending billions on artificial intelligence. The stated goal is faster claims processing and fraud detection. The practical result is something different. AI is being used to systematically undervalue personal injury claims and pressure injured people into accepting less than they are owed. If you handle PI cases and you are not paying attention to this, you are already behind.

The Problem Is Not New. The Scale Is.

For decades, insurance companies have used software tools to assign values to bodily injury claims. The most well-known is Colossus, a program that converts medical data into severity points and spits out a settlement range. Over 70 percent of insurers in the United States use Colossus or similar software to assess bodily injury claims. The manufacturer’s own sales literature once promoted that the program would reduce bodily injury claims payouts by up to 20 percent. That was the old playbook. The new one is worse.

Today, carriers are deploying machine learning models that go far beyond Colossus. These systems analyze photos of vehicle damage through mobile apps, cross-reference police reports with medical records, and generate settlement offers before a human adjuster reviews the file. A 2025 McKinsey report found that insurers using AI-driven systems have reduced total claims processing time by 70 percent. Faster processing, though, does not mean fairer outcomes.

AI models are trained on historical settlement data. If an insurer paid lower settlements to certain demographics or geographic areas in the past, those patterns get baked into the algorithm. The system reproduces the old biases while appearing objective on the surface. And because no one outside the carrier knows how these algorithms work, claimants and their lawyers are left in the dark about why a particular number was generated.

How Carriers Use AI Against Your Clients

Here is what is happening on the ground. Insurance AI systems flag claims for denial or reduced payment based on narrow criteria that do not account for individual circumstances. The software assigns weights to specific words in medical records. Terms like “conservative care,” “delayed complaint,” or “pre-existing condition” trigger automatic deductions, even when those terms are medically appropriate and say nothing about the legitimacy of the claim.

The systems fail most in the area that matters most to your clients: noneconomic damages. Pain, suffering, emotional distress, loss of enjoyment of life. These do not translate easily into data points. The algorithm does not know your client. It does not understand how their injuries have changed their ability to work, care for their family, or live their daily life. It reduces a deeply personal experience to a statistical average.

The carriers know that most claimants will not fight an AI-generated lowball offer. They count on it. When the system is designed to produce low offers at scale, and almost nobody pushes back, the math is simple: the insurer profits.

What Trial Lawyers Need to Do Right Now

You do not need to become an AI engineer to fight back. You need to change how you build cases and how you present evidence, because the other side already has.

Get the documentation right from day one: AI systems rely on specific diagnostic codes and treatment descriptions. If your client’s pain, functional limitations, and daily impacts are not clearly documented in their medical records with proper terminology, the algorithm treats the injury as if it did not exist. Work with your client’s treating physicians to make sure the records reflect the full picture, not shorthand that a computer will discount.

Demand transparency in discovery: Attorneys across the country are beginning to request algorithmic transparency during the discovery process. Ask for the specific AI tools used to evaluate the claim. Request the data inputs, the weights assigned, and the output. Ask whether a human adjuster reviewed the file before the offer was generated, or whether the number came straight from software. The more lawyers who ask, the more precedent we build.

Hire forensic experts when warranted: In complex cases, consider consulting experts who specialize in evaluating AI-driven claim assessments. They review how the system arrived at a number and identify where the algorithm failed to account for your client’s specific circumstances. This is similar to hiring an accident reconstructionist or an economist. It is one more expert in your toolkit.

Prepare for trial: Colossus and similar systems factor in whether the plaintiff’s attorney has a track record of going to trial. If the algorithm determines the attorney on the file is unlikely to litigate, it generates a lower range. One of the most effective things you can do for your client is to make clear, through your actions and your track record, that you are prepared to put the case in front of a jury. Juries do not care about software-generated valuations. They care about real human suffering.

Building Internal AI Policies for Your Firm

While you fight AI on the carrier side, you should also think about how your own firm uses AI. Technology has clear benefits for case management, document review, and research. But there are real risks if you adopt AI tools without a framework for responsible use.

Start with a simple question: does the AI tool serve the client’s interest, or does it create shortcuts that compromise quality? If you are using AI to draft demand letters, review medical records, or identify case patterns, make sure there is a human in the loop who reviews every output. AI tools make errors. They produce confident-sounding answers that are factually wrong. In a profession where mistakes lead to malpractice exposure, that is a serious concern.

Create a written internal policy for AI use in your firm. Spell out which tools are approved, who reviews the output, and how client data is protected. Make sure your team understands that AI is an assistant, not a decision-maker. The ethical obligation to provide competent representation still rests with you.

The Bottom Line

AI in personal injury is not going away. The global market for AI in insurance is projected to grow from roughly $15 billion in 2025 to over $246 billion by 2035. Claims processing is one of the largest use cases fueling that growth.

The carriers will keep investing in tools designed to reduce what they pay. Your job is to make sure those tools do not succeed at the expense of your clients.

That means better documentation. Smarter discovery. Expert challenges to algorithmic valuations. A willingness to try cases. And a clear-eyed understanding of how the other side is using technology against the people you represent.

The trial lawyers who adapt to this reality will get better results for their clients. The ones who ignore it will find themselves accepting settlement offers generated by a machine that was programmed to pay as little as possible.

Why Synergy Is the Answer to Help You Scale

Synergy exists to help firms confront the operational realities driven by technology and scaling pressure. By removing the administrative burdens of lien identification, verification, and resolution from your staff, we help you strengthen your practice’s capacity for high-value legal work and sustainable growth.

🔗 Want more insights like this?

If you’re a personal injury lawyer ready to scale, streamline, and step into your role as CEO, let’s talk. Join the Peak Practice Community, and learn how Synergy can help you eliminate settlement bottlenecks, resolve complex liens, and maximize recoveries. Learn more here: https://partnerwithsynergy.com/peak-practice/

If you want to grow and scale your law firm more effectively, consider partnering with Synergy for lien resolution. Learn more at: https://partnerwithsynergy.com/liens/

Subscribe