In a previous post, I wrote about how insurance carriers are using AI to value your clients’ personal injury claims. Knowing the threat exists is one thing. Having the tools to fight back is another. So this is the tactical follow-up. These are five specific discovery requests you should be including in every case where you suspect an algorithm played a role in how the carrier valued your client’s claim. I’ve been discussing these approaches on Trial Lawyer View, and the feedback from lawyers who have used them has been encouraging.
A federal court in Minnesota recently validated this entire approach. In The Estate of Gene B. Lokken v. UnitedHealth Group, Inc., No. 23-CV-3514 (D. Minn.), the court granted a motion to compel discovery into an insurer’s use of an AI program to evaluate claims. The court found the plaintiffs were entitled to documents showing how the program works, its development goals, and whether the AI was designed to replace physician decision-making. That ruling changes the calculus for every PI lawyer in the country.
Let me walk you through the five requests and why each one matters.
1. The Algorithm Itself: Request the Claims Valuation Software and Its Logic
Your first request should target the software or AI tool the carrier used to evaluate your client’s claim. Request all documents, manuals, training materials, and technical specifications related to any software, algorithm, or artificial intelligence system used to evaluate, value, or make recommendations on bodily injury claims, including Colossus, ClaimIQ, Guidewire, or any proprietary system.
Why this matters: Over 70% of major carriers use Colossus or similar claim valuation software. These programs convert your client’s medical records into numerical “severity points” and spit out a settlement range. The adjuster’s hands are often tied to whatever number the algorithm produces. You need to know what system was used, how the system assigns value, and what rules govern the output.
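To make the point-system mechanics concrete, here is a minimal sketch of how a point-based valuation model works. Every value below is invented for illustration: the injury codes, point weights, dollar multiplier, and range spread are hypothetical, not actual Colossus or ClaimIQ parameters.

```python
# Hypothetical sketch of a point-based claim valuation model.
# All codes, point values, and multipliers are invented for illustration;
# real systems like Colossus use proprietary, undisclosed values.

# Severity points the system might assign to each entered input
SEVERITY_POINTS = {
    "cervical_strain": 40,
    "disc_herniation": 120,
    "physical_therapy_12wk": 25,
}

DOLLARS_PER_POINT = 55.0   # carrier-set multiplier for the jurisdiction
RANGE_SPREAD = 0.15        # +/- 15% band around the midpoint

def value_claim(inputs):
    """Convert entered injury/treatment codes into a settlement range."""
    points = sum(SEVERITY_POINTS.get(code, 0) for code in inputs)
    midpoint = points * DOLLARS_PER_POINT
    return (midpoint * (1 - RANGE_SPREAD), midpoint * (1 + RANGE_SPREAD))

low, high = value_claim(["cervical_strain", "disc_herniation",
                         "physical_therapy_12wk"])
# 185 points x $55 = $10,175 midpoint -> roughly $8,649 to $11,701
```

Note what happens if the adjuster omits an input: drop "disc_herniation" from the list and the range collapses, which is exactly why request #2 below targets the data actually entered for your client's claim.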
Carriers will resist this request. They will claim the software is proprietary and constitutes a trade secret. Push back hard. The Lokken court rejected similar objections and ordered production of documents related to AI development goals and function. You are not asking for their source code. You are asking how decisions about your client’s case were made. That is squarely within the scope of discovery.
2. The Inputs: Request All Data Entered Into the System for Your Client’s Claim
Request production of all data, codes, classifications, severity ratings, value drivers, and inputs entered into any claims valuation software in connection with the evaluation of claimant’s bodily injury claim, including all screen captures, printouts, reports, and output generated by the system.
Why this matters: The output of these systems is only as good as what goes in. Adjusters enter ICD-10 diagnostic codes, treatment types, and duration data. They also enter subjective assessments about things like “duties under duress,” which is Colossus terminology for how your client’s daily life has been affected. If the adjuster fails to input a symptom or undervalues a diagnosis, the algorithm produces a lower number. That lower number becomes the carrier’s settlement authority.
This request exposes whether the adjuster accurately represented your client’s injuries to the system. If the adjuster left out key information, you now have evidence of bad faith.
3. The Adjuster’s Authority: Request Documents Showing the Relationship Between AI Output and Settlement Authority
Request all documents, policies, procedures, memoranda, and training materials that describe the relationship between the output of any claims valuation software and the settlement authority granted to the adjuster handling claimant’s claim, including any policies regarding whether and to what extent the adjuster is permitted to deviate from the software’s recommended range.
Why this matters: The insurance industry tells anyone who will listen that Colossus and similar tools are advisory. They say the software output is a starting point, and adjusters have discretion to go higher when the facts warrant. In practice, that is rarely true. A former Farmers Insurance employee who became a consultant for plaintiffs’ lawyers has estimated that carriers save 15% to 30% on injury claim payouts by using these systems. Those savings only happen when adjusters follow the algorithm.
If you obtain internal policies showing that the adjuster had little or no authority to exceed the software’s range, you have a strong bad faith argument. You are proving that the carrier did not individually evaluate your client’s claim on its merits. Instead, the carrier delegated that evaluation to a machine and locked the adjuster into whatever the machine produced.
4. The Calibration Data: Request How the Carrier Tuned the Algorithm’s Settlement Values
Request all documents related to the calibration, configuration, updating, or modification of any claims valuation software used by carrier, including all decisions to adjust severity point values, dollar-per-point multipliers, or settlement ranges for the jurisdiction and time period applicable to claimant’s claim.
Why this matters: These systems are not static. Carriers periodically adjust the dollar values assigned to each severity point. They calibrate based on local settlement data, jury verdicts, and their own loss experience.
Here is the concern. If a carrier calibrates the algorithm to reflect below-market values, every claim processed through that system gets undervalued. This is not an accident. This is a business decision to systematically suppress claim values across an entire book of business. Your discovery request forces the carrier to show you the numbers behind the numbers. If the calibration data shows that the carrier set its multipliers below the range of recent jury verdicts in your jurisdiction, you have evidence that the system was designed to produce lowball results.
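The arithmetic of calibration is worth seeing directly. Here is a hedged sketch, with invented point totals and multipliers, of how one small change to the dollar-per-point value ripples across an entire book of claims:

```python
# Hypothetical sketch: recalibrating the dollar-per-point multiplier
# shifts every claim in the book by the same percentage.
# All numbers are invented for illustration.

claims_points = [120, 185, 260, 310, 95]   # severity points per claim

def book_payout(points_list, dollars_per_point):
    """Total payouts across a book of claims at a given multiplier."""
    return sum(p * dollars_per_point for p in points_list)

market = book_payout(claims_points, 55.0)  # multiplier tuned to verdicts
tuned = book_payout(claims_points, 44.0)   # multiplier set 20% lower

savings = 1 - tuned / market
# A 20% cut to the multiplier suppresses every valuation by 20% --
# no individual claim review required, and squarely in the 15-30%
# savings range former insiders have described.
```

This is why the calibration documents matter: a single multiplier decision, made once at the corporate level, undervalues every claim that flows through the system afterward.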
5. The Oversight Record: Request All Policies and Audits Governing AI Use in Claims
Request all documents related to carrier’s policies, procedures, audits, or oversight of the use of artificial intelligence or claims valuation software in the evaluation of bodily injury claims, including any internal or government investigations into the accuracy, fairness, or bias of such systems, and any employee training materials related to the use of AI in the claims process.
Why this matters: The Lokken court specifically allowed discovery into both the insurer’s oversight of AI and government investigations into the insurer’s use of AI. This is important because insurers have a duty to fairly evaluate every claim on its individual merits. If a carrier adopted AI and failed to audit the system for accuracy or bias, that failure is evidence that the carrier did not act in good faith.
There is also a growing body of regulatory interest in this area. Several state insurance departments have begun examining whether AI-driven claims processes comply with consumer protection laws. If the carrier has been the subject of a regulatory inquiry, you want those documents. They tell you what the regulator was concerned about, and they give you a roadmap for your own bad faith case.
Putting These Requests to Work
Start including these five categories in your standard discovery template for every PI case against a major carrier. Do not wait until you suspect AI involvement. Assume the algorithm is there. More than 70% of major carriers use some form of claims valuation software. The question is not whether a computer played a role. The question is how much of a role the computer played.
When the carrier objects, and they will, cite the Lokken decision. Point to the court’s finding that plaintiffs are entitled to know how the AI works, what its development goals were, and whether the system was designed to replace human decision-making. Frame your argument around the carrier’s obligation to evaluate each claim on its individual merits. If the carrier outsourced that obligation to a machine, you have a right to know.
Deposition strategy matters here too. When you depose the adjuster, ask whether they used any software to evaluate the claim. Ask what data they entered. Ask whether they had authority to exceed the software’s range. Ask whether they did exceed the range. These questions build the record you need to make your bad faith case.
The Bigger Picture for Your Practice
This is not about being anti-technology. AI is going to play an increasing role in claims handling, and that is not going to change. The issue is transparency and accountability. When a carrier uses a machine to value your client’s pain and suffering, your client has a right to know. And you, as their lawyer, have an obligation to find out.
I have spent over two decades working on the resolution side of catastrophic personal injury cases. I have seen how carriers evaluate claims from the inside. What I know is this: the carriers who are investing in AI are not doing so to be more fair. They are doing so to be more profitable.
The carriers are not going to stop using AI. But they should expect that you are going to start asking questions about how they use it.
Why Synergy is the Answer to Help You Scale
Synergy exists to help firms confront the operational realities being driven by technology and scaling pressure. By taking the administrative burdens of lien identification, verification, and resolution off your staff, we help you strengthen your practice's capacity for high-value legal work and sustainable growth.
🔗 Want more insights like this?
If you’re a personal injury lawyer ready to scale, streamline, and step into your role as CEO, let’s talk. Join the Peak Practice Community, and learn how Synergy can help you eliminate settlement bottlenecks, resolve complex liens, and maximize recoveries. Learn more here: https://partnerwithsynergy.com/peak-practice/
If you want to grow and scale your law firm more effectively, consider partnering with Synergy for lien resolution. Learn more at: https://partnerwithsynergy.com/liens/