AI platform agreements have become the most commercially and legally complex contracts in enterprise technology. Usage-based pricing creates unpredictable cost exposure. IP ownership clauses determine who benefits from the models your data trains. Data governance provisions can expose you to regulatory liability. Model performance SLAs are frequently absent or unenforceable. Most buyers sign these contracts without independent review.
A Fortune 500 global insurance group engaged us when their legal and procurement teams, reviewing an AI platform agreement from a major US-based AI vendor, identified contract language that would have granted the vendor broad rights to use the insurer's proprietary claims data — including customer data — for model training across other clients. The legal team had flagged it; the procurement team had no framework for evaluating or renegotiating it. The CIO called us.
Over thirteen weeks, we did four things. First, we quantified the commercial exposure in the vendor's pricing model: an uncapped token-based consumption structure that we projected would reach $8.4M annually at the insurer's intended usage levels, $3.2M above the $5.2M "annual commitment" headline price. Second, we negotiated the data use provisions down from a broad general licence to a narrowly scoped agreement prohibiting third-party model training on the insurer's data. Third, we secured enforceable model performance SLAs with financial remedies for underperformance. Fourth, we restructured the pricing to a capped consumption model with volume discounts. Total outcome: $7.3M in savings over three years, IP rights protected, and a contract the insurer's legal team described as the most rigorously buyer-protective AI agreement they had seen.
This was not a standard commercial negotiation — it was a clause-by-clause contract restructure. Every provision below was present in the vendor's initial agreement. Every provision below was changed.
The initial clause granted the vendor a perpetual, irrevocable licence to use all data processed through the platform — including the insurer's proprietary claims data — for model training and product improvement across the vendor's entire customer base.
We renegotiated it to a narrow licence permitting data use solely for service delivery to this client, with an explicit prohibition on third-party model training and a data deletion obligation upon contract termination.
The initial pricing structure was based on token consumption with no cap — creating open-ended financial exposure as AI usage scales. At the insurer's projected usage volumes, the annual cost would have reached $8.4M versus the $5.2M headline commitment.
Restructured to an annual committed spend with a defined consumption cap. Usage above the cap triggers a renegotiation right rather than automatic additional billing. Volume discounts apply as consumption increases across defined tiers.
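The restructured commercial terms can be read as a simple cost function: usage is billed through discounted tiers up to a hard cap, and consumption beyond the cap is a renegotiation trigger rather than a billing event. A minimal sketch — the tier boundaries, rates, and cap below are hypothetical illustrations, not the negotiated figures:

```python
# Illustrative capped, tiered consumption pricing. Tier boundaries, rates,
# and the cap are hypothetical examples, not the negotiated figures.

TIERS = [                 # (tier ceiling, millions of tokens; $ per million tokens)
    (1_000_000, 2.00),    # first tranche at the full rate
    (3_000_000, 1.50),    # next tranche at a volume discount
    (5_000_000, 1.25),    # final tranche, up to the cap
]
CAP_TOKENS_M = 5_000_000  # usage beyond the cap triggers a renegotiation
                          # right rather than automatic additional billing

def annual_cost(tokens_m: float) -> tuple[float, bool]:
    """Return (cost in $, over_cap). Billing stops at the cap."""
    billable = min(tokens_m, CAP_TOKENS_M)
    cost, prev = 0.0, 0
    for ceiling, rate in TIERS:
        band = max(0.0, min(billable, ceiling) - prev)
        cost += band * rate
        prev = ceiling
    return cost, tokens_m > CAP_TOKENS_M
```

Under these assumed tiers, 4.2 trillion tokens a year (4,200,000 million) bills at $6.5M with no overage; at 6 trillion tokens, billing stops at the $7.5M cap and the renegotiation right is triggered instead of open-ended charges.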
The initial agreement contained no performance standards for the AI models — no accuracy benchmarks, no hallucination rate caps, no response latency commitments. The insurer was buying AI output with no contractual standard for what "working" meant.
We secured quantified performance benchmarks for accuracy, latency, and availability, with a tiered credit mechanism for underperformance and a termination right if performance remains below threshold for 30 consecutive days.
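A tiered credit mechanism of this kind reduces to a lookup from measured performance to a credit percentage. A sketch using availability as the example metric — the thresholds and credit levels are hypothetical illustrations, not the negotiated terms:

```python
# Illustrative tiered service-credit schedule for an AI performance SLA.
# Thresholds and credit percentages are hypothetical, not the actual terms.

CREDIT_TIERS = [   # (monthly availability floor %, credit as % of monthly fee)
    (99.9, 0.0),   # met or exceeded the committed standard: no credit
    (99.5, 5.0),
    (99.0, 10.0),
    (0.0, 25.0),   # severe underperformance
]

def monthly_credit_pct(measured_availability: float) -> float:
    """Return the credit owed, as a percentage of the monthly fee."""
    for floor, credit in CREDIT_TIERS:
        if measured_availability >= floor:
            return credit
    return CREDIT_TIERS[-1][1]  # unreachable with a 0.0 floor; kept for safety
```

The same structure applies to accuracy benchmarks or latency percentiles; the contractual work is defining the measurement methodology each metric is judged against.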
The vendor retained unlimited rights to modify, replace, or retire the AI models the insurer's workflows depended on, with 30 days' notice only. A model change could disrupt regulated insurance processes with no contractual protection.
We secured a 180-day advance notice requirement for material model changes affecting designated critical workflows, with a parallel-running period and the right to freeze the previous model version during the transition window.
AI vendors use consumption-based pricing models that create structurally unpredictable cost exposure. Our approach is to model actual usage patterns, project consumption under realistic scenarios, identify the gap between headline commitment pricing and projected actual cost, and use that gap as the basis for a restructured pricing negotiation. In this engagement, the gap was $3.2M annually — the most powerful commercial leverage point we had. Vendors will accept capped consumption models and volume discounts when the buyer can demonstrate the uncapped structure creates unacceptable financial exposure.
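The gap analysis described above reduces to a straightforward consumption projection. The per-token rate and annual volume below are illustrative assumptions chosen so the sketch reproduces the engagement's stated totals; the actual rate card and usage model were specific to the vendor and client:

```python
# Sketch of the consumption-gap analysis. The rate and volume are
# illustrative assumptions; only the resulting totals match the case.

PRICE_PER_M_TOKENS = 2.00        # assumed uncapped rate, $ per million tokens
PROJECTED_TOKENS_M = 4_200_000   # assumed annual volume, millions of tokens
HEADLINE_COMMIT = 5_200_000      # the vendor's "annual commitment" price, $

def uncapped_annual_cost(tokens_m: float, rate: float) -> float:
    """Uncapped consumption billing: every token billed at the flat rate."""
    return tokens_m * rate

projected = uncapped_annual_cost(PROJECTED_TOKENS_M, PRICE_PER_M_TOKENS)
exposure_gap = projected - HEADLINE_COMMIT

print(f"Projected annual cost: ${projected:,.0f}")     # $8,400,000
print(f"Gap vs headline:       ${exposure_gap:,.0f}")  # $3,200,000
```

The gap, not the headline price, is the negotiation anchor: it quantifies exactly how much financial exposure the uncapped structure hides.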
AI platform IP and data provisions are the most legally consequential elements of any AI contract — and the most frequently accepted without negotiation. The insurer's initial agreement would have granted the vendor training rights over proprietary claims data worth hundreds of millions of dollars in competitive intelligence. Data use restrictions, training data exclusions, and deletion obligations are negotiable. AI vendors accept these restrictions when the buyer has the scale and the legal clarity to demand them — and when they're represented by advisors who understand both the technical and legal dimensions.
Enterprise AI contracts without performance SLAs are a form of commercial malpractice. You are buying an output — the output must be defined, measured, and remedied when it fails to meet the defined standard. In regulated industries including insurance, healthcare, and financial services, unenforceable AI performance standards create regulatory exposure as well as commercial risk. We negotiate performance SLAs with specific metrics, measurement methodologies, and financial remedies as standard provisions in every AI engagement.
AI platforms create operational dependencies that can make exit extremely difficult — model dependency, proprietary fine-tuning, integration depth. We negotiate data portability rights, export format standards, and transition assistance obligations as standard AI contract provisions. In this engagement, we also negotiated a right to retain a copy of all fine-tuned model weights developed using the insurer's data — a provision that preserves the insurer's AI investment if the vendor relationship ends.
"Our legal team had flagged the data clause, but we didn't know what to do with it. The level of commercial and technical sophistication The Negotiation Experts brought to this contract — the pricing model analysis, the SLA framework, the model stability provisions — was unlike anything we had seen in a contract advisory engagement. The agreement we signed was unrecognisable from the one we started with."
— Chief Information Officer, Fortune 500 Global Insurance Group
Traditional software contracts deal with defined functionality. AI contracts deal with probabilistic outputs, data that trains on data, and models that change. Standard enterprise procurement frameworks are not designed for this. An AI-specific contract review — by advisors who understand both the technology and the commercial model — is not optional for enterprise buyers.
Every AI platform vendor will tell you that consumption-based pricing aligns incentives. What they don't say is that usage grows faster than any projection — and that the financial exposure from uncapped consumption can dwarf the headline committed spend. Model the realistic usage curve before you sign.
The insurance company's claims data — decades of actuarial intelligence — is extraordinarily valuable as AI training data. The initial contract would have given the vendor access to that data at no cost, while billing the insurer for the privilege. Understanding the data value dynamic is essential to AI contract negotiation.
Our guide to the fifteen most dangerous clauses in AI platform agreements — IP, data rights, performance SLAs, liability, and exit provisions. Essential reading before signing any AI contract.
AI contracts require a different lens than traditional software agreements. We identify the commercial, legal, and technical risks — and negotiate them out — before you sign.
Tell us about the AI platform you're evaluating or renewing and we will provide an initial read on the contract risks and commercial opportunity.