Table of Contents
- Why Enterprise AI Vendor Selection Is Uniquely Difficult
- The Five Dimensions of AI Vendor Evaluation
- AI Vendor Scoring Matrix
- RFP Process for Enterprise AI Vendors
- AI Proof of Concept: Running One That Actually Works
- Vendor-by-Vendor Selection Guide
- AI Procurement Governance: Organizational Structure
- Red Flags That Should Kill an AI Deal
Why Enterprise AI Vendor Selection Is Uniquely Difficult
Enterprise AI vendor selection is fundamentally different from traditional software procurement, and many organizations are learning this the hard way.
In 2024 and 2025, thousands of enterprises ran what they believed were standard software vendor selection processes for AI—18-month evaluation cycles, rigid RFP templates, multi-vendor shootouts with fixed criteria. By the time selection was complete, the market had moved on. Model capabilities had advanced dramatically. Pricing models had shifted. Vendors had acquired or been acquired. New players had emerged. The winner chosen 18 months ago was no longer optimal.
This is not exaggeration. The AI vendor landscape changes faster than the pace of enterprise procurement. OpenAI released GPT-4 (March 2023), GPT-4 with vision (September 2023), GPT-4 Turbo (November 2023), and GPT-4o (May 2024) in rapid succession, each with meaningfully different performance characteristics and pricing. Google released Gemini, then Gemini 1.5, and adjusted free-tier terms along the way. Anthropic released Claude 2, then Claude 3, then Claude 3.5, each with different context windows and pricing. New vendors (Mistral, together.ai, Replicate, Modal) emerged with compelling alternatives.
The traditional enterprise procurement model—long evaluation, fixed scope, sealed bid, winner-takes-all contract—is broken for AI.
Here are the core reasons why AI vendor selection is harder than it looks:
- Model performance is hard to benchmark. Standard benchmarks (MMLU, HumanEval, HellaSwag) don't measure real-world performance on your use cases. Model A scores higher on academic benchmarks but performs worse on your specific document classification task because your data is different. Vendor claims about model performance are often based on uncontrolled conditions or aren't directly comparable across vendors.
- Vendors pivot pricing mid-cycle. You select a vendor based on per-token pricing of $0.01 / 1K tokens. Six months into your contract, they announce pricing changes. Sometimes the contract allows it (most vendors reserve the right to change pricing). Sometimes it catches you off-guard. Either way, you've already built on their platform and have little leverage.
- The market is moving faster than your evaluation cycle. By the time your RFP is approved, sent, vendors respond, you evaluate, and you negotiate, 9-12 months have passed. Meaningful capability changes have happened. A vendor you ranked third might now be first.
- Contractual terms vary wildly. There is no "standard" AI vendor contract. OpenAI's terms are completely different from Azure OpenAI's, which are completely different from Anthropic's. Data residency options vary. IP ownership defaults vary. Liability caps vary. Training data rights vary. Most vendor standard terms are not enterprise-ready.
- Data residency and security carry material risk. Your selection of an AI vendor is also a selection of where your data lives, who has access to it, and whether it can be used to train future models. This is not a negotiable-at-the-margins feature. It's core to the decision. If you select OpenAI and discover later that your data can't stay in your region, you have a problem.
- IP contamination risk is real. If you train a model on your proprietary data and that model later incorporates that training data into a general-purpose release, you've lost your competitive advantage. Vendors have different approaches to this. Some explicitly prevent it. Some don't address it.
- Switching costs are extremely high. Once you're committed to an AI vendor and have built applications on their API, replatforming is expensive. Models have different interfaces, different behaviors, different accuracy profiles. Your prompts don't transfer cleanly. Your integrations need rewriting. Switching costs can be 2-3x the cost of initial implementation.
Because of these dynamics, the goal of your AI vendor selection process should not be to find "the best vendor" and lock in for 3-5 years. Instead, your goal should be to select a vendor that is strong enough to go live quickly, understand what you actually need through real use, and build flexibility to evolve your vendor strategy as the market evolves.
The Five Dimensions of AI Vendor Evaluation
When evaluating enterprise AI vendors, assess across five distinct dimensions. Each carries meaningful weight in the decision (the scoring matrix below assigns specific weights), and each requires different evaluation methods.
1. Technical Capability & Model Performance
This is the most obvious dimension, and the one most vendors will try to dominate the conversation with. Model performance matters, but it's only one piece.
What to evaluate: Can the vendor's models do what you need them to do, at the accuracy level you require, with the latency your use cases demand? Does the vendor have models specialized for your use case (domain-specific models), or are you constrained to general-purpose models?
Key questions to ask vendors:
- What is your model's accuracy on a test dataset that mimics our production data? (Not public benchmarks—your data.)
- How frequently do you release new model versions, and what is your support policy for deprecated models?
- What is the context window (maximum input length) your current model supports, and can it be extended?
- Can you commit to performance SLAs (e.g., 95% of requests answered within 2 seconds)?
- Do you offer fine-tuning? If so, at what cost, and who owns the tuned model?
- What specialized models do you offer (vision, audio, multimodal), and are they available in all regions where we operate?
2. Commercial & Pricing Model
AI vendor pricing is rapidly consolidating around a few models: per-token (OpenAI, Anthropic, Google), per-request (some vendors), flat monthly subscriptions (a few), and usage-based pricing with caps. Each has different cost implications at scale.
What to evaluate: What will this actually cost at your expected usage volumes? Can you predict costs? Are there price escalation clauses? Volume discounts? Committed-use discounts?
Key questions to ask vendors:
- What is your pricing model (per-token, per-request, flat-fee, hybrid), and how does it scale with our usage?
- What volume discounts do you offer, and at what volume thresholds do they apply?
- Are there committed-use discounts if we guarantee annual spend?
- What happens if our usage spikes 10x in a month? Are we capped, charged unlimited, or renegotiated?
- How frequently do you change your pricing, and can you commit to pricing stability for 12 months?
- Do you offer tiered pricing across model generations (e.g., cheaper rates for older or smaller models)?
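The pricing questions above ultimately reduce to one number: projected spend at your expected volumes. A minimal sketch of that arithmetic follows; all prices and volumes are hypothetical placeholders, so substitute your vendor's current rate card and your own traffic estimates.

```python
# Rough per-token cost projection. All prices and volumes below are
# hypothetical placeholders -- substitute the vendor's actual rate card.

def monthly_cost(requests_per_month, avg_input_tokens, avg_output_tokens,
                 input_price_per_1k, output_price_per_1k):
    """Estimate monthly API spend from token volumes and per-1K-token prices."""
    input_cost = requests_per_month * avg_input_tokens / 1000 * input_price_per_1k
    output_cost = requests_per_month * avg_output_tokens / 1000 * output_price_per_1k
    return input_cost + output_cost

# Example: 2M requests/month, 800 input tokens and 300 output tokens per
# request, at a hypothetical $0.005 / 1K input and $0.015 / 1K output.
baseline = monthly_cost(2_000_000, 800, 300, 0.005, 0.015)
spike = monthly_cost(20_000_000, 800, 300, 0.005, 0.015)  # the 10x-spike scenario
print(f"baseline: ${baseline:,.0f}/month, 10x spike: ${spike:,.0f}/month")
```

Running the 10x-spike scenario through the same function makes the "what happens if usage spikes" question concrete before you ask the vendor about caps.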
3. Data Protection & Privacy
This is non-negotiable. Where does your data go? Who can see it? Can it be used to train future models? What is your data residency commitment?
What to evaluate: Does the vendor have controls that match your regulatory requirements (GDPR, HIPAA, SOX, CCPA)? Can they commit to not using your data for model training? Can they guarantee data residency?
Key questions to ask vendors:
- Do you use customer data to train or improve future models, and if so, can you disable this for our account?
- What is your data residency commitment, and can we specify which region our data is processed in?
- Can you commit to deleting our data upon request, and what is the timeline for deletion?
- Do you have sub-processors that handle our data, and can you provide the list?
- What encryption do you use (in transit, at rest)?
- Can you comply with SOC 2 Type II, ISO 27001, HIPAA, or other compliance frameworks we require?
4. Contract Terms & Flexibility
This is where deals either happen or don't. Vendor standard terms are rarely enterprise-ready.
What to evaluate: Can the vendor negotiate the core contract terms that matter to you: liability, indemnification, data ownership, model rollback rights, termination, and SLAs?
Key questions to ask vendors:
- What is your standard liability cap, and can it be negotiated?
- Do you indemnify us if your model generates output that infringes third-party IP?
- If you roll out a new model version, can we stay on the previous version for 12 months?
- Can we terminate the contract if you acquire another vendor, change your data privacy policy, or change pricing?
- Who owns outputs generated by your model when trained on our data?
- What SLAs do you offer (uptime, latency, availability)?
5. Vendor Stability & Roadmap
The AI vendor landscape is volatile. Some vendors will not exist in 3 years. Some will be acquired and product strategy will change. Some will pivot away from the use case you need them for.
What to evaluate: Is the vendor financially stable? What is their product roadmap, and does it align with your needs? Are they likely to exist in 3 years?
Key questions to ask vendors:
- What is your funding status, and what is your runway? Can you stay independent if growth slows?
- What is your 18-month product roadmap, and how does it align with our use cases?
- If you're acquired, what are the terms of the acquisition? Who will own the IP?
- What is your customer concentration? Are 10% of your customers >50% of your revenue? (High concentration = higher risk.)
- How many enterprise customers do you currently have, and what is your enterprise retention rate?
AI Vendor Scoring Matrix
Here is a practical scoring framework you can use to evaluate vendors quantitatively:
| Category | Weight | Criteria | Scoring Method | Max Points |
|---|---|---|---|---|
| Technical Capability | 25% | Model accuracy on your use case; latency; context window; specialized model availability; fine-tuning support | 0-10 scale (PoC results + benchmark comparison) | 25 |
| Commercial Terms | 20% | Pricing predictability; volume discounts; committed-use discounts; price stability commitment; cost at 3-year usage projection | TCO comparison (0-10 scale); lower cost = higher score | 20 |
| Data Protection | 25% | Data residency options; training data opt-out; encryption; compliance certifications; sub-processor transparency | 0-10 scale (gap assessment vs. requirements) | 25 |
| Contract Terms | 20% | Liability cap; indemnification; model rollback rights; termination rights; SLAs; output ownership | 0-10 scale (negotiation feasibility + gap analysis) | 20 |
| Vendor Stability | 10% | Funding/financial stability; product roadmap alignment; customer retention; industry maturity | 0-10 scale (qualitative assessment + financials) | 10 |
| TOTAL (example: OpenAI) | 100% | Sum of weighted category scores | — | 94 / 100 |
Example scoring for OpenAI (general-purpose text generation use case):
- Technical Capability: 24/25 (GPT-4o demonstrates strong performance on benchmarks; context window adequate; fine-tuning available; but latency commitment weak).
- Commercial Terms: 18/20 (Per-token pricing is predictable; volume discounts available; but price stability commitment is absent).
- Data Protection: 22/25 (Enterprise tier offers data residency in multiple regions; training data opt-out available; but compliance certifications lag competitors).
- Contract Terms: 20/20 (Recently improved; liability acceptable; output ownership favorable).
- Vendor Stability: 10/10 (Strong funding, clear roadmap, industry leader).
- Total: 94/100.
Use this framework to score all vendors on the same scale. Vendors scoring 80+ are viable. 70-80 requires negotiation or use-case-specific workarounds. Below 70, consider alternatives.
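The matrix arithmetic can be sketched in a few lines. The category scores below are the illustrative OpenAI numbers from the worked example (converted back to the 0-10 scale); the weights mirror the table.

```python
# Weighted-scoring sketch for the vendor matrix. Category weights mirror
# the table above (25/20/25/20/10); example scores are illustrative.

WEIGHTS = {
    "technical": 25, "commercial": 20, "data_protection": 25,
    "contract": 20, "stability": 10,
}

def total_score(scores_0_to_10):
    """Convert 0-10 category scores into a 0-100 weighted total."""
    return sum(scores_0_to_10[cat] * w / 10 for cat, w in WEIGHTS.items())

def verdict(total):
    """Apply the 80/70 thresholds from the framework."""
    if total >= 80:
        return "viable"
    if total >= 70:
        return "negotiate or scope workarounds"
    return "consider alternatives"

# Hypothetical 0-10 scores matching the OpenAI walkthrough (24/25 -> 9.6, etc.).
openai = {"technical": 9.6, "commercial": 9.0, "data_protection": 8.8,
          "contract": 10.0, "stability": 10.0}
t = total_score(openai)
print(round(t, 1), verdict(t))
```

Scoring every vendor through the same function keeps the comparison honest: the weights are fixed up front, so no one can quietly re-weight the matrix after seeing the results.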
RFP Process for Enterprise AI Vendors
A well-structured RFP is the foundation of competitive vendor selection. Here's what to include:
Required Disclosures Section
Before vendors respond, tell them you need transparency on these points. Most won't volunteer this information.
- Training data sources and filtering: What data was your model trained on? What filtering or removal did you do? Can you provide documentation that your training data does not include material from our competitors?
- Model update frequency: How often do you release new versions? How much notice do you provide? What is your support timeline for deprecated models?
- Sub-processors: Who has access to customer data? List all vendors, data centers, and third parties who can access customer inputs/outputs.
- Data residency options: In which regions can you process data? Can you commit to US-only, EU-only, or Asia-Pacific-only processing?
- Training data usage: Do you use customer inputs to train or improve models? Can you disable this, and if so, what is the cost?
- IP ownership: Who owns model outputs when generated from customer inputs? If we fine-tune, who owns the tuned model?
- Model transparency: Can you explain what your model learned and what influenced its behavior on specific requests?
Technical Evaluation Section
- Proof of Concept requirements: We will run a PoC on your platform for 4 weeks. What support do you provide? Is there a cost? What data can we test with?
- Benchmark methodology: How should we measure accuracy on our use case? Can we use your benchmark suite, or do we design our own?
- SLA commitments: What uptime SLA can you commit to? What is your latency SLA (p50, p95, p99)? What are the penalties for missing SLAs?
- Model versioning: Can we stay on older model versions while you release new ones? For how long do you support previous versions?
Commercial Evaluation Section
- Pricing transparency: Provide a pricing schedule that shows per-token or per-request costs for all model tiers you offer.
- Volume discount structure: At what usage levels do volume discounts begin? What discount does each tier offer?
- Committed-use discounts: If we commit to $X annual spend, what discount do we receive?
- Price stability commitment: Can you commit that pricing will not increase more than 5% annually for 24 months?
- Cost cap: Can you cap our monthly costs? What happens if we exceed the cap?
Reference Requirements
- Provide 3-5 references from enterprise customers in our industry who have similar use cases and scale.
- For each reference, have them discuss: their implementation experience, technical performance, support quality, and lessons learned.
- Ask references specifically about contract negotiation: how flexible was the vendor on terms?
AI Proof of Concept: Running One That Actually Works
Most enterprise AI vendor PoCs are structured in ways that favor the vendor. The vendor gets to pick the use case, gets clean data, gets a short timeline, gets supervised success metrics. Then you sign a contract based on this artificial scenario, and real implementation is much harder.
Here's how to structure a meaningful PoC:
Use Real Data, Not Clean Data
Ask the vendor to test on your actual data—messy, incomplete, inconsistent data as it exists in your production systems. Not a curated sample. If they'll only demo with clean data, that's a red flag.
Test Edge Cases and Failure Modes
Don't test the happy path. Test the 5% of cases that are hardest. Test requests that are adversarial or designed to break the model. Test requests written in non-standard language, slang, or languages other than English if you support them. If the model fails on these cases, you'll discover it in the PoC, not in production.
Measure on Your Specific Use Cases
Use metrics that matter to your business, not metrics that matter to the vendor. If you're building a customer support chatbot, don't measure accuracy on academic benchmarks. Measure: did the customer's problem get solved? Did the response reduce follow-up volume? Was the response factually correct? Use human raters to score on real business outcomes.
Include Adversarial Prompting Tests
Test the model's robustness to attacks: prompt injection, jailbreaks, requests that try to expose confidential information, requests that try to make the model produce harmful content. The vendor will claim these aren't relevant to enterprise use cases. They're wrong. If you're building a customer-facing service, adversarial users will find your model's weaknesses.
Test Model Update Impact
If the vendor releases a new model version during your PoC, test it. Compare accuracy, latency, and behavior on your use cases to the previous version. Some model updates improve performance. Some degrade it on specific tasks. You need to understand this before committing.
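A version comparison is easiest when it is mechanical: run both model versions over the same eval set and flag any task where the new version regresses beyond a tolerance. A sketch, with hypothetical task names and accuracy numbers:

```python
# Version-regression check: compare per-task accuracy of a new model
# version against the current one on the same eval set. Task names and
# scores below are hypothetical examples.

def regressions(old_scores, new_scores, tolerance=0.02):
    """Return tasks where the new version is worse by more than `tolerance`."""
    return {task: (old_scores[task], new_scores[task])
            for task in old_scores
            if new_scores.get(task, 0.0) < old_scores[task] - tolerance}

old = {"doc_classification": 0.91, "entity_extraction": 0.88, "summarization": 0.84}
new = {"doc_classification": 0.94, "entity_extraction": 0.82, "summarization": 0.85}

flagged = regressions(old, new)
print(flagged)  # entity_extraction dropped from 0.88 to 0.82
```

Note the aggregate picture here is "the new model is better on average" while one task regressed meaningfully; a per-task check catches exactly the degradation a headline accuracy number hides.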
Include Total Cost of Integration
Don't just measure model performance. Measure total cost: API costs + engineering time to integrate + infrastructure + support. Some vendors have cheap per-token costs but require complex integrations. Some have higher per-token costs but are simple to integrate. True cost comparison includes all of this.
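A simple 3-year TCO comparison makes the point. All figures below are hypothetical placeholders illustrating the "cheap tokens, expensive integration" trap:

```python
# Hedged 3-year TCO sketch: API spend plus one-time integration,
# infrastructure, and support. All dollar figures are hypothetical.

def three_year_tco(monthly_api_cost, integration_one_time,
                   monthly_infra, monthly_support):
    """Total 3-year cost: one-time integration plus 36 months of recurring spend."""
    recurring = (monthly_api_cost + monthly_infra + monthly_support) * 36
    return integration_one_time + recurring

# Vendor A: cheap per-token pricing, complex integration.
vendor_a = three_year_tco(monthly_api_cost=12_000, integration_one_time=400_000,
                          monthly_infra=3_000, monthly_support=2_000)
# Vendor B: pricier tokens, simple integration.
vendor_b = three_year_tco(monthly_api_cost=18_000, integration_one_time=80_000,
                          monthly_infra=1_000, monthly_support=1_000)
print(vendor_a, vendor_b)
```

In this illustrative case the vendor with 50% higher API costs comes out cheaper over three years, which is exactly why the comparison has to include integration and operations, not just the rate card.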
Run for 4-6 Weeks Minimum
Short PoCs (1-2 weeks) favor vendors with slick demos. Long PoCs (4-6 weeks) reveal real operational challenges: how often does the API go down? How responsive is support? Does the model degrade under load? How hard is it to customize?
Vendor-by-Vendor Selection Guide
Here's a vendor-specific guide for the major enterprise AI platforms. Use this as a starting point for your evaluation, not as a replacement for hands-on testing.
| Vendor | Best For | Avoid When | Pricing Model | Key Contract Risk |
|---|---|---|---|---|
| OpenAI (GPT-4o, ChatGPT API) | General-purpose text generation, customer-facing conversational AI, content creation, code generation. Strongest at multi-step reasoning and creative tasks. Industry-leading capability on most benchmarks. | Mission-critical applications where model rollback is non-negotiable, highly regulated industries (banking, healthcare) without explicit SOC 2 commitment, specialized domains (scientific compute, legal analysis) where fine-tuning is essential. | Per-token (Input: $0.005-$0.015 per 1K tokens; Output: $0.015-$0.060 per 1K tokens depending on model). Volume discounts available. Committed-use discounts available. | Liability cap is low (6 months of fees) and hard to negotiate. Training data usage policy has evolved; verify current terms. No multi-year pricing guarantees. Model rollback support is limited (typically 30-60 day deprecation periods). |
| Google Gemini (via Google Cloud AI / Vertex AI) | Organizations already heavy on Google Cloud infrastructure. Multimodal tasks (text, image, video, audio in single model). Integration with Google Workspace. Organizations that need local deployment options. | Standalone AI needs without broader Google Cloud commitment, use cases where OpenAI performance is significantly better, organizations avoiding vendor lock-in to cloud provider. | Per-token pricing (varies by model). Can be combined with Google Cloud commitment spend. Lower per-token cost than OpenAI but depends on cloud volume discounts. | Pricing bundled with Google Cloud spend; hard to isolate AI costs. Data residency tied to Cloud region selection (good transparency but less flexible than dedicated options). Contract terms flow through Google Cloud agreement, which may be rigid. |
| Anthropic Claude (3.5 Sonnet, Opus, Haiku) | Safety-critical applications, long-context requirements (200K tokens), specialized tasks requiring careful reasoning, organizations prioritizing interpretability and reduced hallucination, document processing and analysis at scale. | Latency-sensitive use cases (Claude is slower than GPT-4 in many scenarios), image generation, real-time customer service, organizations that can't wait for new model releases (Anthropic releases less frequently than OpenAI). | Per-token pricing ($0.003 input / $0.015 output for Haiku; $0.008 input / $0.024 output for Sonnet; $0.015 input / $0.075 output for Opus). Volume discounts. Batch API for cost reduction on non-urgent requests. | Newer vendor, smaller customer base (higher execution risk). Liability caps similar to OpenAI (6 months fees). Limited enterprise deployment options compared to Azure or AWS. Model availability in regions may lag competition. |
| Microsoft Azure OpenAI | Organizations with existing Microsoft enterprise licensing (Microsoft 365, Azure, Windows), need for dedicated capacity and isolation, organizations requiring US Government Cloud compliance, want OpenAI capability but need Microsoft integration and governance. | Organizations not already on Azure (switching cost is high), need for non-proprietary AI (Azure locks you into OpenAI), startups or smaller organizations (Azure has enterprise-focused pricing and governance overhead). | Per-token pricing similar to OpenAI public pricing, but often bundled with Azure spend. Committed-use discounts. Can be negotiated as part of larger Microsoft enterprise agreements. | Full vendor lock-in to Microsoft ecosystem. OpenAI capability and roadmap controlled by Microsoft. Contract negotiation flows through Microsoft (usually slower and more rigid). Azure service agreements may not match OpenAI public API terms. |
| AWS Bedrock | AWS-committed organizations, need to switch models without code changes (standardized API), organizations requiring AWS control plane governance, multi-model evaluation where vendor independence matters. | Organizations not on AWS (adoption curve is steep), need for absolute latest models (Bedrock has some feature and model lag vs. native vendor APIs), cost sensitivity (Bedrock pricing is typically higher than native APIs). | Per-token pricing for hosted models (varies by model; generally higher than native APIs). On-demand or throughput-based capacity. Can be combined with AWS commitment spend. | Vendor lock-in to AWS. Pricing higher than using native APIs (AWS margin). Throughput-based capacity means you commit to spend. Data residency limited to AWS region options (not always best match for specific regulatory needs). |
Recommendation Matrices by Use Case
For internal analytics and reporting: Anthropic Claude (long context, accurate analysis) or Google Gemini (if on Google Cloud). Azure OpenAI second choice if Microsoft-locked.
For customer-facing chatbots and conversational AI: OpenAI (capability + ecosystem) or Google Gemini (if Google Cloud user). Avoid Claude (latency). AWS Bedrock acceptable if already AWS-committed.
For content generation (copywriting, marketing): OpenAI GPT-4o (creative capability). Google Gemini multimodal close second.
For code generation and technical tasks: OpenAI (industry standard) or Google Gemini. Anthropic Claude acceptable but slower.
For safety-critical or highly regulated applications: Anthropic Claude (interpretability, safety focus) or Azure OpenAI (Microsoft governance model). Avoid pure OpenAI for regulated use cases without additional safeguards.
For organizations committed to avoiding vendor lock-in: Use AWS Bedrock with multiple models (can swap models), or use open-source models via AWS SageMaker or similar. Avoid Azure OpenAI and Google Gemini (both lock you in).
AI Procurement Governance: Organizational Structure
Before you sign your first AI vendor contract, you need organizational governance in place. Without it, you'll wake up in 18 months with shadow AI: multiple vendors, inconsistent data handling, conflicting contracts, and no visibility into what's happening.
Create an AI Procurement Committee
This committee should have representatives from:
- Technology: CTO or VP of Engineering. Evaluates technical fit and architecture implications.
- Security/Compliance: CISO or Chief Privacy Officer. Evaluates data protection, regulatory compliance, and security implications.
- Procurement/Legal: Chief Procurement Officer or General Counsel. Negotiates contracts and manages vendor terms.
- Finance: CFO or VP of Finance. Evaluates TCO, pricing, and budget impact.
- Lines of Business: VP of the business unit that will use the AI vendor. Defines requirements and success metrics.
This committee meets monthly (minimum) to review new vendor requests, approve new vendors, and govern existing vendor relationships. Decisions require consensus from all five groups.
Establish AI Procurement Policies
Before anyone can sign an AI vendor contract, these policies must exist:
Data Use Policy: What data can be sent to which vendors? Is financial data allowed? Customer PII? Competitive information? Design your data classification matrix (public, internal, confidential, restricted) and specify which vendors can handle which classification levels. Default policy: restrict all data to lowest-risk vendors unless approval is granted.
AI Vendor Approval Process: No vendor gets used without going through the AI Procurement Committee. Vendors must complete a questionnaire covering technical, commercial, security, compliance, and contract requirements. Committee scores on the vendor matrix. Vendors scoring below 70 are rejected. Vendors scoring 70-80 require negotiation. Only vendors scoring 80+ are approved. This prevents random team members from signing up for the cheapest AI service without proper evaluation.
Vendor Registry: Maintain a centralized registry of all AI vendors, what they're used for, what data they access, and who the primary contract owner is. Update quarterly. Without this, you have no visibility into shadow AI.
AI Use Policy: Define acceptable use cases. Can teams build customer-facing AI features? Can teams train proprietary models on customer data? Can teams use AI for hiring decisions? Create a framework that approves some uses (general analytics, internal documentation) and requires approval for others (customer-facing, high-stakes, regulatory).
Contract Baseline: Define a baseline contract template that all AI vendors must meet or exceed. Include minimum liability (12 months fees), minimum data protection (encryption, SOC 2), and minimum termination rights (90 days notice, data deletion within 30 days). This prevents legal teams from negotiating different terms with each vendor.
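The baseline lends itself to a mechanical gap check during review. A minimal sketch, assuming hypothetical field names for the proposed terms; your legal team's actual checklist will be longer:

```python
# Sketch of checking a vendor's proposed terms against the contract
# baseline described above. Field names and structure are illustrative.

BASELINE = {
    "liability_cap_months": 12,     # minimum liability cap, in months of fees
    "termination_notice_days": 90,  # maximum notice period for termination
    "data_deletion_days": 30,       # maximum days to delete data after exit
}

def baseline_gaps(proposed):
    """Return the baseline requirements the proposed terms fail to meet."""
    gaps = []
    if proposed["liability_cap_months"] < BASELINE["liability_cap_months"]:
        gaps.append("liability_cap_months")
    if proposed["termination_notice_days"] > BASELINE["termination_notice_days"]:
        gaps.append("termination_notice_days")
    if proposed["data_deletion_days"] > BASELINE["data_deletion_days"]:
        gaps.append("data_deletion_days")
    return gaps

# Hypothetical vendor paper: 6-month cap, 90-day notice, 60-day deletion.
vendor_terms = {"liability_cap_months": 6,
                "termination_notice_days": 90,
                "data_deletion_days": 60}
print(baseline_gaps(vendor_terms))  # two gaps to negotiate before signing
```

Every flagged gap becomes a named negotiation item rather than a clause that slips through because a different lawyer reviewed a different vendor.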
Shadow AI Risk
No matter how good your governance, teams will find ways to use AI vendors without approval. Product teams will spin up an OpenAI account and build a feature. Security teams will use Claude for analysis. Engineering will use GitHub Copilot. Marketing will use a generative AI tool.
Address this proactively:
- Publish a list of "approved vendors" and make the approval process fast for vendors that are already approved.
- Use network monitoring to detect AI API calls going to unapproved vendors. When detected, contact the team and move the use to an approved vendor.
- Include shadow AI in your quarterly vendor registry review. Ask teams: "What AI vendors are you using that aren't in the registry?"
- Make it easy, not hard. If your approved vendors are expensive or slow to access, teams will find workarounds. Keep approval overhead low for low-risk use cases.
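The network-monitoring step above can be sketched as a scan of egress logs for known AI API domains that are missing from the approved registry. The domain list, registry contents, and log format below are all hypothetical examples; in practice you would feed in parsed proxy or firewall records.

```python
# Shadow-AI detection sketch from egress logs. The domain list, approved
# registry, and log format are hypothetical illustrations.

APPROVED_DOMAINS = {"api.openai.com"}          # from your vendor registry
KNOWN_AI_DOMAINS = {
    "api.openai.com", "api.anthropic.com",
    "generativelanguage.googleapis.com", "api.mistral.ai",
}

def unapproved_ai_calls(egress_log):
    """Return (team, domain) pairs hitting known AI APIs not in the registry."""
    flagged = set()
    for entry in egress_log:  # e.g. records parsed from proxy/firewall logs
        domain, team = entry["domain"], entry["team"]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_DOMAINS:
            flagged.add((team, domain))
    return sorted(flagged)

log = [
    {"team": "product", "domain": "api.openai.com"},       # approved, ignored
    {"team": "marketing", "domain": "api.mistral.ai"},     # shadow AI
    {"team": "security", "domain": "api.anthropic.com"},   # shadow AI
]
print(unapproved_ai_calls(log))
```

Each flagged pair is a conversation, not a punishment: the goal is to move the team onto an approved vendor, per the guidance above.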
Red Flags That Should Kill an AI Deal
Here are specific contractual and commercial red flags. If a vendor won't negotiate on these, walk away.
Critical Red Flags (Deal-Killers)
- No opt-out for training data usage. If the vendor reserves the right to use your inputs to train future models and won't offer an opt-out, this is unacceptable. You lose the ability to keep proprietary information proprietary. Walk away.
- Unlimited liability cap escalation. Some vendors include "most favored customer" clauses that tie your liability cap to the highest liability cap anyone negotiated. This can create unlimited exposure. Reject.
- No IP indemnification. If the vendor generates output that infringes a third party's IP and won't indemnify you, this is a problem. Most vendors now offer this. If they don't, negotiation is critical.
- Automatic and unlimited price escalation. If pricing can increase >10% annually with short notice (30 days), this creates budget unpredictability. Negotiate a cap or longer notice period.
- No data deletion commitment. If the vendor won't commit to deleting your data within a specific timeframe after termination (30 days is standard), this is a compliance risk. Reject.
- No model rollback rights. If you can't stay on an older model version while you transition to a new one, the vendor is forcing you to bear all deprecation risk. Negotiate minimum 90-day rollback support.
High-Priority Red Flags (Require Negotiation)
- Liability cap below 12 months of fees. Standard is 6 months. Push for 12 months, especially for mission-critical use.
- No performance SLAs beyond uptime. Vendors often promise 99.9% uptime but don't commit to latency or accuracy. For AI, these matter more. Negotiate SLAs for p95 latency, accuracy on your benchmark, and response rate.
- Sub-processor visibility gaps. If the vendor won't disclose who has access to your data or how many sub-processors are involved, this is a risk. Negotiate for a published sub-processor list updated at least quarterly.
- Lack of data residency options. If the vendor can only process data in one region and that's not your region, this is a compliance risk. Negotiate for your region or closest alternative.
- No termination for convenience. If you can only terminate for cause, you're locked in. Negotiate for termination for convenience with 90 days notice.
Medium-Priority Red Flags (Negotiate if Possible)
- Lack of volume or committed-use discounts. These are table-stakes for enterprise AI. If the vendor doesn't offer them, they may be immature or unable to handle enterprise scale.
- Pricing opacity or inability to model costs. If you can't predict what you'll be charged in advance, this creates budget risk. Require transparent pricing with cost modeling tools.
- Limited compliance certifications. If you need SOC 2, ISO 27001, or HIPAA, the vendor must have it or be on a roadmap to achieve it. Don't accept "we can probably get certified later."
- Model versioning chaos. If the vendor has released 10 model versions in the past year and deprecated old models quickly, this creates operational burden. Negotiate for longer deprecation cycles (12 months minimum).