
AI procurement: the questions to ask before signing a contract

The crucial mistake is focusing on what the tool can do today, rather than on what you are entitled to know, control and audit tomorrow

by Mr Moonlight

The most common failure in AI procurement is treating a system that can change its behaviour as if it were static software. A tool performs well in a pilot, procurement signs on familiar terms, and only later does the organisation realise it cannot clearly answer basic questions. Where did the data go? Who can see it? What happens when the model changes? Who is responsible when something goes wrong? By then, the leverage has gone.

Artificial intelligence procurement is not a purchasing exercise in the narrow sense. It is a governance decision that fixes risk, responsibility and control long after the contract is signed. The crucial mistake is focusing on what the tool can do today, rather than on what you are entitled to know, control and audit tomorrow.

Start with what you are actually buying

The word “AI” hides important differences. A software-as-a-service product built on a third-party model creates a long dependency chain and limits auditability. A dedicated enterprise instance offers more control but shifts operational responsibility onto the buyer. On-premises or private deployments increase data control while raising security and lifecycle burdens. Even AI features embedded quietly inside other products can introduce compliance risk if data flows and model updates are opaque. Procurement questions only make sense once the deployment model is clear.

Data handling is where most risk lives

Every procurement should begin with a simple demand: a clear map of data flows. That means knowing what data is sent to the vendor, what is stored and for how long, where it is processed, who can access it, and whether it is used for training, service improvement, analytics or human review. Prompts and outputs should be treated as sensitive by default, because they often encode business context even when the input appears harmless.

Contracts should constrain these flows explicitly. Training on customer data should be off by default. Logging should be minimised, configurable and time-limited. Subprocessors should be disclosed and controlled. Deletion should be verifiable. If a supplier cannot give precise, testable answers on these points, the safest decision is not to proceed.
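What such a map looks like in practice can be sketched concretely. The inventory below is illustrative, not a standard schema; the field names are assumptions, but each corresponds to one of the questions above:

```python
# Illustrative data-flow inventory for one AI tool. Field names are
# hypothetical; the point is that every flow has a precise answer on file.
from dataclasses import dataclass

@dataclass
class DataFlow:
    data_sent: str            # what leaves the organisation
    stored_by_vendor: bool    # does the vendor persist it?
    retention_days: int       # how long, if stored
    processing_region: str    # where it is processed
    accessible_to: list       # vendor roles and subprocessors with access
    used_for_training: bool   # shared-model training on customer data?
    human_review: bool        # can vendor staff read prompts or outputs?

flows = [
    DataFlow(
        data_sent="user prompts and uploaded documents",
        stored_by_vendor=True,
        retention_days=30,
        processing_region="EU",
        accessible_to=["vendor support (on ticket)", "subprocessor: cloud host"],
        used_for_training=False,   # off by default, constrained by contract
        human_review=False,
    ),
]

# Any field the vendor cannot fill in precisely is itself a finding.
for f in flows:
    assert not f.used_for_training, "training on customer data must be opt-in"
```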

Security is not just a checklist exercise

Standard controls still matter: strong identity and access management, encryption, segregation between customers, secure development practices, vulnerability management and clear incident notification duties. But AI adds new attack surfaces. Prompt injection, data exfiltration through connectors, misuse of automation features and leakage across retrieval systems all need explicit controls. Procurement should assume the tool will be targeted and ask how the vendor prevents, detects and responds to abuse.
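Buyers rarely write these controls themselves, but knowing what one looks like sharpens the questions. A minimal sketch of one such control, an egress allowlist on connector calls, assuming a hypothetical integration point:

```python
# Minimal egress allowlist for connector or tool calls. The integration
# point is hypothetical; real products expose this control differently.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.internal.example.com", "files.example.com"}

def check_egress(url: str) -> None:
    """Block outbound calls to hosts that are not explicitly allowlisted.

    This defeats the common exfiltration pattern in which injected
    instructions tell the model to fetch attacker.example/?q=<secret>.
    """
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"egress to {host!r} blocked by policy")

check_egress("https://api.internal.example.com/v1/search")  # passes
# check_egress("https://attacker.example/?q=secret")        # raises PermissionError
```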

Auditability determines whether governance is possible

If you cannot reconstruct what happened, you cannot govern outcomes. Organisations should expect access to logs of data access and administrative actions, records of model and configuration changes, and evidence of testing and monitoring. For higher-risk uses, such as systems that influence decisions or operate in regulated contexts, this extends to technical documentation, testing results relevant to the use case, and a clear allocation of responsibility across the supply chain.
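“Meaningful audit information” can be made concrete. Below is a sketch of the minimum record a buyer should be able to export; the schema is illustrative, not any vendor’s actual API:

```python
# Illustrative audit record -- not a vendor API, just the minimum detail
# needed to reconstruct an event after the fact.
import json
from datetime import datetime, timezone

record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "actor": "vendor-release-bot",          # user, admin or vendor process
    "action": "model_version_changed",      # or data access, config change...
    "resource": "tenant/assistant-prod",
    "detail": {"from": "model-v3.1", "to": "model-v3.2"},
    "evidence": "vendor_change_log",        # where the record came from
}

# Audit information is only meaningful if it is exportable and machine-readable.
print(json.dumps(record, indent=2))
```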

Service levels need to reflect how AI fails

AI systems do not fail like conventional software. Output quality can degrade without downtime. Behaviour can change after an update. Procurement should look beyond uptime to include support responsiveness, advance notice of material changes, transparency around update cadence, and clear handling of degradation and fallbacks. Most vendors will not guarantee output quality in a legally meaningful way, so contracts and governance need to assume variability and design human oversight accordingly.
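In practice, oversight of silent degradation usually means running a fixed evaluation set against the tool on a schedule and alerting on score drops. A sketch, where ask_vendor_model and score stand in for whatever client and task-specific metric the organisation actually uses:

```python
# Golden-set regression check. ask_vendor_model and score are placeholders
# for the organisation's actual client and task-specific quality metric.
GOLDEN_SET = [
    {"prompt": "Summarise the termination clause in the sample contract.",
     "expected": "30 days' written notice by either party"},
    # ...a few dozen cases that represent the real workload
]
BASELINE = 0.92        # mean score measured during the pilot
MAX_DROP = 0.05        # tolerated absolute drop before escalation

def regression_check(ask_vendor_model, score) -> float:
    results = [score(ask_vendor_model(c["prompt"]), c["expected"])
               for c in GOLDEN_SET]
    current = sum(results) / len(results)
    if BASELINE - current > MAX_DROP:
        # Degradation without downtime: an uptime SLA never catches this.
        raise RuntimeError(f"quality fell to {current:.2f}; "
                           "switch to fallback and notify the vendor")
    return current
```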

Pricing and lock-in are often underestimated

AI pricing frequently looks simple until usage grows. Token-based models can become unpredictable, per-seat pricing can collapse under broad adoption, and essential security or compliance features may sit behind additional fees. Just as important is exit. Proprietary workflows, closed evaluation tools and dependence on vendor-hosted components can make switching costly or slow. Exit planning belongs in procurement, not as a future technical clean-up exercise.
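A back-of-envelope projection shows how quickly token pricing compounds. The numbers below are invented for illustration, not any vendor’s rates:

```python
# Back-of-envelope token cost projection. Prices and volumes are invented
# for illustration; substitute the vendor's actual rates and your own usage.
price_per_1k_tokens = 0.01         # USD, blended input and output
tokens_per_request = 3_000         # prompt + retrieved context + answer
requests_per_user_per_day = 40
users = 500
working_days = 220

annual = (price_per_1k_tokens / 1_000) * tokens_per_request \
         * requests_per_user_per_day * users * working_days
print(f"~${annual:,.0f} per year")  # ~$132,000, from a $0.01 unit price
```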

Intellectual property and liability need plain answers

Buyers should be clear about who owns outputs, what rights they have to use them commercially, whether inputs or outputs are used for training, and what protections exist if an output infringes third-party rights. These issues are often buried in terms, but they shape real commercial risk.

Plan for incidents that are not just breaches

AI incidents include security failures, but also harmful outputs, silent model regressions and sudden behavioural shifts. Contracts should set expectations for notification, investigation, evidence preservation, remediation and post-incident reporting. If the vendor cannot support you when something goes wrong, the tool is not enterprise-ready.

What minimum acceptable answers look like

A buyer should be able to reach a clear baseline before signing. The vendor should commit that customer data is not used to train shared models without explicit consent. Logging should be limited and controllable. Security controls should cover both traditional risks and AI-specific threats. Meaningful audit information should be available. Material changes should be notified in advance. Pricing should be transparent. Data export and deletion should be practical and verifiable. Output rights should be explicit. Incident response should be defined and tested.

If a supplier cannot meet these minimums, no amount of functionality compensates for the risk.

How to compare vendors without being distracted by polish

Procurement teams benefit from scoring answers against evidence and commitments rather than rhetoric. Weak answers cluster in the same places: vague data use language, assurances without audit rights, and exit terms that assume you will never leave. Vendors that score poorly on data handling, security or incident response should not progress, particularly for high-risk uses.
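A simple weighted rubric with hard gates keeps the comparison anchored to evidence. The weights and gating rule below are illustrative policy choices, not a standard:

```python
# Illustrative scoring rubric. Weights and the gate are policy choices,
# not a standard; the point is that evidence is scored, not demo polish.
WEIGHTS = {
    "data_handling": 0.25,
    "security": 0.20,
    "incident_response": 0.15,
    "auditability": 0.15,
    "exit_terms": 0.15,
    "pricing_transparency": 0.10,
}
GATED = {"data_handling", "security", "incident_response"}
GATE = 3  # minimum on a 1-5 scale, backed by documents or contract commitments

def evaluate(scores: dict) -> float | None:
    """Weighted total, or None if the vendor fails any gated dimension."""
    if any(scores[d] < GATE for d in GATED):
        return None  # do not progress, regardless of functionality
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

print(evaluate({"data_handling": 4, "security": 4, "incident_response": 3,
                "auditability": 3, "exit_terms": 2, "pricing_transparency": 4}))
# -> 3.4 out of 5; weak exit terms drag the total down but do not gate
```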

Contract clauses to insist on, in principle

Without straying into legal drafting, certain themes should always appear. Data use must be defined and constrained. Subprocessors must be controlled. Security requirements must be explicit. Audit and cooperation rights must exist. Model and policy changes must be managed. Service levels must reflect operational reality. Incident response must be clear. Intellectual property and indemnities must be understood. Exit, deletion and transition must be workable.

The bottom line

AI procurement fails when organisations buy capability without control. The contract is the mechanism that decides whether data stays where you expect it to, whether behaviour changes are visible, and whether you can leave when the tool no longer fits. The cheapest moment to manage AI risk is before signature. After rollout, you are no longer negotiating from strength.
