Vague scope is how AI projects die. Not usually in a dramatic blowup — more often in a slow bleed of requirement changes, feature additions, and "while you're there" requests that consume margin and goodwill simultaneously.
Good scoping isn't about protecting yourself from the client. It's about protecting the project — and the client — from the natural chaos of delivery. Here's the framework that works.
Phase 1: Discovery before commitment
Never scope an AI engagement without a paid discovery phase first. This is non-negotiable for any engagement of meaningful size.
A discovery phase — typically 5–15 days depending on scope — lets you assess data readiness, stakeholder alignment, technical infrastructure, and business case validity before either party commits to delivery. It protects the client from starting a project that isn't ready. It protects you from committing to deliverables you don't yet understand.
Price the discovery phase to cover your time at full rate. It is not a loss leader. Clients who won't pay for discovery are signalling something important about how they'll behave during delivery.
Discovery questions that matter
In a properly structured discovery, you're answering these before you scope anything else:
- What specific operational problem are we solving, and what does success look like in measurable terms?
- What data exists, where does it live, who owns it, and what's its quality?
- Who is the executive sponsor and what is their level of commitment to this project?
- What has been tried before, and what happened?
- What are the regulatory, compliance, and governance constraints?
- Who will use the output and what does their current workflow look like?
- What is the actual budget and how was it decided?
Phase 2: Writing the SOW that protects you
A statement of work for an AI engagement needs to be more specific than most consultants are comfortable writing. Vagueness that feels collaborative in the SOW becomes a dispute in month three.
Define scope in terms of outputs, not activities
Bad scope: "Develop an AI model to improve customer service response times."
Good scope: "Deliver a classification model that routes incoming support tickets to the correct team, trained on the client's historical ticket data (minimum 50,000 labelled records), with a target accuracy of ≥85% on the held-out test set, deployed to [specific environment], with documentation sufficient for internal maintenance."
The second version is unambiguous. Everyone knows what they're getting. The client can't reasonably ask for a chatbot, a sentiment analysis tool, and a predictive churn model under the same statement of work.
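To make the contrast concrete, here's a minimal sketch of how that accuracy criterion could be verified at acceptance time. The file name, column names, and model choice are illustrative assumptions, not part of the SOW language; the point is that the good scope reduces to a single measurable check.

```python
# Minimal sketch of verifying the SOW's accuracy criterion.
# Assumes a labelled CSV export of historical tickets (hypothetical
# columns "text" and "team") and scikit-learn; the real pipeline
# for the engagement would differ.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

ACCEPTANCE_THRESHOLD = 0.85  # the >=85% figure written into the SOW

df = pd.read_csv("tickets.csv")  # hypothetical labelled-ticket export
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["team"], test_size=0.2, random_state=42, stratify=df["team"]
)

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.3f}")
assert accuracy >= ACCEPTANCE_THRESHOLD, "Acceptance criterion not met"
```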
The in-scope / out-of-scope table
Every SOW should have an explicit table of what's in scope and what's out. List both. The out-of-scope list is not adversarial — it's a shared reference that prevents misunderstandings from compounding. Common items to explicitly exclude:
- Data cleaning and preparation beyond [X hours / Y records]
- Ongoing model maintenance and retraining after handover
- Integration with systems not listed in the SOW
- Change management and end-user training
- Regulatory compliance sign-off
Client dependencies and deadlines
AI projects routinely slip because of client-side delays: data that arrives late, stakeholders who can't attend workshops, environments that aren't provisioned. Your SOW should specify what you need from the client, by when, and what happens if those dependencies aren't met.
A simple clause: "If client deliverables listed above are delayed by more than [5] business days, the project timeline extends by an equivalent period and a revised delivery date will be agreed in writing."
Phase 3: Change control that people actually use
Every SOW needs a change control process. Most consultants include a vague paragraph about it. Almost none implement it correctly.
Change control works when it's low-friction enough to use, but formal enough to create a record. A simple process:
- Client or consultant identifies a change to the agreed scope
- Consultant produces a brief change request note: what's changing, why, impact on timeline and cost
- Both parties sign or email-confirm the change before any work begins
- Change requests are logged in a shared document (a minimal record sketch follows this list)
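To make that last step concrete: a change request log only needs a handful of fixed fields. The structure below is a sketch with assumed field names, not a standard; a shared spreadsheet with the same columns is an equally valid implementation.

```python
# Illustrative structure for a change request log entry; field names
# are assumptions. What matters is that every entry records the change,
# its impact, and written approval before work begins.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    cr_id: str                  # e.g. "CR-001", sequential within the engagement
    raised_by: str              # client or consultant
    description: str            # what is changing and why
    timeline_impact_days: int   # extension to the delivery date
    cost_impact: float          # additional fee, in contract currency
    approved: bool = False      # flipped only after written confirmation
    date_raised: date = field(default_factory=date.today)

log: list[ChangeRequest] = []
log.append(ChangeRequest(
    cr_id="CR-001",
    raised_by="client",
    description="Add a second data source (CRM exports) to model training",
    timeline_impact_days=10,
    cost_impact=8000.0,
))
```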
The cultural problem with change control is that it can feel confrontational. The reframe: it's a service to the client. It gives them visibility and control over decisions that affect their budget. Frame it that way at kick-off and you'll use it without friction.
Phase 4: Acceptance criteria and sign-off
Define, in the SOW, what constitutes successful delivery. Not "to the client's satisfaction" — that's a recipe for an infinite loop. Specific, testable criteria that both parties agree to before work starts.
For an AI model: accuracy thresholds, latency requirements, data coverage, documentation standards. For a strategy deliverable: structure, depth, stakeholder review process, number of revision rounds included.
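Where the delivery includes a deployed model, those criteria can even be encoded as automated acceptance tests that both parties run before sign-off. The sketch below assumes pytest conventions; load_test_set and predict_route are placeholder stubs standing in for the delivered interface, and the thresholds mirror whatever the SOW specifies.

```python
# Hypothetical acceptance tests for a model deliverable. The thresholds
# come from the SOW; the two stubs below stand in for the real held-out
# set and the delivered classifier's inference call.
import time

ACCURACY_THRESHOLD = 0.85  # from the SOW
LATENCY_BUDGET_MS = 200    # from the SOW

def load_test_set():
    # Stub: dummy data standing in for the held-out set agreed in the SOW.
    tickets = ["refund request", "password reset", "invoice query"] * 40
    teams = ["billing", "it_support", "billing"] * 40
    return tickets, teams

def predict_route(ticket: str) -> str:
    # Stub: stand-in for the delivered model's inference call.
    return "it_support" if "password" in ticket else "billing"

def test_accuracy():
    X_test, y_test = load_test_set()
    correct = sum(predict_route(x) == y for x, y in zip(X_test, y_test))
    assert correct / len(y_test) >= ACCURACY_THRESHOLD

def test_latency():
    X_test, _ = load_test_set()
    start = time.perf_counter()
    for x in X_test:
        predict_route(x)
    per_request_ms = (time.perf_counter() - start) / len(X_test) * 1000
    assert per_request_ms <= LATENCY_BUDGET_MS
```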
Include a sign-off timeline. "Client will review and accept or provide written feedback within [10] business days of delivery. Silence after [10] business days constitutes acceptance." This prevents deliverables from sitting in limbo and delaying your invoice.
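The business-day arithmetic in that clause is easy to get wrong once calendars are involved. A small helper, sketched below under the assumption that only weekends are excluded (public holidays would also need excluding in a real engagement), removes any ambiguity about when silence starts to count as acceptance.

```python
# Sketch: compute the date after which silence constitutes acceptance.
# Assumes only weekends are skipped; real engagements would also
# exclude public holidays.
from datetime import date, timedelta

REVIEW_WINDOW_BUSINESS_DAYS = 10  # the [10] in the clause above

def acceptance_deadline(delivered: date,
                        window: int = REVIEW_WINDOW_BUSINESS_DAYS) -> date:
    d, remaining = delivered, window
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

print(acceptance_deadline(date(2025, 3, 3)))  # delivered Monday -> 2025-03-17
```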
Ready-made scoping documents
The SOW Toolkit and Agentic AI Scoping Pack in the Wrecked Shop are built from real engagements, not templates invented for the purpose of selling a download.
Understanding why projects go wrong
Even with good scoping, delivery has failure modes. Why AI projects fail →