Before You Buy AI
Reading time: ~6 minutes
AI ROI starts with finding the right problem to solve. First clarify the business outcome you need, then introduce the smallest change that reliably delivers it. Technology considerations come last.
The Right Approach
AI projects don’t necessarily stumble because the technology can’t handle the task.
Amid the discussion around a recent MIT study on failing AI projects, research such as a 2024 RAND Corporation report points to misunderstanding the problem to be solved, or focusing on technology rather than outcomes, as two of the main reasons AI projects fail.
If you can’t state the decision you’re improving and how you’ll recognize success, you’re not ready to shop for tools.
Think in this order:
outcome → workflow decisions → right‑sized change → tool.
What It Looks Like
None of the below requires deciding on a specific technology yet; it may turn out that AI isn’t what you need. Clarity comes first.
Outcome clarity
Decide on a single metric that matters (e.g., cycle time, error rate, cost per case) and agree on how it will be measured.
Decision focus
Map the decisions in the process, not every click. Make hand‑offs explicit.
Right‑sized ambition
Start with assistance (drafts, triage, summarization). Expand to partial autonomy only after trust is earned.
Controls by design
Use confidence thresholds, require human review at high‑impact moments, and keep an audit trail.
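As a sketch of what “controls by design” can look like in practice (the threshold value, field names, and routing policy below are illustrative assumptions, not a prescription):

```python
import time

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; tune per workflow

def route(task_id, label, confidence, high_impact, audit_log):
    """Route a model suggestion: auto-apply it, or queue it for human review."""
    # High-impact moments always get a human; low confidence also does.
    needs_review = high_impact or confidence < CONFIDENCE_THRESHOLD
    decision = "human_review" if needs_review else "auto_apply"
    # Audit trail: record what was suggested, with what confidence, and when.
    audit_log.append({
        "task": task_id,
        "label": label,
        "confidence": confidence,
        "decision": decision,
        "at": time.time(),
    })
    return decision

log = []
route("T-1", "refund", 0.93, False, log)  # confident, low-impact: auto_apply
route("T-2", "refund", 0.93, True, log)   # high-impact: human_review regardless
```

The point isn’t the code; it’s that the policy (when a human must look) is written down in one place and every decision leaves a record.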
Cost predictability
Care about cost per successful task, not just list prices.
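To make “cost per successful task” concrete, here is a back‑of‑the‑envelope calculation; every number in it (volume, list price, success rate, clean‑up time, hourly rate) is an illustrative assumption:

```python
def cost_per_successful_task(calls, price_per_call, success_rate,
                             review_minutes_per_failure, hourly_rate):
    """Effective cost per successful task: API spend plus human clean-up,
    divided by the tasks that actually succeeded. All inputs illustrative."""
    successes = calls * success_rate
    failures = calls - successes
    api_cost = calls * price_per_call
    review_cost = failures * (review_minutes_per_failure / 60) * hourly_rate
    return (api_cost + review_cost) / successes

# A $0.02 list price at 95% success, with 3 minutes of clean-up per failure
# at $30/hour, works out to roughly $0.10 per *successful* task.
print(round(cost_per_successful_task(1000, 0.02, 0.95, 3, 30), 4))
```

Five times the list price is not unusual once clean‑up is counted, which is why the list price alone is a poor basis for ROI math.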
After-state clarity
Define how work will flow after AI is introduced (touchpoints, hand‑offs, and exception paths) so the human‑in‑the‑loop is right‑sized, not a permanent bottleneck.
Portability
Plan your exit before you enter: your data and prompts should be able to move with you.
Design the After-State
AI changes the work as much as the outcome.
Before you pick a tool, sketch the after-state of the workflow: who touches what, when exceptions surface, and how approvals happen. The goal isn’t ‘more AI’; it’s less hidden friction.
Watch for review creep (too many humans pulled into approvals), exception debt (edge cases piling up), and swivel-chair steps (copy/paste between systems).
If the after-state adds coordination overhead, the ROI will evaporate.
Where AI Usually Fits First
The best first uses of AI today are repeatable, mid‑risk decisions where the outcome is easy to see:
Classification & routing ("what is this and where does it go?")
Summarization of long threads, documents, or tickets for faster human decisions
Drafting & enrichment (first drafts, standard replies, structured notes)
Start assistive, but define the exit condition for human review, or you’ll tax your team with a permanent approval queue.
Prove value. Then consider expanding.
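One way to make the “exit condition” for human review explicit is to graduate a task type out of mandatory review only once humans have agreed with the AI at a target rate over a recent window. The window size and target below are illustrative assumptions:

```python
from collections import deque

WINDOW = 200   # assumed: look at the last 200 human reviews
TARGET = 0.98  # assumed: require 98% human agreement to drop mandatory review

class ReviewExitCondition:
    """Track human agreement with AI suggestions for one task type."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def record_review(self, human_agreed: bool):
        self.recent.append(human_agreed)

    def review_still_required(self) -> bool:
        if len(self.recent) < WINDOW:
            return True  # not enough evidence yet
        return sum(self.recent) / WINDOW < TARGET
```

Whatever the numbers, writing the rule down turns “we’ll stop reviewing when we trust it” into something the team can actually observe and agree on.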
Three Expensive Traps to Avoid
1) Tech‑first shopping. Demos are persuasive, but without a clear win condition you end up solving the wrong problem.
2) Hidden human costs. Behind every “automation” is a person cleaning up edge cases. Depending on the domain and the guardrails it requires, that effort can be substantial (e.g., where compliance is involved). Count it, or your ROI math will be fantasy.
3) Vendor lock‑in. Great features can mask poor portability. Ask about exports, retention, and model control early.
Eight Questions to Align Your Team
Use these to align leadership and de‑risk decisions. (There are no right answers, only honest ones.)
What decision are we improving, and how will we recognize “better” in the next month?
If the AI is occasionally wrong, what happens and how reversible is it?
Which moments must keep a human in the loop (for now)?
What’s our acceptable payback window and how will we measure it?
If we had to switch vendors in six months, what must be portable?
In the after-state, which steps disappear, which move earlier/later, and which still need a person?
What’s our exception policy (who reviews, how fast, with what info) so review queues don’t grow quietly?
If usage doubles, where will friction show up first and how will we notice?
Friction Audit (Quick Lens)
What new handoffs did we just create?
Where can we batch or auto-approve low-impact items?
How do we close exceptions without starting a side process?
(If you can’t answer in a paragraph, you’re not ready to scale.)
A Vendor Conversation That Protects You
Keep it simple. You’re not auditing; you’re seeking clarity.
Data use: Can you disable data retention and any training on our inputs?
Portability: If we leave, what can we export (prompts, logs, configs) and in what format?
Costs: How do we cap spend, and what’s the worst‑case month at our forecasted volume?
Reliability: What happens on timeouts or low confidence?
Fit: Name two customers with processes and volumes similar to ours.
Your goal is to check for transparency and control, not perfect answers.
Getting Started
Start where the work is repeatable, the impact is visible, and the risk is reversible.
Make the change small and measurable.
Share results with the team. Let confidence build before you widen the scope.
You don’t need a massive program to see value. You need a crisp problem and a sensible first step.
What We Can Do Together
If you want help pressure‑testing the opportunity, I run focused working sessions for SMB teams. We’ll:
Clarify the business outcome and the decision(s) that drive it
Identify a right‑sized starting point with the necessary guardrails
Shape a pilot you can believe in, without committing to a specific tool upfront
You’ll leave with a clear picture and practical next steps. The detailed playbook lives inside the engagement. You get results without having to build a methodology yourself.
We’ll pressure-test the after-state with you, so AI removes work instead of moving it around.
Interested?
Book a free 20‑minute call to discuss your specific situation.
Want to learn more about how AI works first?
I offer 1:1 and team coaching sessions to get you started on choosing the right solution for your business.
Final Thoughts
While these insights are not new to project and product management, the label of ‘intelligence’ can sometimes mislead us into overestimating AI’s capabilities.
AI delivers value only when it consistently improves a specific decision’s speed, cost, or accuracy. Define the decision and the guardrails; the tool then becomes a choice, not a gamble.