The number of AI tools targeting sales teams has exploded in the last two years. Conversation intelligence, AI SDRs, predictive lead scoring, automated proposal generation, CRM assistants, email sequence builders, call coaching, win probability models. The list is long and it gets longer every week.
Most sales leaders I talk to are somewhere between overwhelmed and sceptical. Some have bought several tools and aren’t sure any of them are working. Some are ignoring the whole category and hoping it settles down. A few are getting genuine value from a small number of well-chosen tools.
The third group has something in common. It’s not that they found better tools. It’s that they were clear about what problem they were solving before they went looking for a solution.
The distraction is the main risk
The genuine risk of the AI tools explosion isn’t that you’ll pick the wrong tool. It’s that you’ll spend time and attention evaluating, trialling, onboarding, and abandoning tools when that time would be better spent on the fundamentals.
A sales team that doesn’t have a documented qualification process, a reliable CRM, and a pipeline that reflects reality doesn’t need better technology. It needs those three things. The technology will still be there in six months. The business problem compounds every month you don’t address it.
I’ve seen this go wrong in a specific pattern: a founder reads about an AI tool that promises to solve a symptom (let’s say “reps aren’t following up consistently enough”), buys the tool, onboards the team, watches adoption stall because the reps don’t see the value, concludes the tool doesn’t work, moves on. The actual problem (no structured follow-up process, no accountability for next steps) is still there.
The tool wasn’t wrong. It was applied to the wrong problem in the wrong order.
A framework for evaluating what matters
When I’m looking at any AI tool for a revenue function (for my own use or for a client), I ask three questions.
What specific problem does this solve? Not a category of problem (“improving pipeline visibility”) but a specific, measurable problem (“we don’t know why deals in the proposal stage are dying before they close”). If you can’t name the specific problem, you can’t evaluate whether the tool solves it.
Do we have the data quality and process discipline to use it properly? AI tools that operate on your CRM data are only as good as that data. AI tools that sit in your sales workflow are only as good as your workflow. A predictive lead scoring tool pointed at a poorly segmented, inconsistently maintained database produces confident-sounding noise. The tool isn’t failing; the foundation is.
What’s the cost of getting it wrong? Some tools have a low cost of failure: you try them, they don’t work, you stop. Others embed themselves into the workflow in ways that are hard to undo, create dependency, or distort your data in ways that take months to clean up. Weigh those risks honestly before you start.
The tools that tend to actually work
This isn’t a product review. I’m not going to name specific vendors because the market is moving fast and any recommendation I make today might be outdated in six months. But there are categories of tools where I consistently see genuine value, and categories where I’m more sceptical.
Genuine value, consistently:
Conversation intelligence: recording, transcribing, and analysing sales calls. Useful for coaching, for identifying patterns across a large call volume, for onboarding new reps. The main requirement is that someone actually reviews the analysis and acts on it. Tools that produce insights nobody reads are just expensive storage.
AI writing assistance for documentation and playbooks. The drafting speed improvement is real and the quality ceiling is high if you’re willing to edit properly. This is where I’ve seen the most consistent ROI, not because the AI writes better than good humans, but because the iteration speed means you get to a better output faster.
CRM data analysis and anomaly detection. Identifying deals that haven’t moved in too long, flagging records that don’t have required fields, surfacing patterns in win/loss data. Useful if the CRM is in reasonable shape. Counterproductive if it isn’t.
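To make the stalled-deal and missing-field checks concrete, here’s a minimal sketch in Python. The record layout, field names, and 30-day threshold are assumptions for illustration, not any specific vendor’s product or CRM schema:

```python
from datetime import date, timedelta

# Illustrative CRM records; the field names here are assumptions, not a real schema.
deals = [
    {"id": "D-1", "stage": "Proposal", "owner": "Sam",
     "last_stage_change": date(2024, 1, 5), "close_date": date(2024, 3, 1)},
    {"id": "D-2", "stage": "Qualification", "owner": None,
     "last_stage_change": date(2024, 2, 20), "close_date": None},
]

REQUIRED_FIELDS = ["owner", "close_date"]   # fields every record must have
STALE_AFTER = timedelta(days=30)            # hypothetical staleness threshold

def audit(deals, today):
    """Flag deals that haven't moved in too long or are missing required fields."""
    issues = []
    for deal in deals:
        if today - deal["last_stage_change"] > STALE_AFTER:
            issues.append((deal["id"], "stale: no stage change in 30+ days"))
        for field in REQUIRED_FIELDS:
            if deal.get(field) is None:
                issues.append((deal["id"], f"missing required field: {field}"))
    return issues

for deal_id, problem in audit(deals, today=date(2024, 3, 1)):
    print(deal_id, "-", problem)
```

The point of the sketch is that the checks themselves are trivial; the value depends entirely on the records being accurate, which is why the same analysis is useful on a clean CRM and counterproductive on a messy one.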
More sceptical about:
AI SDRs and fully automated outbound. The economics look attractive until you factor in deliverability degradation, the reputational cost of bad personalisation, and the compliance risks in markets with strong email regulations. The best outbound is still high-specificity, high-relevance, and that requires human judgment about what’s specific and relevant for a given prospect.
“AI coaching” tools that provide automated feedback on rep performance. Useful in a limited way for pattern recognition at scale. Not a substitute for real sales coaching from someone who knows the business and can have a genuine conversation with a rep about what’s not working and why.
Predictive close probability. Sounds extremely useful, rarely is. Predictive models are only as good as the historical data they’re trained on. Most companies don’t have enough clean, consistent historical data for the models to be reliable. The output looks precise and often isn’t. Forecast accuracy problems are almost always a process and data quality problem before they’re a tooling problem.
What I actually recommend
Start with the problem, not the tool.
If you have a documented qualification process that the team consistently applies, a CRM that reflects reality, and a pipeline you can forecast from, then there’s a strong case for adding AI tools that build on those foundations. The investment will compound.
If you don’t have those things, build them first. The AI layer will still be there. It’ll be more useful once there’s something worth accelerating.
And when you do start adding tools, start with one. Not because you should be cautious about technology, but because every new tool requires adoption time, configuration work, and attention. Three tools that nobody uses properly are worse than one tool that’s embedded and working.
The businesses I work with that are getting real value from AI in their revenue function all made the same choice: they got the foundations right, then picked one or two tools that addressed specific, demonstrated problems, and they used them properly. That’s it. It’s not more complicated than that.
The noise around AI in sales is loud. The signal (what actually works, for what problems, in what contexts) is quieter. But it’s findable, if you start from the problem rather than the solution.
If you want to understand the foundations that need to be in place before AI tools add real value, What RevOps Actually Means covers the full infrastructure picture. And the AI & RevOps page covers how I integrate AI tools into revenue operations in practice — with the specific things that work and the things that don’t.