How I Actually Use AI in RevOps Work

Not the theoretical version. The specific ways AI tools (Claude primarily) show up in my day-to-day RevOps and fractional commercial leadership work.

I’ve been asked about this enough times that it’s worth writing down properly. How AI actually shows up in the work: not the aspirational version, not the conference-deck version, but the specific things I do differently now than I did two years ago.

The short version: I use AI tools, primarily Claude, in almost every engagement now. Not because it’s a trend I’m chasing, but because a specific set of tasks that used to take a long time now take a fraction of the time, and the output is better because of the iteration capacity that frees up.

Here’s what that actually looks like.


CRM audits and data analysis

When I start a new engagement, one of the first things I do is pull the CRM data and figure out what it’s actually telling me, and where it’s lying.

This used to mean hours of manual work: exporting data, building pivot tables, cross-referencing deal records, trying to spot patterns across hundreds of rows. It was necessary but slow. The analysis still required judgment (knowing what to look for, understanding what the anomalies meant), but the mechanical part was a grind.

Now I’ll take a data export, describe what I’m looking at to Claude, and ask specific questions: what do the stage progression times look like and where are the outliers? Where are deals clustering before they drop off? What’s the relationship between deal size and close rate? What does the win/loss pattern suggest about the ICP?

Claude doesn’t make the judgment calls. I do. But it handles the pattern recognition and the initial structuring of the analysis in a fraction of the time. What used to take a day now takes a couple of hours. That’s not a trivial improvement; in a fixed-scope engagement, it’s an extra day of building rather than analysing.
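
For concreteness, here’s what the mechanical layer of that analysis looks like if you script it directly rather than describing the export to Claude. It’s a sketch under assumptions: the file name, the column names (deal_id, stage, entered_at, exited_at) and the 90th-percentile outlier threshold are all hypothetical stand-ins for whatever the actual CRM export contains.

```python
# A minimal sketch of the mechanical part of a CRM audit, not the
# conversational workflow itself. Assumes a hypothetical stage-history
# export; column names and the outlier threshold are illustrative.
import pandas as pd

history = pd.read_csv("stage_history.csv", parse_dates=["entered_at", "exited_at"])

# How long each deal spent in each stage.
history["days_in_stage"] = (history["exited_at"] - history["entered_at"]).dt.days

# Typical progression time per stage, plus a simple outlier threshold
# (anything beyond the 90th percentile for that stage).
summary = (history.groupby("stage")["days_in_stage"]
           .agg(median="median", p90=lambda s: s.quantile(0.9), deals="count")
           .reset_index())

outliers = history.merge(summary[["stage", "p90"]], on="stage")
outliers = outliers[outliers["days_in_stage"] > outliers["p90"]]

print(summary.sort_values("median", ascending=False))
print(outliers[["deal_id", "stage", "days_in_stage"]]
      .sort_values("days_in_stage", ascending=False))
```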


Qualification frameworks and sales process documentation

This is probably where I use AI most, and where the productivity gain is most visible.

Writing a qualification framework from scratch, specific to a company’s buyers, deal dynamics, and existing team capability, used to be an iterative process that took most of a day. First draft, refine based on the founder’s reaction, iterate again based on the team’s feedback, make it actually usable rather than theoretically correct.

The first draft is now much faster. I describe the business, the buyer, the typical deal shape, the things the best deals have in common, and ask for a structured framework based on those specifics. What comes back isn’t perfect (it needs editing, it needs the nuance that only comes from knowing the specific business), but it’s a solid foundation to work from. The iteration happens faster because the starting point is better.

Same for discovery question guides, objection handling documents, pipeline stage definitions with exit criteria, onboarding playbooks. The work that used to be 70% drafting and 30% refining is now 20% drafting and 80% refining. That ratio change matters more than it sounds.


Pipeline review preparation and follow-through

Pipeline reviews are where revenue functions either get disciplined or get sloppy. The review itself needs to happen; I can’t delegate that to AI. But there are two parts of the pipeline review process where AI has genuinely changed my workflow.

Before a review: I’ll pull the current pipeline and have Claude produce a structured summary of deals by stage, changes since last week, deals that haven’t moved in a certain period, and any anomalies worth raising. It’s a five-minute version of what would otherwise take 20 minutes of manual preparation. I arrive at the review with the right questions already identified.
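
If it helps to see the shape of that summary concretely, here’s a minimal sketch of the same checks done by hand, assuming two hypothetical weekly snapshots of the pipeline. The file names, column names, and the 21-day staleness threshold are illustrative; the point is only what “deals by stage, changes since last week, and stalled deals” means in practice.

```python
# A minimal sketch of the pre-review summary, done by hand rather than by
# describing the export to Claude. Assumes two hypothetical weekly snapshots
# with columns deal_id, name, stage, amount, last_stage_change; all names
# and the staleness threshold are illustrative.
import pandas as pd

STALE_DAYS = 21  # "hasn't moved" threshold; whatever the team has agreed

current = pd.read_csv("pipeline_this_week.csv", parse_dates=["last_stage_change"])
previous = pd.read_csv("pipeline_last_week.csv")

# Deals by stage: count and total value.
by_stage = current.groupby("stage").agg(deals=("deal_id", "count"),
                                        value=("amount", "sum"))

# Changes since last week: stage moves, new deals, deals that disappeared.
merged = current.merge(previous[["deal_id", "stage"]], on="deal_id", how="outer",
                       suffixes=("", "_last_week"), indicator=True)
moved = merged[(merged["_merge"] == "both") &
               (merged["stage"] != merged["stage_last_week"])]
new_deals = merged[merged["_merge"] == "left_only"]
dropped = merged[merged["_merge"] == "right_only"]

# Deals that haven't changed stage in a while.
stale = current[(pd.Timestamp.now() - current["last_stage_change"]).dt.days > STALE_DAYS]

print(by_stage)
print(moved[["deal_id", "name", "stage_last_week", "stage"]])
print(f"{len(new_deals)} new, {len(dropped)} dropped, {len(stale)} stalled > {STALE_DAYS} days")
```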

After a review: call notes, agreed next steps, commitments made, updates that need to go back into the CRM. I used to do this manually and it was always compressed, always imperfect. Now I’ll talk through what happened in the review (what got discussed, what was committed to, what changed on each deal) and have Claude structure it into a review summary and a set of CRM update notes. More consistent, more complete, and the reps actually get clear written follow-up rather than relying on memory.


Board reporting and commercial commentary

Board packs are a specific kind of writing: they need to be accurate, clear, credible under scrutiny, and not padded with noise. Getting that right consistently, every month, is more work than most fractional engagements allow for properly.

I use Claude to draft the commercial narrative sections of board reports. Not to invent the story: I know what the numbers are saying and I know what matters. But to get from “here’s what happened and why it matters” to a properly structured, well-written section in a quarter of the time.

This is one of those uses where the output has actually improved, not just the speed. Having a strong first draft to react to (to cut, to tighten, to correct where the AI’s interpretation misses context) produces better final writing than starting from a blank page.


What I don’t use it for

Judgment calls. Anything where the answer depends on reading a specific person, a specific team, a specific founder’s psychology. The difficult conversations: telling someone their pricing is wrong, or that the problem is how they’ve been selling, or that the engagement has run its course.

Research into things that require current, specific market knowledge. I don’t trust AI-generated competitive intelligence at face value, and I won’t deploy it to clients without verification.

And I don’t use it as a substitute for actually learning a business. The value I bring comes from understanding a specific company’s situation (the buyers, the product, the team, the history) and making good decisions based on that. No amount of AI tooling replaces the time that understanding takes to build.


The honest version of the productivity claim

The time saving is real and it compounds. Not just within a session, but across an engagement. Tasks that used to constrain how much got built in 12 weeks now get done faster, leaving room for more iteration, more depth, better output.

But it’s a productivity gain in execution, not a replacement for what the work actually requires. I still need to understand the business, make the judgment calls, have the difficult conversations, own the accountability. The difference is that more of my time goes there, and less goes on the mechanical layer around it.

That’s worth having. And it’s only going to matter more as the tools improve. The question for every revenue function isn’t whether to use AI; it’s whether the foundations are solid enough that the AI layer will actually help.

Most of them need the foundations first.


For the argument for why foundations have to come before the AI layer, AI Won’t Fix a Broken Revenue Function covers it in full. And for the broader picture of how AI fits into a revenue function, the AI & RevOps page covers the principles, the prerequisites, and the specific use cases where it earns its place.