Your CRM Didn’t Drive That Revenue. Prove It.
- Andrew Goldstein
- Feb 23
- 3 min read
Since moving into consulting, I’ve been watching how lots of CRM “consultants” define “success”, and there’s a clear pattern emerging: big revenue claims, bold percentage lifts, screenshots of attributed sales and increasingly confident statements about AI driving performance gains. But here’s the uncomfortable question that rarely gets asked: was it the investment in AI that caused the uplift, or was it your very hard-working, undervalued CRM team curating exceptional content? Or was it seasonality, discounting, stock availability, paid media support or a broader improvement in consumer spending? Very few of these performance narratives mention control groups, which leads to the only question that really matters: how do you know your email (or your AI, SMS, WhatsApp message, in-app notification) actually caused that revenue?
Revenue Went Up ≠ Your Email Worked
If sales increased after you sent a campaign, that does not automatically mean the campaign drove the increase. It might have, but it also might have been factors such as seasonality, heavy discounting, stock availability, paid media ramping up, or a broader improvement in consumer spending. Without a control group, you are reporting correlation, not causation. In the current commercial environment, that distinction matters. As commercial teams and executive leaders panic and push ever more campaigns and volume through to their customer base, being able to demonstrate the true value (or lack thereof) of those communications is essential.
Attribution Is Comfort. Incrementality Is Truth.
Here’s where it gets more interesting… Many sales reports lean heavily on attributed sales. Email touched the journey at some point? Partial credit. Customer clicked paid media last? Full credit. Attribution models distribute revenue across all channels, and quite often they organically favour channels such as paid search or affiliates because those capture the final, intent-driven interaction. Attributed sales are somewhat useful for understanding how channels interact. But attribution does not answer the only question that really matters: would the customer have purchased anyway? Incrementality does. When you measure uplift versus a control group, you isolate the effect of the communication itself. You create a “what if we did nothing?” benchmark. Attribution tells you where revenue appeared; incrementality tells you whether you created new revenue. Those are not interchangeable metrics.
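To make the distinction concrete, here is a minimal sketch of the incrementality calculation: revenue per customer in the sent group versus the holdout, scaled to the full send. All figures and names are invented for illustration, not taken from any real campaign.

```python
def incremental_revenue(treated_revenue, treated_size, control_revenue, control_size):
    """Revenue generated beyond the 'do nothing' benchmark set by the holdout."""
    treated_rpc = treated_revenue / treated_size   # revenue per customer, sent group
    control_rpc = control_revenue / control_size   # revenue per customer, holdout
    uplift_per_customer = treated_rpc - control_rpc
    return uplift_per_customer * treated_size      # total incremental revenue

# Hypothetical campaign: 90,000 customers mailed, 10,000 held out.
incremental = incremental_revenue(
    treated_revenue=450_000, treated_size=90_000,   # £5.00 per customer
    control_revenue=45_000,  control_size=10_000,   # £4.50 per customer
)
print(round(incremental, 2))  # 45000.0
```

In this invented scenario, an attribution report would happily claim the full £450k; the holdout shows only £45k was actually created by the send.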
The Unpopular Discipline
Control groups are uncomfortable because they require sacrifice. You deliberately suppress a portion of your audience from receiving a campaign. To the “volume crowd” (the stakeholders obsessed with contact volume), this feels totally reckless. “Why wouldn’t we send to everyone?? You’re leaving money on the table!” Ah, no. We are protecting future investment in our CRM strategy. Without incrementality measurement, you cannot tell whether your campaign genuinely works, whether your offer is too rich, whether you’re cannibalising future purchases, or whether customers are buying because of you. Control groups aren’t anti-growth. They enable efficient longer-term growth and help manage CRM channel efficiency.
How Big Should a Control Be?
There’s no magic number. It depends on expected response rate, offer strength, audience size and how quickly you need statistically significant results. Here’s the commercial reality: the larger the control group, the faster you reach significance. If you want rapid clarity, increase the holdout. If you want to protect volume, accept slower learnings. That’s the trade-off. If you want optimisation, not measuring should not be an option.
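The size/speed trade-off can be sketched with a standard two-proportion power calculation: how many customers must sit in the holdout to detect a given conversion uplift? This is a rough approximation (z-scores hard-coded for 95% confidence and 80% power), and the rates used are invented for illustration.

```python
import math

def holdout_size(base_rate, uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size to detect `uplift` over `base_rate`
    with a two-proportion z-test (defaults: 95% confidence, 80% power)."""
    p1, p2 = base_rate, base_rate + uplift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# A 2% base conversion rate and a hoped-for 0.5-point uplift:
print(holdout_size(0.02, 0.005))   # small uplifts demand large holdouts
# Doubling the detectable uplift shrinks the required holdout dramatically:
print(holdout_size(0.02, 0.010))
```

The pattern is exactly the trade-off described above: the subtler the effect you want to prove, the more volume you must sacrifice to the control, or the longer you must wait to accumulate it.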
The Real Risk
CRM teams today are under pressure to prove incremental sales and justify serious mar-tech and resource investment. If CRM wants a seat at the commercial table (not just the send button) it must move beyond pure sales attribution. Control groups are going to be more important than ever for accurately testing the impact of AI on CRM; testing generic (but rich) content vs. personalised/dynamic content created by AI is a robust test of the actual incrementality that AI delivers. The most important CRM metric is simple: what happened versus doing nothing? That’s the number that tells you what to scale up, what to stop, what to refine, where to remove discounts or offers and where to increase contact.
My Final Thought
My father (a retired Chartered Accountant of over 50 years) taught me something very important when I was growing up: “whatever you can measure you can manage.” If you can’t prove uplift versus control, you’re not measuring performance, you’re narrating it. In an environment where budgets and resources are being scrutinised, AI is being layered into every platform and CRM is expected to deliver incremental growth, storytelling isn’t enough. Control groups aren’t a technical preference; they are commercial discipline. They are the difference between optimisation and assumption, between efficiency and noise, between real strategy and a well-designed PowerPoint. Revenue rising is not proof. Attribution is not proof. Volume is not proof. Incrementality is proof. If CRM wants to be treated as a strategic growth engine (not just a distribution channel) then we have to hold ourselves to that standard. Prove it.