
Would Customers Have Bought Anyway, Even Without This Campaign?

Attribution vs. incrementality: a maturity model for measuring what your campaigns actually create

Would our customers have bought anyway, even without this campaign? This is a question marketing teams avoid asking, not because they don’t care about the answer, but because they’re not sure they have the tools to find it. 

It was at the center of a session led by David Hardy and Alice Andreo at Optimove Connect 2026, where they unpacked one of the most important measurement challenges in CRM marketing: the difference between getting credit for a customer action and actually causing it. 

This post builds on that discussion, exploring why attribution alone can be misleading, how incrementality reveals true campaign impact, and what teams need to do to make measurement part of their everyday marketing operation.

The Attribution-Incrementality Gap: Where Marketing Budgets Disappear

Attribution is the practice of giving campaigns credit for customer actions, such as a purchase, a sign-up, or a return visit.  

Incrementality goes one step further and asks whether the campaign actually caused that action, or whether the customer would have done it anyway. 

The gap between those two things is where marketing budgets quietly disappear. 

The Credit Problem

Picture this: a customer receives a reactivation email on Monday, a push notification on Wednesday, and a promotional SMS on Friday. They make a purchase on Saturday. 

Which campaign worked? Depending on how you measure it, you’ll get a different answer. Give all the credit to the last message they received, and that’s last-touch attribution. Split the credit evenly across every touchpoint, and that’s multi-touch attribution.  

But neither method answers the question: would they have purchased on Saturday regardless of what you sent? That is the limitation of attribution on its own. It tells you what happened. It does not tell you what you caused. 

Incrementality fills that gap. Rather than dividing credit across campaigns, it establishes a baseline: what customers do when they receive nothing. Then it measures the difference between that baseline and the results from the campaign group. 

The revenue above the baseline is what the campaign actually created. Everything else would have happened anyway. 

The Control Group: How To Find the Baseline

The most reliable way to measure incrementality is to run a holdout test. 

You take a small portion of your target audience, usually around 10 to 20 percent, and deliberately do not send them the campaign. This group becomes your control group. They show you what customers do when left alone. 

Here’s a simple example: 

  • What the control group generated with no campaign: $750  
  • What the campaign group generated: $1,250  
  • Incremental uplift, or the value the campaign actually created: $500

That $500 is the number that matters. Not the full $1,250, because $750 of that revenue was likely to happen whether or not the campaign was sent. 
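The arithmetic above can be sketched as a small helper. One detail worth making explicit: because the holdout is usually only 10 to 20 percent of the audience, the two groups should be compared per customer before scaling back up. (The function name and the equal-sized toy groups below are illustrative, not from the session.)

```python
def incremental_uplift(campaign_revenue, campaign_size,
                       control_revenue, control_size):
    """Revenue the campaign actually created, scaled to the campaign group.

    Normalising by group size matters because the control group is
    typically much smaller than the campaign group.
    """
    per_customer_lift = (campaign_revenue / campaign_size
                         - control_revenue / control_size)
    return per_customer_lift * campaign_size

# Equal-sized toy groups reproduce the article's example:
print(incremental_uplift(1250, 100, 750, 100))  # 500.0
```

With a realistic 10 percent holdout, the same logic applies: $750 from 100 control customers and $12,500 from 1,000 campaign customers would also yield $5 of lift per customer.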

In mature CRM programs, control groups should be easy to apply by default, not treated as a special project every time a team wants to measure impact. The strongest approach uses stratified random sampling, a technique also used in clinical trials and political polling. That means the holdout group is selected to reflect the full mix of the audience, including high-value customers, dormant customers, new customers, and everyone in between. 

That matters because a poorly built control group can distort the results. The more seamlessly this setup is built into the campaign workflow, the more likely teams are to use it consistently. 
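One way to sketch stratified random sampling: shuffle within each segment and hold out the same fraction of each, so the control group mirrors the audience mix. The input shape here (a list of customer-ID and segment pairs) is a hypothetical illustration, not a real platform API.

```python
import random
from collections import defaultdict

def stratified_holdout(customers, holdout_rate=0.1, seed=42):
    """Split customers into (campaign, control) groups so the control
    group reflects each segment's share of the full audience."""
    rng = random.Random(seed)
    by_segment = defaultdict(list)
    for customer_id, segment in customers:
        by_segment[segment].append(customer_id)

    campaign, control = [], []
    for segment, ids in sorted(by_segment.items()):
        rng.shuffle(ids)  # randomise within the stratum
        cut = round(len(ids) * holdout_rate)
        control.extend(ids[:cut])
        campaign.extend(ids[cut:])
    return campaign, control

# Toy audience: 100 VIPs and 900 dormant customers.
audience = ([(f"vip{i}", "vip") for i in range(100)]
            + [(f"drm{i}", "dormant") for i in range(900)])
campaign, control = stratified_holdout(audience, holdout_rate=0.1)
print(len(control))  # 100 held out: 10 VIPs and 90 dormant customers
```

A naive random sample of the whole list could easily over- or under-represent VIPs; stratifying guarantees the 10-to-90 mix survives into the holdout.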

What Happens When You Don’t Measure Incrementality

When teams do not measure incrementality, campaigns can look more successful than they really are. 

A promotion may generate revenue, but that does not mean it created the revenue. It may simply have reached customers who were already likely to buy. A reactivation message may get credit for bringing a customer back, even if that customer was going to return on their own. A weekly newsletter may be treated as routine, even though it could be quietly driving more value than a larger, more expensive campaign. 

Without a control group, you cannot see the difference. 

That creates two risks. First, you may keep investing in campaigns that are taking credit for behavior they did not influence. Second, you may underestimate campaigns that look ordinary on the surface but are creating meaningful incremental value. 

Attribution without incrementality can make assumptions look like results. 

From One Campaign to Your Whole Marketing Operation: A Maturity Model

Measuring incrementality at the campaign level is the starting point. But the real value comes when incrementality becomes part of how the whole marketing operation runs: 

Level 1: Does this campaign work? 

At the most basic level, every campaign should have a control group, and every campaign should produce an uplift number: positive, negative, or neutral. That gives teams a clearer way to decide what to do next. 

If the campaign produces positive uplift, scale it. Broaden the audience, increase frequency, or replicate the approach elsewhere. If the campaign produces a negative uplift, fix it. The campaign is not just failing to help. It may be hurting performance. Look at the offer, creative, timing, or audience strategy. 

If the campaign produces no uplift, cut it. It may not be damaging, but it is not creating value either. That budget and time can be redirected toward campaigns that actually move the needle. 

Most teams do not have the data to make those calls confidently. They are often looking at total revenue, which includes sales that would have happened anyway, instead of incremental lift above baseline. 
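The scale/fix/cut decision above can be expressed as a simple rule. The `noise_band` threshold below is an assumption added for illustration: in practice a proper significance test separates real lift from noise, but the decision structure is the same.

```python
def next_action(uplift, noise_band=50.0):
    """Translate a measured uplift into the decision described above.

    `noise_band` is a hypothetical threshold below which uplift is
    treated as neutral; real programs would use a significance test.
    """
    if uplift > noise_band:
        return "scale"  # positive uplift: broaden audience, replicate
    if uplift < -noise_band:
        return "fix"    # negative uplift: revisit offer, creative, timing
    return "cut"        # no meaningful uplift: redirect budget and time

print(next_action(500.0))   # scale
print(next_action(-120.0))  # fix
print(next_action(10.0))    # cut
```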

Level 2: Does this whole sequence drive value? 

Customers do not experience your marketing as isolated campaigns. They move through journeys over days, weeks, and months: a welcome series after sign-up, a reactivation flow after inactivity, a seasonal program around a major event, or an always-on promotion designed to build long-term engagement. 

That means marketers need to measure more than individual sends. They need to understand whether the full journey is creating incremental value. 

The next step is measuring the impact of an entire journey as a unit, not just one campaign at a time. Teams can also test journey variations against each other. Does a welcome journey with a newsletter at step three perform better than one without it? Does one offer type drive stronger uplift than another? Does a particular sequence work better for one customer segment than another? 

Instead of relying on instinct, teams can answer those questions with measured uplift. 

For always-on campaigns and recurring promotions, tagging adds another layer of insight. Teams can label campaigns by objective, promotion type, season, audience, or lifecycle stage. With consistent tagging and reporting, they can group performance across campaigns that share the same tag. 

What was the total incremental value of all Black Friday campaigns over the last two years? Which offer type works best for lapsed customers? Which lifecycle programs consistently create lift? 

The answers depend on consistent tagging, clean measurement, and a commitment to looking beyond surface-level performance. 
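Tag-based rollups are essentially an aggregation over campaign records. A minimal sketch, assuming each campaign carries a set of tags and a measured incremental value (the record shape and figures are invented for illustration):

```python
from collections import defaultdict

# Hypothetical campaign records: (tags, measured incremental value).
campaigns = [
    ({"black-friday", "2023", "discount"}, 4200.0),
    ({"black-friday", "2024", "free-shipping"}, 5100.0),
    ({"lapsed", "free-shipping"}, 800.0),
]

def uplift_by_tag(campaigns):
    """Sum measured incremental value across all campaigns sharing a tag."""
    totals = defaultdict(float)
    for tags, value in campaigns:
        for tag in tags:
            totals[tag] += value
    return dict(totals)

totals = uplift_by_tag(campaigns)
print(totals["black-friday"])   # 9300.0 across both years
print(totals["free-shipping"])  # 5900.0 across both audiences
```

The rollup is only as trustworthy as the tagging discipline behind it: a Black Friday campaign tagged inconsistently simply disappears from the answer.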

Level 3: Is our whole marketing engine creating real value? 

This is where incrementality stops being a reporting exercise and becomes the foundation for optimization. Optimove’s AI Decisioning Agents can help resolve campaign conflicts automatically. When a customer qualifies for multiple campaigns at the same time, the AI can select the one most likely to be right for that customer, rather than relying on random selection or a manual priority list. 

But the AI is only as good as the data it learns from. 

When every campaign runs with a control group and produces uplift data, the system can learn which campaigns work, which audiences respond, and which messages create real incremental value. Campaigns with positive lift can be shown to more relevant customers. Campaigns with negative or neutral lift can be suppressed where they are unlikely to help. 
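The selection and suppression logic can be sketched as follows. This is a simplified illustration of uplift-driven prioritization, not Optimove's actual AI Decisioning implementation; the per-segment lift table is an assumed input.

```python
def resolve_conflict(eligible_campaigns, lift_by_segment, customer_segment):
    """Pick the single eligible campaign with the highest measured uplift
    for this customer's segment; suppress anything at or below zero.

    Returns None when no campaign shows positive lift (send nothing).
    """
    best, best_lift = None, 0.0
    for campaign in eligible_campaigns:
        lift = lift_by_segment.get((campaign, customer_segment), 0.0)
        if lift > best_lift:
            best, best_lift = campaign, lift
    return best

# Hypothetical per-customer lift, learned from control-group data.
lifts = {("reactivation", "dormant"): 3.2,
         ("newsletter", "dormant"): 0.4,
         ("promo", "dormant"): -1.1}
print(resolve_conflict(["reactivation", "newsletter", "promo"],
                       lifts, "dormant"))  # reactivation
```

Note that the `lifts` table is exactly what always-on control groups produce: without them, every entry is a guess and the selection degrades back to a manual priority list.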

The result is a marketing operation that improves over time. 

But that only works if the measurement foundation is clean. Control groups need to stay on. Exclusion rules need to be in place, so customers are not exposed to too many campaigns at once. Campaigns need to be tagged consistently, so patterns can be detected at scale. 

Two Settings That Make It All Work

Most of this depends on two things being consistently in place. 

1. Control groups, always on 

Every campaign is, in effect, an experiment. It has the potential to change customer behavior. If there is a potential impact, marketers need to know what that impact is. 

That applies to major promotional campaigns, but it also applies to routine communications. Sometimes the campaigns that seem the most ordinary are the ones creating the most value. Without control groups, there is no way to know. 

2. Exclusion rules 

If a customer receives three campaigns in a week and then converts, it becomes difficult to know which message influenced the action, or whether any of them did. 

Exclusion rules help keep measurement clean by ensuring customers receive one campaign at a time. This is not a constraint on marketing. It is what makes marketing measurable. 
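A basic exclusion rule reduces to a cooldown check at send time. This is a minimal sketch under the assumption of a single global cooldown window; real platforms typically track this per channel and per campaign type.

```python
from datetime import date, timedelta

def can_send(last_campaign_date, today, cooldown_days=7):
    """Allow a send only if the customer has received nothing
    inside the cooldown window (a simplified exclusion rule)."""
    if last_campaign_date is None:
        return True  # never contacted: always eligible
    return today - last_campaign_date >= timedelta(days=cooldown_days)

print(can_send(date(2025, 3, 3), date(2025, 3, 5)))   # False: too soon
print(can_send(date(2025, 3, 3), date(2025, 3, 12)))  # True: window elapsed
```

With this rule in place, a Saturday conversion can be traced to at most one recent message, which is what keeps the uplift measurement interpretable.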

Measurement Isn’t a Step. It’s the Whole Process.

The bigger change here is cultural as much as technical. 

Measurement should not be something teams do after a campaign ends to explain the results. It should shape the decisions they make before, during, and after every campaign: which audiences to target, which journeys to build, which offers to prioritize, which campaigns to scale, and which ones to stop. 

When incrementality is embedded into the process, teams stop asking, “Did this campaign perform?” That answer is always available. 

They start asking a better question: is our marketing actually creating value, or are we just taking credit for things customers were going to do anyway? 

That is the question that separates good CRM marketing from great CRM marketing. And with the right measurement infrastructure, it is one marketers can answer every day. 

Watch the full Optimove Connect session to learn how incrementality helps marketing teams measure what they actually create. For more insights, contact us to request a demo.


Optimove Team

Writers in the Optimove Team include marketing, R&D, product, data science, customer success, and technology experts who were instrumental in the creation of Positionless Marketing, a movement enabling marketers to do anything and be everything.

Optimove’s leaders’ diverse expertise and real-world experience provide expert commentary and insight into proven and leading-edge marketing practices and trends.
