Incrementality Testing: Proving Your Marketing Actually Works
Why you need to test if your ads cause sales—or if customers would have bought anyway.
Your Facebook ads are driving conversions. Or so your ad platform swears.
But here’s the question nobody wants to ask: Would those customers have bought anyway?
Maybe they were already interested in your product. Maybe they searched for you on Google. Maybe they saw an email campaign and that’s what actually convinced them.
Your Facebook ad might have just been along for the ride.
This is attribution bias: every marketing channel claims credit for the sale, but the actual causation is unclear.
Incrementality Testing answers the question: “How many sales would not have happened without this marketing?”
The Attribution Problem
Let me walk you through the issue.
You run a Facebook ad campaign. 1,000 people see it. 50 click. 10 buy.
Your ad platform says: “Your ROAS is 5.0x! You made $5,000 from $1,000 in ad spend.”
But what if those 10 people were going to buy anyway?
What if they were already on your mailing list? Or they’d already visited your site? Or they were searching for your product?
The Facebook ad might have accelerated the purchase by a day or two, but it didn’t cause the purchase.
If that’s true, your real ROAS isn’t 5.0x. It’s 0x. You paid for something that would have happened anyway.
The Gold Standard: Randomized Testing
To know if your marketing causes sales, you need a holdout group.
Here’s how it works:
Setup:
- You have 10,000 people in your target audience
- You randomly split them: 5,000 in the “treatment group” (they see your ad) and 5,000 in the “control group” (they see no ad)
Run the test for 4 weeks:
- Treatment group: 50 conversions
- Control group: 25 conversions
Analysis:
- Treatment group conversion rate: 50 / 5,000 = 1%
- Control group conversion rate: 25 / 5,000 = 0.5%
- Incremental lift: 0.5 percentage points (50 - 25 = 25 incremental conversions)
So out of the 50 conversions in the treatment group, 25 were incremental (caused by the ad) and 25 would have happened anyway (baseline conversion).
Your real ROAS is 2.5x, not 5.0x.
(Of course, you still need to subtract the $1,000 ad spend and calculate profit, not just revenue. But at least you know causation.)
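Here’s that math in a few lines of Python (a minimal sketch of the example above; the $100 average order value is implied by a claimed 5.0x ROAS on 50 conversions and $1,000 of spend):

```python
# The holdout math from the example above: 5,000 people per group,
# $1,000 of ad spend, and a $100 average order value (implied by a
# claimed 5.0x ROAS on 50 conversions).
group_size = 5_000
treat_conv, ctrl_conv = 50, 25
ad_spend, avg_order_value = 1_000, 100

treat_rate = treat_conv / group_size             # 1.0%
ctrl_rate = ctrl_conv / group_size               # 0.5%
incremental = treat_conv - ctrl_conv             # 25 conversions the ad caused

claimed_roas = treat_conv * avg_order_value / ad_spend  # 5.0x
real_roas = incremental * avg_order_value / ad_spend    # 2.5x
print(f"Claimed ROAS {claimed_roas:.1f}x, real ROAS {real_roas:.1f}x")
```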
Why Ad Platforms Don’t Tell You This
Your Facebook ad manager will never suggest running an incrementality test.
Why? Because it might show that their ads don’t work as well as claimed.
If you run a test and discover that half your Facebook conversions are not incremental, you’ll shift budget to Google or email.
Facebook has financial incentive to keep you from knowing this.
So, they don’t mention it. They just keep showing you ROAS numbers that include the baseline.
The Challenges of Incrementality Testing
Sounds great, right? Just run a test and know the true ROI.
But there are complications:
1. Sample Size and Time
To get statistically significant results, you need enough people.
If your baseline conversion rate is 2% and your ad lifts it to 3%, you need roughly 3,800 people per group to detect that difference at the standard 95% confidence and 80% power. A smaller lift, say 2% to 2.5%, pushes that to roughly 14,000 per group.
If you only have 5,000 potential customers, you can’t run a proper test.
This is why large brands run incrementality tests constantly, but early-stage startups rarely do.
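You don’t have to take those sample sizes on faith. Here’s a minimal power-calculation sketch using statsmodels, with the conventional 95% confidence and 80% power:

```python
# Required sample size per group for a two-proportion test,
# at alpha=0.05 (95% confidence) and 80% power.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.02
for lifted in (0.03, 0.025):
    h = proportion_effectsize(lifted, baseline)  # Cohen's h effect size
    n = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                     power=0.8, alternative="two-sided")
    print(f"{baseline:.1%} -> {lifted:.1%}: ~{n:,.0f} people per group")
# Prints roughly 3,800 per group for a 2% -> 3% lift,
# and roughly 14,000 per group for a 2% -> 2.5% lift.
```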
2. The Cost of the Control Group
By holding out 50% of your audience, you’re leaving money on the table.
If showing ads to that held-out half would have generated $50,000 in incremental revenue, you’re sacrificing $50,000 to learn the truth.
Sometimes that’s worth it (a $50k lesson that prevents a $1M mistake is a bargain). Sometimes it’s not.
3. Time and Seasonality
A test takes time. You need to run it long enough to account for day-of-week patterns, seasonal shifts, and other noise.
Four weeks is the minimum. Eight weeks is better.
If your business is highly seasonal (e-commerce before holidays), you might need to run the test during non-peak times—which doesn’t reflect reality.
4. Multi-touch Attribution Mess
You’re trying to isolate one channel’s impact, but customers interact with multiple channels.
If your control group can still see your Google search ads and organic listings (just not your Facebook ads), the difference between treatment and control becomes muddied.
This is why we recommend running incrementality tests on your largest, most expensive channels first.
When Incrementality Tests Make Sense
You should run an incrementality test if:
1. This Channel Represents 20%+ of Your Ad Spend
If you’re spending $20k/month on Facebook ads, running a $2-5k test to validate they actually work is wise.
2. The Results Will Change Your Decision
If you’re considering cutting the channel or doubling down, the truth matters. If you’re just maintaining the status quo, the extra precision isn’t worth the cost.
3. You Have Enough Sample Size
Do you have enough customers in your target market to split into treatment and control? If you only acquire 50 customers a month, you can’t run a valid test.
4. You Can Afford the Opportunity Cost
Running a test means not optimizing for 4-8 weeks. Can your business tolerate that?
Running an Incrementality Test: Step by Step
If you decide to run a test, here’s how we do it:
Week 1: Plan
- Define your hypothesis: “Facebook ads drive X% incremental conversions”
- Calculate sample size needed
- Choose test duration (4-8 weeks)
- Choose metric (first-time purchase? repeat purchase? LTV?)
Week 2: Setup
- Create two audience segments: treatment (gets ads) and control (no ads)
- Use Facebook’s “Conversion Lift” study tool (they have built-in incrementality testing now, ironically)
- Or manually create control via audience exclusion (exclude control from all retargeting)
Weeks 3-10: Run the test
- Treatment group: Show ads normally
- Control group: Exclude from all ads (but track them in backend)
- Monitor weekly to ensure segments stay separate
Week 11: Analyze
- Compare conversion rates: treatment vs. control
- Calculate incremental lift: (treatment conversion rate - control conversion rate) / control conversion rate
- Calculate true ROAS: incremental revenue / ad spend
Week 12: Decide
- If incrementality is 50%+: Keep the channel, optimize it
- If incrementality is 20-50%: Channel works but not as well as claimed. Manage expectations
- If incrementality is <20%: Channel is low ROI. Consider shifting budget
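In code, the analyze-and-decide steps boil down to a two-proportion z-test plus the lift, incrementality, and ROAS formulas above. A minimal sketch, where every count and dollar figure is a hypothetical placeholder for your own backend numbers:

```python
# Weeks 11-12 in code: significance check, lift, incrementality,
# true ROAS, and the decision rule above. All counts and economics
# here are hypothetical placeholders -- plug in your own numbers.
from statsmodels.stats.proportion import proportions_ztest

treat_conv, treat_n = 600, 40_000       # treatment: conversions, audience size
ctrl_conv, ctrl_n = 450, 40_000         # control: conversions, audience size
avg_order_value, ad_spend = 80, 10_000  # hypothetical economics

_, p_value = proportions_ztest([treat_conv, ctrl_conv], [treat_n, ctrl_n])

treat_rate, ctrl_rate = treat_conv / treat_n, ctrl_conv / ctrl_n
lift = (treat_rate - ctrl_rate) / ctrl_rate             # relative lift: 33%
incrementality = (treat_rate - ctrl_rate) / treat_rate  # share of treatment conversions the ads caused: 25%
true_roas = (treat_conv - ctrl_conv) * avg_order_value / ad_spend  # 1.2x

print(f"p={p_value:.1g}, lift={lift:.0%}, "
      f"incrementality={incrementality:.0%}, true ROAS={true_roas:.1f}x")
if p_value >= 0.05:
    print("Not significant: run longer or with bigger groups before deciding.")
elif incrementality >= 0.50:
    print("Keep the channel and optimize it.")
elif incrementality >= 0.20:
    print("Channel works, but not as well as claimed: manage expectations.")
else:
    print("Low ROI: consider shifting budget.")
```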
A Real Example
Let me show you a real incrementality test result.
Company: E-commerce subscription box service
Channel: Google Shopping Ads (paid search)
Monthly budget: $30,000
Claimed ROAS: 3.5x
Test Results:
- Treatment group (showed ads): 2,000 conversions out of 100,000 exposed = 2% conversion rate
- Control group (no ads): 800 conversions out of 100,000 = 0.8% conversion rate
- Incremental lift: (2% - 0.8%) / 0.8% = 150% or 1,200 incremental conversions
Revenue impact:
- Incremental conversions: 1,200
- Average order value: $60
- Incremental revenue: $72,000
- Ad spend: $30,000
- True ROAS: 2.4x (vs. claimed 3.5x)
The channel still works (positive ROI), but it’s about 30% less effective than Google claimed.
Decision: Keep the channel, but reduce budget from $30k to $20k/month and invest the savings in lower-CAC channels (organic search, email).
Why This Matters
Here’s what would have happened without the test:
You’d trust Google’s 3.5x ROAS number. You’d scale spend to $50k/month. You’d expect $175k in incremental revenue.
You’d actually get $120k in incremental revenue (2.4x true ROAS).
You’d be surprised by the $55k shortfall. You’d scramble to find the “missing” revenue. You’d either cut the channel or blame your team.
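That gap takes four lines to compute once you have the tested number (same figures as above):

```python
# Scaling the budget on the claimed number vs. the tested number
# (figures from the example above).
claimed_roas, true_roas = 3.5, 2.4
scaled_spend = 50_000

expected = claimed_roas * scaled_spend  # $175,000 you'd plan around
likely = true_roas * scaled_spend       # $120,000 you'd actually get
print(f"Shortfall at $50k/month: ${expected - likely:,.0f}")  # $55,000
```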
By running the test, you made an informed decision instead of guessing.
The Limitation: Incrementality Doesn’t Account for Ecosystem Effects
One more thing to understand: Incrementality tests measure isolated impact.
They don’t capture the fact that multiple channels work together.
Example:
- You run Facebook ads (someone sees it, doesn’t click)
- They search for your brand on Google (they click the paid search ad)
- They convert
Which channel gets credit? In an incrementality test, we’d say Google Search is incremental. Facebook isn’t.
But what if Facebook created the awareness that caused the Google search?
A true ecosystem analysis would say: “Facebook laid the groundwork, Google closed the deal. Both contributed.”
Single-channel incrementality tests can’t capture that. They’re useful for understanding direct causation, but they miss indirect effects.
For most businesses, we recommend: Run incrementality tests on your top 2-3 channels. For everything else, trust your attribution model.
The Takeaway
Stop trusting ad platforms to tell you if they work.
Some of your conversions are incremental. Some are baseline (would have happened anyway).
If you’re spending $100k+ on a channel, spend $5k to find out the truth.
An incrementality test tells you if your marketing actually causes revenue, or if you’re just tagging along for the ride.
We help you design and run these tests. We analyze the results. We make recommendations.
It’s unsexy work. But it’s the difference between a marketing strategy that actually works and one that just looks good in a dashboard.