Incrementality vs Attribution: The Film Room vs Scoreboard Problem
Quick Answer: incrementality vs attribution
Attribution tracks the customer journey to assign conversion credit across touchpoints, while incrementality measures the actual lift generated by marketing efforts through controlled experiments. Attribution excels at tactical optimization within channels but suffers from blind spots around cross-device behavior and long consideration cycles. Incrementality provides scientific validation of true marketing impact by comparing test groups to control groups, but requires significant time and resources. The two serve different purposes: attribution for fast optimization decisions, incrementality for validating strategy and budget allocation. The most effective approach combines both tools rather than choosing one.
| Dimension | Incrementality Testing | Attribution Modeling |
|---|---|---|
| Core Question | What would have happened if we didn't run this campaign? | Which touchpoints contributed to this conversion? |
| Methodology | Controlled experiments with test and control groups | Journey tracking and credit assignment models |
| Time Frame | Weeks to months for meaningful results | Real-time to daily reporting |
| Blind Spots | Limited to testable scenarios, expensive to run frequently | Cross-device behavior, view-through impact, offline influence |
| Best Use Case | Validating channel effectiveness and budget allocation | Tactical optimization within campaigns |
| Statistical Validity | Causal estimates with high statistical confidence when properly designed | Directional insights, not causal proof |
| Operational Speed | Slow decision-making due to test duration | Fast optimization and daily adjustments |
| Cross-Channel View | Measures total ecosystem lift including offline | Limited to trackable digital touchpoints |
The Film Room vs Scoreboard Problem
As Patrick Gilbert argues in *Never Always, Never Never*, modern marketing has confused measurement tools with evaluation systems. Attribution and incrementality aren't competing methodologies—they're different tools being misused for the wrong jobs. Attribution functions like a box score, showing you who touched the ball last. Incrementality works like game film, revealing what actually changed the outcome. The dysfunction in marketing measurement comes from treating film room tools like final scoreboards.
This confusion has created perverse incentives across the industry. When agencies survive based on attributable ROAS, their work tilts toward whatever is easiest to measure and claim credit for. The result is marketing that looks precise on dashboards but spins helplessly in practice. The most effective organizations understand that measurement exists to improve decision-making, not to assign credit or blame.
How Attribution Really Works
Attribution modeling assigns conversion credit across the digital touchpoints that preceded a purchase: clicks, impressions, site visits, emails, retargeting ads. It operates at the individual customer level, tracking behavior across sessions and devices to map the journey from awareness to conversion.
The strength of attribution lies in its operational utility. It helps marketers optimize within existing systems—which creative, which audience, which keyword, which placement. Attribution data flows quickly, enabling daily or hourly adjustments to campaign tactics. For performance marketers managing hundreds of ad groups across multiple platforms, attribution provides the fast feedback loops necessary for tactical optimization.
But attribution has systematic blind spots that compound when it becomes the primary measurement framework. It struggles with cross-device behavior, long consideration cycles, and anything that happens offline. More fundamentally, attribution is outcome-oriented rather than causal. It tells you what touchpoints were present before a conversion, not what caused the conversion to happen.
> Attribution measures correlation, not causation. It shows who was in the room when something happened, not who made it happen.
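The mechanics of credit assignment are simple rules, which is part of the problem. Here is a minimal Python sketch of three common rule-based models; the function name and example journey are illustrative, not from the book:

```python
def assign_credit(touchpoints, model="linear"):
    """Assign conversion credit (summing to 1.0) across an ordered
    list of touchpoints using common rule-based models."""
    n = len(touchpoints)
    if n == 0:
        return {}
    if model == "last_click":
        # All credit to the final touchpoint before conversion
        credit = [0.0] * n
        credit[-1] = 1.0
    elif model == "linear":
        # Equal credit to every touchpoint
        credit = [1.0 / n] * n
    elif model == "position_based":
        # U-shaped: 40% first touch, 40% last touch, 20% split among the middle
        if n == 1:
            credit = [1.0]
        elif n == 2:
            credit = [0.5, 0.5]
        else:
            middle = 0.2 / (n - 2)
            credit = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(touchpoints, credit))

journey = ["display_ad", "email", "branded_search"]
print(assign_credit(journey, "last_click"))
print(assign_credit(journey, "position_based"))
```

Note what every model has in common: each distributes exactly one conversion's worth of credit across the touchpoints that happened to be present. None of them asks whether the conversion would have happened with no touchpoints at all.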
The Science of Incrementality
Incrementality testing answers the question attribution cannot: "What would have happened if we didn't do this?" It holds out a control group that doesn't see the marketing, compares it to a group that does, and measures the difference. This is the closest marketing gets to true scientific experimentation.
The power of incrementality lies in its ability to separate marketing-driven behavior from what would have happened anyway. According to research from the Ehrenberg-Bass Institute, many conversions attributed to marketing would have occurred regardless—customers were already planning to purchase. Incrementality testing identifies the true lift generated by marketing efforts.
> Incrementality is powerful, but it is probabilistic, not definitive. Results are influenced by timing, creative, market conditions, and randomness.
>
> — Patrick Gilbert, *Never Always, Never Never*
However, incrementality testing requires significant resources and patience. Meaningful tests take weeks or months to generate statistically significant results. They're expensive to design and execute properly, making it impractical to run them constantly or at granular levels. Many marketers run single tests and treat the results as definitive truth, ignoring that the same test run under different conditions could yield materially different outcomes.
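The test-versus-holdout comparison described above reduces to a lift calculation plus a significance test on two conversion rates. A minimal sketch, assuming aggregated group-level counts; the function and the holdout numbers are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def incrementality_test(test_conv, test_n, ctrl_conv, ctrl_n):
    """Compare test vs. holdout conversion rates: absolute and
    relative lift plus a two-proportion z-test p-value."""
    p_test = test_conv / test_n
    p_ctrl = ctrl_conv / ctrl_n
    lift_abs = p_test - p_ctrl
    lift_rel = lift_abs / p_ctrl if p_ctrl else float("inf")
    # Pooled standard error under the null hypothesis of zero lift
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n))
    z = lift_abs / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return {"lift_abs": lift_abs, "lift_rel": lift_rel, "p_value": p_value}

# Hypothetical holdout test: 2.4% conversion in the exposed group
# vs. 2.0% in the control group, 50,000 users each
result = incrementality_test(test_conv=1200, test_n=50000,
                             ctrl_conv=1000, ctrl_n=50000)
print(f"relative lift {result['lift_rel']:.1%}, p = {result['p_value']:.4f}")
```

The point of the p-value is exactly the caution Gilbert raises: a single test yields a probabilistic estimate, which is why repeated tests under varied conditions matter more than any one result.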
The Bills' Roster Construction Model
Gilbert uses the Buffalo Bills' roster construction as an analogy for marketing measurement. NFL general managers can't evaluate players solely by box scores—who scored touchdowns. They need to understand how each player contributes to the system's overall performance, even players who never appear in the stat sheet.
In 2024, Bills running back James Cook scored 16 rushing touchdowns but represented just 1.2% of the team's salary cap. Meanwhile, three offensive linemen with zero touchdowns each received larger cap allocations. Attribution would suggest Cook was the most valuable offensive player. Film study reveals the offensive line created the conditions that made Cook's success possible.
- Box Score Thinking: James Cook gets credit for 16 touchdowns
- Film Room Analysis: Offensive linemen created space and protection
- Smart Allocation: Higher investment in the system enablers
- Long-term Strategy: Building sustainable competitive advantage
Marketing works the same way. Attribution overvalues bottom-funnel tactics like Google Search and retargeting because they're easiest to track. Incrementality testing can validate whether upper-funnel brand campaigns create the conditions that make bottom-funnel conversion more efficient. The most effective approach combines both perspectives rather than choosing sides.
When Each Tool Excels
Attribution excels in environments requiring fast tactical optimization. E-commerce brands running hundreds of product campaigns across multiple platforms need daily performance feedback. Attribution helps identify which ads, audiences, and keywords drive immediate response. It's the measurement equivalent of real-time coaching adjustments during a game.
Incrementality testing proves most valuable for strategic decisions and budget allocation. When evaluating whether to launch on a new platform or double investment in video creative, incrementality provides the scientific validation needed for confident decision-making. It helps marketing leaders separate effective channels from correlated channels.
> Don't build your marketing strategy around box scores. Build it the way Brandon Beane builds a roster—with context, balance, and long-term thinking.
The dysfunction emerges when organizations use these tools for purposes they weren't designed for. Treating attribution like a final verdict on campaign effectiveness creates the illusion of precision while missing the bigger picture. Using incrementality results to make daily optimization decisions ignores the tool's inherent latency and statistical variability.
Building a Measurement System
The most sophisticated marketing organizations treat measurement as a system, not a single tool. They use Media Mix Modeling for budget allocation, incrementality for strategic validation, and attribution for tactical optimization. Each tool makes the others better when used in proper sequence and for appropriate purposes.
This requires separating two distinct activities: evaluating campaigns versus evaluating teams. Campaign evaluation belongs in the film room—collaborative analysis focused on learning and improvement. Team evaluation happens at the business level: revenue growth, profit, market share, and the organization's ability to sustain momentum over time. The tragedy of modern marketing measurement is confusing these two activities.
Frequently Asked Questions
Can incrementality testing replace attribution modeling?
No, they serve different purposes. Incrementality provides strategic validation but is too slow for daily optimization decisions. Attribution enables fast tactical adjustments within campaigns. The most effective approach combines both tools rather than choosing one.
Why does attribution favor bottom-funnel channels like Google Search?
Attribution tracks the final touchpoints before conversion, which tend to be bottom-funnel tactics. It struggles to measure upper-funnel brand impact, cross-device behavior, and offline influence that create conditions for conversion.
How long do incrementality tests need to run for reliable results?
Most incrementality tests require 2-8 weeks for statistical significance, depending on conversion volume and expected lift size. Single tests should be repeated under different conditions since results vary based on timing, creative, and market conditions.
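The duration estimate follows from a standard two-proportion sample-size calculation: required sample per group divided by weekly traffic. A rough sketch with hypothetical inputs (2% base conversion rate, 10% expected relative lift, 25,000 users per group per week):

```python
from statistics import NormalDist

def weeks_to_significance(base_rate, expected_lift, weekly_users_per_group,
                          alpha=0.05, power=0.8):
    """Estimate weeks of traffic needed per group to detect a relative
    conversion lift, via a standard two-proportion sample-size formula."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = nd.inv_cdf(power)           # desired statistical power
    p1 = base_rate
    p2 = base_rate * (1 + expected_lift)
    # Required sample size per group
    n = ((z_alpha + z_beta) ** 2
         * (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
    return n / weekly_users_per_group

print(round(weeks_to_significance(0.02, 0.10, 25_000), 1))
```

With these assumed inputs the estimate lands within the 2-8 week range above; smaller lifts or thinner traffic push the duration out quickly, since required sample size grows with the inverse square of the detectable difference.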
Which measurement approach works better for small businesses?
Small businesses typically start with attribution for tactical optimization due to budget constraints and need for fast feedback. Incrementality testing becomes valuable as spend scales and strategic questions about channel effectiveness arise.
Do incrementality tests work for brand awareness campaigns?
Yes, but they require different metrics than direct conversion lift. Incrementality can measure lift in brand search volume, website traffic, social mentions, or survey-based brand awareness metrics to validate upper-funnel impact.
How do iOS changes affect attribution versus incrementality?
iOS privacy changes have degraded attribution accuracy by limiting cross-app tracking, making incrementality testing relatively more valuable. However, incrementality tests still work effectively since they compare aggregated group performance rather than individual user tracking.
From the Book
Chapters 20 and 21 of *Never Always, Never Never* expose how the obsession with measurement has created an industry that can explain everything in detail—except growth itself. Gilbert reveals why perfect attribution is impossible and how the illusion of control undermines marketing effectiveness.