What sales-lift studies actually measure.
The most-requested measurement and the least-understood. Useful as one input. Dangerous as the whole framework.
Brands ask for sales-lift studies because they’re directional, comparable, and a known artifact. They’ve been the industry’s “ROI proof” for over a decade. Every measurement vendor offers one. Every plan deck closes with one.
Used correctly, they’re useful. Used as the framework — which is how most plans use them — they’re misleading.
What they measure
A sales-lift study measures the difference in conversion (sales, store visits, web traffic) between an audience that was exposed to the campaign and an audience that wasn’t — attributed via panel data or first-party identity match.
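The core computation is simple to sketch. A minimal illustration, with hypothetical numbers (no real study data, and no specific vendor's methodology implied):

```python
# Hypothetical sketch of the core lift computation: compare conversion
# rates between the exposed audience and the unexposed (control) audience.
# All figures below are illustrative.

def sales_lift(exposed_conv: int, exposed_n: int,
               control_conv: int, control_n: int) -> float:
    """Relative lift: (exposed rate - control rate) / control rate."""
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) / control_rate

# 2.4% conversion among exposed vs. 2.0% among control
lift = sales_lift(exposed_conv=2400, exposed_n=100_000,
                  control_conv=2000, control_n=100_000)
print(f"{lift:.1%}")  # 20.0% relative lift
```

Everything downstream of that single ratio is attribution machinery: matching exposure to identity, holding out a clean control, and bounding the measurement window.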
The 2026 versions have improved meaningfully: bigger panels, higher match rates, cleaner control groups. iSpot, Innovid, Nielsen, and Comscore each run validated study programs. The methodology has tightened.
What they don’t measure
- The counterfactual. A sales-lift study compares exposed vs. unexposed. It does not compare the campaign you ran vs. a different campaign you might have run.
- The driver. The lift number is attributed to the campaign as a whole. It can’t tell you whether the lift came from the creative, the audience, the media mix, or the timing.
- Beyond the measurement window. Most studies run 30 to 90 days. Brand effects that compound past that window are invisible to the methodology.
- Negative outcomes. A 0% lift can mean the campaign didn’t work — or it can mean the control group leaked exposure. The study can’t tell you which.
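The leakage point is worth making concrete. If some share of the "unexposed" control group actually saw the campaign, those members convert at the exposed rate, and the measured lift is diluted toward zero. A hypothetical sketch of that arithmetic (illustrative rates, not real data):

```python
# Illustrative dilution arithmetic: a control group that leaks exposure
# pulls the measured lift toward 0%, even when the true lift is real.

def measured_lift(base_rate: float, true_lift: float, leakage: float) -> float:
    """Lift a study would report when `leakage` of the control is exposed."""
    exposed_rate = base_rate * (1 + true_lift)
    # Leaked control members convert at the exposed rate, not the base rate.
    control_rate = (1 - leakage) * base_rate + leakage * exposed_rate
    return (exposed_rate - control_rate) / control_rate

# A real 10% lift on a 2% base rate, seen through leakier control groups:
for leak in (0.0, 0.5, 1.0):
    print(f"leakage={leak:.0%}: measured lift={measured_lift(0.02, 0.10, leak):.1%}")
```

At 100% leakage the reported lift is exactly 0% despite a genuine 10% effect, which is why a flat result is ambiguous on its own.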
The read
What sales-lift studies are good for: comparing media plans against themselves, quarter over quarter. Confirming a hypothesis about a target audience. Building an executive narrative.
What they’re not good for: choosing between two creative executions. Sizing the optimal media mix. Attributing incremental value past the measurement window.
“A sales-lift study is a thermometer, not a diagnosis.”
A senior practitioner uses sales-lift studies as one input in a measurement framework — alongside MMM, brand-tracking, and incrementality testing. Used alone, they create the illusion that media has been measured. The illusion is more dangerous than the absence of the study.
A working note. If you want to look at your measurement framework end-to-end — direct line at hello@odellnco.com.