In the previous article, we introduced the concept and methodology framework of causal inference. Today, we'll dive deep into the most commonly used causal inference method in marketing — Difference-in-Differences (DID).
What is Difference-in-Differences?
The core idea of DID is: compare the difference in changes between the "treatment group" and "control group" before and after intervention to estimate causal effects.
Simply put:
Causal Effect = (Treatment Group After - Treatment Group Before) - (Control Group After - Control Group Before)
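This arithmetic can be sketched in a few lines. All four averages below are made up purely for illustration:

```python
# Illustrative DID computation from four group averages (all numbers hypothetical).
treat_before, treat_after = 100.0, 130.0      # treatment group mean outcome
control_before, control_after = 100.0, 110.0  # control group mean outcome

# (Treatment After - Treatment Before) - (Control After - Control Before)
did = (treat_after - treat_before) - (control_after - control_before)
print(did)  # 20.0: the estimated causal effect
```

The treatment group rose by 30, but the control group rose by 10 over the same window, so only 20 of the lift is attributed to the intervention.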
Why Do We Need a Control Group?
Directly comparing before/after differences is inaccurate because:
- Natural Trends: Sales may change over time even without intervention
- Seasonal Factors: Holiday and promotion season impacts
- External Events: Competitor actions, policy changes, etc.
DID introduces a control group to eliminate these confounding factors.
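A small made-up example shows the bias concretely: when both groups share a natural upward trend, the naive before/after comparison overstates the effect, while DID differences the trend out. The trend and effect sizes below are arbitrary:

```python
# Hypothetical sketch: a shared natural trend biases the naive before/after
# comparison; DID removes it. All numbers are invented for illustration.
trend = 15.0        # natural growth both groups experience over time
effect = 10.0       # true causal effect of the intervention

control_before, treat_before = 50.0, 50.0
control_after = control_before + trend        # control group: trend only
treat_after = treat_before + trend + effect   # treatment group: trend + effect

naive = treat_after - treat_before            # mixes trend with effect
did = (treat_after - treat_before) - (control_after - control_before)
print(naive, did)  # 25.0 10.0 — naive overstates; DID recovers the true effect
```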
Marketing Application Scenarios
Scenario 1: Evaluating Promotion Effectiveness
An e-commerce platform wants to evaluate the effect of a "discount campaign":
- Treatment Group: Users who participated in the discount
- Control Group: Users who didn't participate
- Intervention: Discount campaign
DID analysis yields the true incremental effect of the discount, net of natural growth and seasonal factors.
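In practice, the DID estimate is usually obtained as the coefficient on the treated-by-post interaction in a regression. The sketch below simulates user-level data with a known incremental effect (all coefficients and sample sizes are assumptions for illustration) and recovers it with ordinary least squares:

```python
import numpy as np

# Hedged sketch: the DID estimate equals the coefficient on the
# treated x post interaction term. Data are simulated with a known
# incremental effect of 5.0; every number here is hypothetical.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)    # 1 = participated in the discount campaign
post = rng.integers(0, 2, n)       # 1 = observed after the campaign started
y = (20.0 + 3.0 * treated          # baseline difference between groups
     + 4.0 * post                  # natural growth over time
     + 5.0 * treated * post        # true incremental effect of the discount
     + rng.normal(0, 1.0, n))      # noise

# Design matrix: intercept, treated, post, treated x post
X = np.column_stack([np.ones(n), treated, post, treated * post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(coef[3], 2))           # roughly 5.0, the simulated true effect
```

The regression form is convenient because it extends naturally to control variables and standard errors, which the four-averages arithmetic does not.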
Scenario 2: Evaluating Ad Effectiveness
A brand wants to evaluate the effect of a new product advertisement:
- Treatment Group: Users who saw the ad
- Control Group: Users who didn't see the ad
- Intervention: Ad campaign
Scenario 3: Evaluating New Feature Launch
An app launched a new feature and wants to evaluate its impact on user retention:
- Treatment Group: Users who used the new feature
- Control Group: Users who didn't use it
- Intervention: New feature launch
DID Assumptions
1. Parallel Trends Assumption
This is the most important assumption for DID: in the absence of the intervention, the outcomes of the treatment and control groups would have followed parallel trends over time.
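A common informal check is to compare the two groups' trends in the pre-intervention period. The sketch below fits a line to each group's (made-up) pre-period series and compares slopes; similar slopes support, but can never prove, the assumption:

```python
import numpy as np

# Rough pre-trend check (a sketch, not a formal test): fit a line to each
# group's pre-intervention series and compare slopes. Series are invented.
months = np.arange(6)  # six pre-intervention months
treat_pre = np.array([100, 103, 106, 110, 113, 116], dtype=float)
control_pre = np.array([80, 83, 86, 89, 93, 96], dtype=float)

slope_t = np.polyfit(months, treat_pre, 1)[0]   # treatment group trend
slope_c = np.polyfit(months, control_pre, 1)[0] # control group trend
print(slope_t, slope_c)  # similar slopes are consistent with parallel trends
```

The levels differ (100 vs. 80), which is fine: DID only requires the groups to move in parallel, not to start from the same point.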
2. No Spillover Effects
The intervention does not indirectly affect the control group (for example, discount participants influencing the purchases of non-participants).
3. Stable Unit Treatment Value Assumption (SUTVA)
Each unit's outcome depends only on its own treatment status, and there is a single, well-defined version of the intervention (no hidden variations in how it is delivered).
Practical Case
Case: Promotion Effectiveness Evaluation for a Retail Brand
Data Collection:
- Collected 12 months of historical sales data
- Member day campaign launched in June
- Regular weekends without member days as control
Analysis Results:
- Simple comparison: Sales increased 45% on member day
- DID analysis: After excluding natural growth and seasonal factors, true incremental effect is 28%
- Conclusion: The member day campaign is effective, but the effect is smaller than the raw numbers suggest
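The gap between the two numbers can be read off directly. The 45% simple lift and the 28% DID estimate come from the case above; the implied 17-point confounded share is our inference, assuming both figures are measured in percentage points on the same base:

```python
# Back-of-envelope view of the case numbers. The 45% naive lift and the
# 28% DID estimate are given; the 17-point gap attributed to natural
# growth and seasonality is inferred, not separately measured.
simple_lift = 0.45   # member-day sales vs. regular weekends, naive comparison
did_effect = 0.28    # incremental effect after DID adjustment
implied_confound = round(simple_lift - did_effect, 2)
print(implied_confound)  # 0.17: share of the naive lift explained by confounders
```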
Conclusion
Difference-in-Differences (DID) is the core method for marketing effectiveness evaluation. It helps us:
- Exclude interference from natural trends and seasonal factors
- Accurately measure the true incremental effect of marketing campaigns
- Optimize marketing budget allocation
Mastering DID makes marketing effectiveness evaluation more accurate and more rigorous.
Next: We'll cover Instrumental Variables (IV) to help solve endogeneity problems in marketing. Stay tuned!

