The breakeven NPS chart showing how the curve shifts dramatically when the negativity bias multiplier (γ) is applied. Standard NPS breakeven at 0 vs. adjusted breakeven at 64.
NPS WOM Research
I proved that most companies' NPS scores are lying to them — with math, psychology, and an argument with AI.
My Role
Independent researcher. Hypothesis formation, quantitative modeling, behavioral science integration, multi-AI workflow orchestration.
Research
Type
Sole researcher
Role
$6.4M
Value destruction found
γ=3
Negativity bias
“During a practice case interview, I told Claude that an NPS of 31 was catastrophic. Claude said I was wrong — that 31 is solid for retail. I built the math to settle the argument. Claude revised my score.”
Key Design Decisions
The moments that shaped the product.
Split screen: Rob's initial assertion ('NPS 31 is catastrophic') vs. Claude's response ('31 is solid for retail') vs. the final model output showing breakeven NPS of 64. The argument that started the research.
Argued with AI and built the math to settle it
The starting point was a disagreement with Claude about what an NPS score actually means economically. Rather than deferring to the AI or ignoring it, I built a quantitative framework that incorporated behavioral science (Kahneman, Baumeister, Gottman) to show that the standard NPS formula has a hidden assumption: that one promoter cancels one detractor. They don't.
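That hidden assumption is easy to see in the formula itself. A minimal sketch (variable names are mine, not from the model):

```python
# Standard NPS: percent promoters (scores 9-10) minus percent detractors
# (scores 0-6). The subtraction implicitly weights one promoter as exactly
# offsetting one detractor -- a +1 and a -1 cancel.

def standard_nps(promoters: int, detractors: int, total: int) -> float:
    """Classic NPS in points: %promoters - %detractors."""
    return 100.0 * (promoters - detractors) / total

# 40% promoters vs. 40% detractors: the formula calls this a wash (NPS = 0),
# even though behavioral research says the detractors do far more damage.
print(standard_nps(40, 40, 100))   # -> 0.0
print(standard_nps(55, 24, 100))   # -> 31.0, the score from the interview
```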
Chart showing breakeven NPS at γ=1 (standard, breakeven=21), γ=3 (breakeven=64), γ=5 (breakeven=77). The curve that shifts the entire analysis.
The negativity bias multiplier nobody was accounting for
Without the negativity bias multiplier (γ), the model's breakeven NPS is 21 — manageable. With γ=3, breakeven jumps to 64. With γ=5, it's 77. Negativity bias is among the most robust findings in behavioral science, and it's the variable everyone else ignores. An original quantitative contribution to CX methodology.
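The direction of that shift can be illustrated with a deliberately simplified sketch: assume each detractor's word of mouth destroys γ times the value a promoter's creates, so net WOM impact is zero when promoters equal γ times detractors. This toy model is my illustration, not the full model from the paper (which adds exponential spread and silent churn, and produces the 21/64/77 figures), so its numbers differ:

```python
# Toy illustration of how a negativity-bias multiplier raises breakeven NPS.
# Assumption (mine, for illustration): net WOM value = P - gamma * D, so
# breakeven requires P = gamma * D, with P + D = 100 - passive_share.

def breakeven_nps(gamma: float, passive_share: float = 0.0) -> float:
    """Breakeven NPS (points) when promoters must offset gamma-weighted
    detractors."""
    non_passive = 100.0 - passive_share
    detractors = non_passive / (gamma + 1.0)
    promoters = gamma * detractors
    return promoters - detractors

for g in (1, 3, 5):
    print(g, round(breakeven_nps(g), 1))
# gamma=1 -> 0.0  (one promoter cancels one detractor, the standard view)
# gamma=3 -> 50.0, gamma=5 -> 66.7 (with no passives)
```

Even in this stripped-down version, the breakeven score climbs steeply with γ: once detractors outweigh promoters, a "positive" NPS can still be destroying value.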
Diagram showing the three-party research workflow: Rob (domain expert, thesis direction) → Claude (analysis, model building) → Gemini (critique, gap identification) → back to Rob for arbitration.
Multi-AI collaboration as research methodology
Claude was the analytical engine and skeptical reviewer. Gemini was the fresh-eyes critic who caught two gaps: linear vs. exponential spread, and silent churn. I brought the domain expertise and the insistence that conventional wisdom was wrong. The most valuable moments were when the AIs disagreed with each other.
Process
Told Claude NPS 31 was catastrophic. Claude disagreed. Built the math.
Introduced negativity bias multiplier (γ) from Kahneman/Baumeister.
Gemini caught two gaps. Claude arbitrated. Model improved.
Excel model, research paper, Medium article, LinkedIn post.
What Shipped
$6.4M
Value destruction
64
Breakeven NPS (γ=3)
$21.8M
Total hidden cost
8.5/10
Case interview score
Breakeven NPS of 64 (vs. the industry assumption of 0). $6.4M in annual value destruction quantified for a typical 100K-customer company. The framework was applied in a subsequent case interview, scoring 8.5/10.
- $6.4M in value destruction quantified through the breakeven model
- Breakeven NPS shifts from 0 to 64 when negativity bias is applied
- $21.8M total hidden NPS cost including silent churn model
- Case interview score revised from 6.5 to 7.5; subsequent case scored 8.5/10
What I Learned
The AI was excellent at computation and research synthesis. But the direction — the insistence that something was missing, the intuition that benchmarks were ignoring behavioral science — came from domain experience. The most productive moments in AI-augmented research aren't when the AI agrees with you. They're when you disagree, and resolving the disagreement produces sharper thinking.
Signals for Recruiters