At a glance
- CSAT (Customer Satisfaction): 'How satisfied were you with X?' on a 1–5 scale. Per-interaction. Fast feedback loop.
- NPS (Net Promoter Score): 'How likely are you to recommend us?' on a 0–10 scale. Overall loyalty. Quarterly cadence.
- CES (Customer Effort Score): 'How easy was it to X?' on a 1–7 or 1–5 scale. After self-service or onboarding flows.
CSAT: did today's customer have a good day?
CSAT measures a single interaction — a support ticket, a delivery, an onboarding call. The question is narrow, the scale is small, the response takes five seconds.
Use CSAT when you want granular signal you can attribute to a specific moment. You can slice it by agent, channel, shift, product — and spot problems before they become NPS dips. CSAT's weakness: it only measures the interactions you ask about, so which interactions you choose to survey shapes the number.
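CSAT is commonly reported as the top-box share: the fraction of responses scoring 4 or 5 on a 1–5 scale. A minimal sketch of computing it and slicing by agent — the survey rows and agent names here are hypothetical, and the top-box convention varies by team:

```python
from collections import defaultdict

def csat(scores, top_box=(4, 5)):
    """Share of responses in the top box (commonly 4-5 on a 1-5 scale)."""
    return sum(s in top_box for s in scores) / len(scores)

# Hypothetical survey rows: (agent, score)
responses = [
    ("ana", 5), ("ana", 4), ("ana", 2),
    ("ben", 5), ("ben", 5), ("ben", 4),
]

# Slice by agent to attribute the signal to a specific moment/owner.
by_agent = defaultdict(list)
for agent, score in responses:
    by_agent[agent].append(score)

for agent, scores in sorted(by_agent.items()):
    print(f"{agent}: CSAT {csat(scores):.0%}")
```

The same grouping works for channel, shift, or product: swap the key you bucket on.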
NPS: how's the whole relationship going?
NPS is the loyalty question. It's deliberately vague — 'would you recommend us' integrates across every interaction the customer has ever had with you. That vagueness is what makes it comparable across companies and industries.
Use NPS when you want a single number that captures overall health over time. It's lagging and slow-moving — a regression today shows up in NPS in weeks, not hours. Don't try to use NPS to debug a single feature launch; use CSAT or in-product metrics for that.
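The NPS calculation itself is standard: respondents scoring 9–10 are promoters, 0–6 are detractors, 7–8 are passives, and the score is the percentage of promoters minus the percentage of detractors (so it ranges from −100 to +100). A minimal sketch, with made-up scores:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count in the denominator but cancel out of the numerator.
    """
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# 2 promoters, 2 passives, 2 detractors -> score of 0
print(nps([10, 9, 8, 7, 6, 3]))  # 0.0
```

Note how coarse the number is: two very different distributions can produce the same score, which is another reason to debug launches with CSAT rather than NPS.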
CES: did the customer have to work too hard?
CES measures perceived effort, which turns out to correlate more tightly with churn than satisfaction does. A customer who enjoyed your support but had to re-explain themselves three times is a flight risk.
Use CES after self-service interactions (did the help article solve it?), onboarding flows (was it easy to get started?), or resolution-heavy tickets. Don't use CES for every interaction — it becomes noise when effort isn't the relevant axis.
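CES reporting conventions vary more than the other two: teams track the mean score, a top-box "easy" share, or both, and scale polarity differs by vendor. A sketch assuming a 1–7 scale where higher means easier (the threshold and sample scores are illustrative, not a standard):

```python
def ces(scores):
    """Mean effort score on whatever scale you run (1-7 assumed here)."""
    return sum(scores) / len(scores)

def easy_share(scores, top_box=(5, 6, 7)):
    """Share of respondents in the 'easy' top box (assumes higher = easier)."""
    return sum(s in top_box for s in scores) / len(scores)

# Hypothetical responses after a self-service flow
flow = [7, 6, 6, 3, 2]
print(f"mean CES: {ces(flow):.1f}, easy: {easy_share(flow):.0%}")
```

The mean hides bimodal distributions (half breeze through, half struggle), so the top-box share is often the more actionable of the two.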
Which one should you actually run?
Start with CSAT. It's the fastest feedback loop and the easiest to act on. Add NPS once you have a stable product and want a trend line. Add CES only when you have a specific effort-heavy flow you're trying to improve.
Running all three is fine, but only if each one has an owner and a decision it feeds. A dashboard with three unused metrics is less useful than one metric someone acts on every week.