Comparisons

CSAT vs NPS vs CES — which one do you need?

Three metrics, three different questions, three different decisions. Here's what each one actually tells you, when to use which, and why running all three without a plan usually produces less insight, not more.

7 min read · Updated April 16, 2026

Key takeaways

  • CSAT = 'was this interaction good?'. Per-ticket, per-delivery, per-call.
  • NPS = 'would you recommend us overall?'. Quarterly, company-wide.
  • CES = 'was this easy?'. After self-service, signup, or onboarding.
  • Mature teams run CSAT + NPS. CES is situational.
  • Don't average different metrics — they measure different things.

At a glance

  • CSAT (Customer Satisfaction): 'How satisfied were you with X?' on a 1–5 scale. Per-interaction. Fast feedback loop.
  • NPS (Net Promoter Score): 'How likely are you to recommend us?' on a 0–10 scale. Overall loyalty. Quarterly cadence.
  • CES (Customer Effort Score): 'How easy was it to X?' on a 1–7 or 1–5 scale. After self-service or onboarding flows.
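The scoring behind each scale can be sketched in a few lines. This follows the common conventions (not stated above, but standard practice): CSAT counts 4–5 responses as "satisfied," NPS subtracts the share of detractors (0–6) from the share of promoters (9–10), and CES reports the mean.

```python
def csat(scores):
    """Percent of 1-5 responses scoring 4 or 5 (common convention)."""
    satisfied = sum(1 for s in scores if s >= 4)
    return 100 * satisfied / len(scores)

def nps(scores):
    """Percent promoters (9-10) minus percent detractors (0-6), on 0-10
    responses. Range is -100 to +100; 7-8 are passives and cancel out."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def ces(scores):
    """Mean effort score; on a 1-7 scale where 7 = 'very easy', higher is better."""
    return sum(scores) / len(scores)

print(csat([5, 4, 3, 5]))      # 75.0
print(nps([10, 9, 7, 3, 10]))  # 40.0
print(ces([6, 7, 5, 6]))       # 6.0
```

Note the ranges: CSAT and CES live on their survey scales, while NPS can go negative, which is one reason the three numbers don't blend.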

CSAT: did today's customer have a good day?

CSAT measures a single interaction — a support ticket, a delivery, an onboarding call. The question is narrow, the scale is small, the response takes five seconds.

Use CSAT when you want granular signal you can attribute to a specific moment. You can slice it by agent, channel, shift, product — and spot problems before they become NPS dips. CSAT's weakness: it only measures the interactions you ask about, so which moments you survey — and who bothers to respond — shapes the score.

NPS: how's the whole relationship going?

NPS is the loyalty question. It's deliberately vague — 'would you recommend us' integrates across every interaction the customer has ever had with you. That vagueness is what makes it comparable across companies and industries.

Use NPS when you want a single number that captures overall health over time. It's lagging and slow-moving — a regression today shows up in NPS in weeks, not hours. Don't try to use NPS to debug a single feature launch; use CSAT or in-product metrics for that.

CES: did the customer have to work too hard?

CES measures perceived effort, which turns out to correlate more tightly with churn than satisfaction does. A customer who enjoyed your support but had to re-explain themselves three times is a flight risk.

Use CES after self-service interactions (did the help article solve it?), onboarding flows (was it easy to get started?), or resolution-heavy tickets. Don't use CES for every interaction — it becomes noise when effort isn't the relevant axis.

Which one should you actually run?

Start with CSAT. It's the fastest feedback loop and the easiest to act on. Add NPS once you have a stable product and want a trend line. Add CES only when you have a specific effort-heavy flow you're trying to improve.

Running all three is fine, but only if each one has an owner and a decision it feeds. A dashboard with three unused metrics is less useful than one metric someone acts on every week.

Ready to try it

Customer satisfaction (CSAT) template

A one-tap CSAT score you can send after any support ticket or purchase.

Frequently asked questions

Can I average CSAT, NPS, and CES into one 'health' score?

Technically yes, meaningfully no. The three metrics measure different things on different scales — averaging them produces a number that reacts to noise, not signal. Track them separately and look at the trend of each.
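The scale mismatch is easy to see with numbers. A quick illustration with hypothetical scores — CSAT as a percentage, NPS on its −100 to +100 range, CES as a 1–7 mean:

```python
# Hypothetical quarter: CSAT 82 (percent satisfied), NPS 35 (-100..100),
# CES 5.6 (mean on a 1-7 scale). A naive average mixes incompatible scales:
naive = (82 + 35 + 5.6) / 3
print(round(naive, 1))  # 40.9 -- a number with no unit and no interpretation

# If NPS drops 10 points next quarter (a real signal), the blend shifts only
# slightly, and nothing in the blended number tells you *which* metric moved:
print(round((82 + 25 + 5.6) / 3, 1))  # 37.5
```

The blend hides exactly the information you need — which metric moved and by how much — which is why tracking the three trends separately works better.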

Which metric has the best academic backing?

All three have been challenged in peer-reviewed research. NPS is probably the most-debated — critics argue that the single recommend question doesn't predict growth any better than conventional satisfaction measures. In practice, teams use NPS because it's comparable, not because it's statistically pristine.

What's the minimum sample size for these to be meaningful?

Rule of thumb: at least 100 responses per reporting period for NPS, 30+ for CSAT per segment, 50+ for CES per flow. Below these numbers, a single response can swing the metric more than a real product change.
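The single-response swing is easy to verify with the standard NPS formula (promoters minus detractors as percentages). Hypothetical response sets, below and above the threshold:

```python
def nps(scores):
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    p = sum(1 for s in scores if s >= 9)
    d = sum(1 for s in scores if s <= 6)
    return 100 * (p - d) / len(scores)

small = [10] * 12 + [8] * 5 + [3] * 3  # 20 responses
print(round(nps(small), 1))            # 45.0
print(round(nps(small + [0]), 1))      # 38.1 -- one angry reply, 7-point swing

large = small * 10                     # same mix, 200 responses
print(round(nps(large), 1))            # 45.0
print(round(nps(large + [0]), 1))      # 44.3 -- barely moves
```

At 20 responses a single detractor moves NPS by almost 7 points — larger than most real quarter-over-quarter changes — while at 200 responses the same reply moves it by less than 1.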

What if my response rate is low?

Low response rate ≠ broken metric, but it does mean you're over-sampling the extremes (very happy and very unhappy respond; the middle doesn't). Calibrate by reading comments alongside the number and by varying your channels.