JS Guide
performance
advanced
25 min read

Performance Monitoring & Budgets

alerting
analytics
budgets
calibre
crux
lighthouse-ci
monitoring
percentiles
rum
speedcurve
synthetic
vercel-analytics
web-vitals
webpagetest

Production performance monitoring combines synthetic testing (Lighthouse CI) with Real User Monitoring (RUM) to track metrics over time, enforce performance budgets, and alert teams when regressions occur.

Key Points

1. Synthetic Monitoring

Automated Lighthouse CI tests in CI/CD pipelines catch performance regressions before deployment with consistent, reproducible measurements.

2. Real User Monitoring

RUM captures actual user performance data segmented by geography, device, and network, revealing issues invisible to lab tests.

3. Performance Budgets

Maximum thresholds for bundle size, LCP, INP, and page weight that are enforced automatically in CI to prevent gradual regressions.

4. Percentile-Based Alerting

Alerts should use p75 or p95 percentiles rather than averages, because averages hide the poor experience of your slowest users.

What You'll Learn

  • Configure Lighthouse CI in a CI/CD pipeline with performance score thresholds
  • Set up Real User Monitoring using the web-vitals library and an analytics service
  • Define and enforce performance budgets for bundle size and Core Web Vitals
  • Design an alerting strategy using percentile-based thresholds for production monitoring

Deep Dive

Building a fast website is only half the battle — keeping it fast requires continuous monitoring. Without performance monitoring, regressions creep in gradually: a new dependency adds 50KB here, an unoptimized image slips through there, and within months the site is significantly slower. Production monitoring catches these regressions before they impact users.

Synthetic Monitoring

Synthetic monitoring runs automated tests against your site from controlled environments at regular intervals. The most common tool is Lighthouse CI, which runs Lighthouse audits in your CI/CD pipeline and fails the build if scores drop below thresholds:

JSON
// lighthouserc.json
{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}

Synthetic monitoring is consistent and reproducible — tests run on the same hardware with the same network, making it easy to detect regressions. However, it does not represent real user conditions, which vary enormously across devices and networks.

Other synthetic tools include WebPageTest (detailed waterfall analysis, filmstrip view, multi-location testing), SpeedCurve (trend tracking and competitive benchmarking), and Calibre (automated Lighthouse monitoring with Slack alerts).

Real User Monitoring (RUM)

RUM collects performance data from actual user sessions using browser APIs like the Performance Observer, Navigation Timing, and Resource Timing APIs. Google's web-vitals library simplifies capturing Core Web Vitals, which you then send to your analytics service.
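As a minimal sketch, a custom RUM reporter built on web-vitals might look like the following. The /analytics endpoint and the context fields are hypothetical placeholders for your own backend; the web-vitals wiring is shown in comments because it only runs in a browser:

```javascript
// Shape a web-vitals metric object into a compact payload for a beacon.
// The `context` argument is a hypothetical bag of extra dimensions
// (route, connection type, build id) used for segmentation later.
function toBeaconPayload(metric, context = {}) {
  return JSON.stringify({
    name: metric.name,     // e.g. "LCP", "INP", "CLS"
    value: metric.value,   // metric value in ms (or unitless for CLS)
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, for deduplication
    ...context,
  });
}

// Browser-only wiring (sketch):
// import { onLCP, onINP, onCLS } from 'web-vitals';
// const report = (metric) =>
//   navigator.sendBeacon('/analytics', toBeaconPayload(metric, { route: location.pathname }));
// onLCP(report); onINP(report); onCLS(report);
```

navigator.sendBeacon is used rather than fetch because it reliably delivers data even as the page unloads, which is when final metric values are typically reported.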

RUM reveals patterns invisible to synthetic tests: how performance varies by geography, device type, network condition, and browser. A site may score 95 on Lighthouse but perform poorly for users on 3G networks in Southeast Asia. RUM data segmented by these dimensions reveals the full picture.

Popular RUM services include Vercel Analytics (built-in for Next.js), Datadog RUM, New Relic Browser, Sentry Performance, and Google's Chrome User Experience Report (CrUX). You can also build custom RUM by sending web-vitals data to your own analytics endpoint.

Performance Budgets

A performance budget sets maximum thresholds for metrics that the team agrees not to exceed. Common budgets include:

  • Total JavaScript bundle size (e.g., max 200KB gzipped)
  • LCP under 2.5 seconds
  • INP under 200 milliseconds
  • Total page weight under 1MB
  • Maximum number of HTTP requests

Bundlers and dedicated tools can enforce size budgets: Webpack's performance.maxAssetSize and performance.maxEntrypointSize options, or CI checks built with tools such as size-limit or bundlesize. The key is making budgets visible and automated: a Slack notification when a PR exceeds the budget is far more effective than a document nobody reads.
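A custom CI budget check can be sketched roughly as follows; the budget numbers and asset names are illustrative, mirroring the example budgets above, not prescriptive:

```javascript
// Compare measured asset sizes (in bytes) against agreed budgets
// and collect human-readable violations.
function checkBudgets(measured, budgets) {
  const violations = [];
  for (const [asset, limit] of Object.entries(budgets)) {
    const size = measured[asset];
    if (size !== undefined && size > limit) {
      violations.push(`${asset}: ${size} bytes exceeds budget of ${limit} bytes`);
    }
  }
  return violations;
}

// Example: 200KB gzipped JS budget, 1MB total page weight budget.
const budgets = { 'main.js': 200 * 1024, 'total-page': 1024 * 1024 };
const measured = { 'main.js': 215 * 1024, 'total-page': 900 * 1024 };
const violations = checkBudgets(measured, budgets);
if (violations.length > 0) {
  console.error(violations.join('\n'));
  // process.exit(1); // in a real CI script, fail the build here
}
```

In practice the `measured` object would come from your build output (e.g. the bundler's stats file) rather than being hard-coded.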

Alerting and Regression Detection

Effective monitoring requires alerting. Configure alerts for sustained metric degradation (not single spikes), such as: "Alert if p75 LCP exceeds 3 seconds for more than 1 hour." Use percentiles (p75, p95) rather than averages, because averages hide the experience of your slowest users. A p75 of 2 seconds means 25% of users experience 2+ seconds — those are the users most likely to leave.

Building a Monitoring Strategy

A comprehensive approach uses both synthetic and RUM monitoring. Synthetic monitoring in CI catches regressions before deployment. RUM monitoring in production catches issues that only appear under real-world conditions. Performance budgets set the guardrails. Alerting ensures the team responds quickly to regressions.

Key Interview Distinction

Synthetic monitoring measures performance in controlled lab conditions and is best for catching regressions in CI. RUM measures actual user experiences in production and reveals real-world performance patterns. Neither alone is sufficient — production applications need both.

Fun Fact

Amazon found that every 100ms of added latency cost them 1% of sales. Google discovered that a 500ms delay in search results caused a 20% drop in traffic. These studies from the late 2000s launched the entire field of web performance monitoring as a business priority.

Learn These First

Core Web Vitals & Performance Metrics

beginner

Continue Learning

Core Web Vitals & Performance Metrics

beginner

Bundling & Code Splitting

intermediate

Caching Strategies

advanced

Practice What You Learned

How do you set up performance monitoring for production applications?
senior
monitoring
Use Real User Monitoring (RUM) to collect Web Vitals from actual users. Tools like Google Analytics, Datadog, or custom solutions track metrics over time. Set up alerting for regressions, segment data by device/geography, and correlate with business metrics.