JS Guide

Built for developers preparing for JavaScript, React & TypeScript interviews.

testing
advanced
18 min read

Testing Strategy for Large Applications

ci-cd
coverage
flaky-tests
test-pyramid
testing-strategy
testing-trophy

Designing a comprehensive testing strategy using the testing trophy, defining what to test at each level, setting coverage targets, integrating tests into CI/CD, and making pragmatic trade-offs for large-scale applications.

Key Points

1. Testing Trophy Distribution

Static analysis at the base (free bug catching), unit tests for isolated logic, integration tests as the largest layer (highest ROI), and E2E tests only for critical user journeys.

2. Differentiated Coverage Targets

Set coverage by criticality: 90%+ for shared libraries, 80%+ branch coverage for business logic, 70%+ for UI components. Exclude configuration and boilerplate.

3. CI Pipeline Structure

Order tests by speed: static analysis first (seconds), then unit tests, then integration tests, then E2E. Fail fast to avoid running expensive tests on broken code.

4. Pragmatic Trade-offs

Integration tests provide the most confidence per test. Write regression tests for bug fixes. Add tests to legacy code incrementally as you touch it rather than retroactively.

What You'll Learn

  • Design a testing strategy using the testing trophy with clear boundaries at each level
  • Set differentiated coverage targets based on code criticality and risk
  • Structure CI pipelines for fast feedback with appropriate test ordering
  • Make pragmatic testing decisions under time pressure and for legacy codebases

Deep Dive

A testing strategy defines what types of tests to write, where to focus effort, and how tests integrate into the development workflow. For large applications, having a deliberate strategy prevents both under-testing (shipping bugs) and over-testing (slow feedback loops). This is a senior-level interview topic that tests architectural thinking.

The Testing Trophy

Kent C. Dodds's testing trophy recommends this distribution of testing effort:

  1. Static Analysis (Base): TypeScript, ESLint, and Prettier catch entire categories of bugs at zero runtime cost. Type errors, unused variables, missing imports, and formatting issues are caught before tests even run. This is the highest-ROI testing layer.

  2. Unit Tests (Small): Test isolated business logic, pure functions, utilities, and algorithms. These are fast and deterministic. Focus on edge cases and boundary conditions that types cannot catch.

  3. Integration Tests (Largest Layer): The majority of your tests should be here. Render components with their dependencies, test user flows within a page, and use MSW for API mocking. Integration tests catch real bugs at component boundaries.

  4. E2E Tests (Top): Reserve for critical paths only -- authentication, checkout, onboarding. These are slow and occasionally flaky but verify the entire stack works together.

What to Test at Each Level

  • Static analysis: Type correctness, linting rules, import/export consistency
  • Unit tests: Tax calculations, sorting algorithms, validation functions, date formatting, state machine transitions
  • Integration tests: Form submission flows, component interactions, navigation behavior, data display after API calls, error state rendering
  • E2E tests: Complete user journeys (sign up through first action), payment flows, multi-step wizards

Test Boundaries

Define clear boundaries for what constitutes a test at each level:

  • Unit: Single function/module, all dependencies mocked
  • Integration: Feature or page scope, real child components, mocked network/external services
  • E2E: Full application, real browser, real (or seeded) backend

Coverage Strategy

Set differentiated coverage targets by code criticality:

  • Shared libraries/utilities: 90%+ coverage (used everywhere, must be reliable)
  • Business logic: 80%+ branch coverage (where most bugs hide)
  • UI components: 70%+ with focus on interaction tests
  • Configuration/boilerplate: Exclude from coverage metrics

Use a ratcheting approach: CI enforces that coverage never drops below the current level. This naturally increases coverage over time.
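The ratchet can be sketched as a small CI check. The shape of `CoverageSummary` here assumes an Istanbul-style `json-summary` report; the comparison and baseline update are illustrative, not any particular tool's API:

```typescript
// Coverage ratchet sketch: fail CI if coverage drops below the stored baseline.
interface CoverageSummary {
  lines: { pct: number };
  branches: { pct: number };
}

function checkRatchet(current: CoverageSummary, baseline: CoverageSummary): string[] {
  const failures: string[] = [];
  if (current.lines.pct < baseline.lines.pct) {
    failures.push(`line coverage dropped: ${current.lines.pct}% < ${baseline.lines.pct}%`);
  }
  if (current.branches.pct < baseline.branches.pct) {
    failures.push(`branch coverage dropped: ${current.branches.pct}% < ${baseline.branches.pct}%`);
  }
  return failures; // empty array means the ratchet holds
}

// After a passing run, the baseline only ever moves up (overwrite with the max),
// which is what makes coverage increase naturally over time.
```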

CI/CD Integration

Structure your CI pipeline for fast feedback:

  1. Static analysis (seconds): TypeScript check, ESLint, Prettier
  2. Unit tests (seconds to minutes): Fast, parallelized
  3. Integration tests (minutes): Run in parallel, potentially with test sharding
  4. E2E tests (5-15 minutes): Run on staging deployment, can be on a separate pipeline

Fail fast: if static analysis fails, do not run expensive test suites.
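The fail-fast ordering can be sketched as a tiny pipeline runner. The stages here are simulated with booleans; in a real pipeline each `run` would shell out to the corresponding command:

```typescript
// Fail-fast sketch: run stages cheapest-first, stop at the first failure.
type Stage = { name: string; run: () => boolean };

function runPipeline(stages: Stage[]): string[] {
  const executed: string[] = [];
  for (const stage of stages) {
    executed.push(stage.name);
    if (!stage.run()) break; // skip the expensive suites on broken code
  }
  return executed;
}

const log = runPipeline([
  { name: "static-analysis", run: () => true },
  { name: "unit", run: () => false }, // simulate a unit-test failure
  { name: "integration", run: () => true },
  { name: "e2e", run: () => true },
]);
// integration and e2e never ran
```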

Pragmatic Trade-offs

  • New features: Write integration tests first, add unit tests for complex logic
  • Bug fixes: Write a regression test that reproduces the bug before fixing
  • Legacy code: Add tests as you touch code (the Boy Scout rule), do not try to achieve 100% coverage retroactively
  • Time pressure: Integration tests give the most confidence per test written

Flaky Test Management

Flaky tests (tests that sometimes pass and sometimes fail) destroy trust in the test suite:

  • Quarantine flaky tests: Move them to a separate suite that does not block CI
  • Track flaky test frequency and prioritize fixing them
  • Most flakiness comes from timing issues, shared state, or external dependencies
  • Use retry mechanisms judiciously (they mask underlying problems)
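A common timing-related fix is replacing a fixed sleep (a guess that sometimes loses the race) with polling against a deadline. A minimal `waitFor` sketch, not tied to any particular test framework:

```typescript
// Poll a condition until it holds or a deadline passes, instead of sleeping
// for a fixed duration and hoping the async work has finished.
async function waitFor(
  condition: () => boolean,
  timeoutMs = 1000,
  intervalMs = 10
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) throw new Error("waitFor: condition not met in time");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage: a flag that flips asynchronously, as external state often does in tests.
(async () => {
  let ready = false;
  setTimeout(() => { ready = true; }, 25);
  await waitFor(() => ready); // resolves as soon as ready flips, no guessed delay
})();
```

Testing-library's `waitFor` and Playwright's auto-waiting apply the same idea; the point is that the test observes the condition rather than assuming when it becomes true.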

Monitoring in Production

Tests alone cannot catch everything. Complement your test suite with:

  • Error tracking (Sentry, Bugsnag)
  • Real User Monitoring (RUM) for Web Vitals
  • Feature flags for gradual rollouts
  • Canary deployments for early problem detection

Fun Fact

Google has over 4 billion lines of code and runs 150 million test cases daily. Their internal research found that the most effective predictor of code quality was not test coverage percentage but rather how quickly developers received test feedback -- leading them to heavily invest in fast, parallelized test infrastructure.

Learn These First

Integration Tests vs Unit Tests

intermediate

Code Coverage Metrics

intermediate

Continue Learning

End-to-End Testing

advanced

TDD vs BDD Methodologies

advanced

Performance Testing and Budgets

advanced

Practice What You Learned

How do you design a testing strategy for a large application?
senior
strategy
A testing strategy defines what to test, how much, and when. Use the testing trophy/pyramid as a guide: prioritize integration tests, supplement with unit tests for complex logic, and use E2E for critical paths. Consider cost, speed, and confidence.