Designing a comprehensive testing strategy using the testing trophy, defining what to test at each level, setting coverage targets, integrating tests into CI/CD, and making pragmatic trade-offs for large-scale applications.
Static analysis at the base (free bug catching), unit tests for isolated logic, integration tests as the largest layer (highest ROI), and E2E tests only for critical user journeys.
Set coverage by criticality: 90%+ for shared libraries, 80%+ branch coverage for business logic, 70%+ for UI components. Exclude configuration and boilerplate.
Order tests by speed: static analysis first (seconds), then unit tests, then integration tests, then E2E. Fail fast to avoid running expensive tests on broken code.
Integration tests provide the most confidence per test. Write regression tests for bug fixes. Add tests to legacy code incrementally as you touch it rather than retroactively.
A testing strategy defines what types of tests to write, where to focus effort, and how tests integrate into the development workflow. For large applications, having a deliberate strategy prevents both under-testing (shipping bugs) and over-testing (slow feedback loops). This is a senior-level interview topic that tests architectural thinking.
Kent C. Dodds's testing trophy recommends this distribution of testing effort:
Static Analysis (Base): TypeScript, ESLint, and Prettier catch entire categories of bugs at zero runtime cost. Type errors, unused variables, missing imports, and formatting issues are caught before tests even run. This is the highest-ROI testing layer.
Unit Tests (Small): Test isolated business logic, pure functions, utilities, and algorithms. These are fast and deterministic. Focus on edge cases and boundary conditions that types cannot catch.
Integration Tests (Largest Layer): The majority of your tests should be here. Render components with their dependencies, test user flows within a page, and use MSW for API mocking. Integration tests catch real bugs at component boundaries.
E2E Tests (Top): Reserve for critical paths only -- authentication, checkout, onboarding. These are slow and occasionally flaky but verify the entire stack works together.
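The integration layer is where most tests should live. A minimal self-contained sketch of the idea, using a hand-rolled `fetch` stub in place of MSW (the `loadGreeting` helper and URLs are hypothetical): the test exercises the real parsing and error-state logic while stubbing only the network boundary.

```typescript
// Hypothetical page logic under test: fetch a user, render a greeting.
async function loadGreeting(userId: string): Promise<string> {
  const res = await fetch(`/api/users/${userId}`);
  if (!res.ok) return "Something went wrong"; // error state rendering
  const { name } = await res.json();
  return `Hello, ${name}!`;
}

// Stub only the network boundary (MSW plays this role in a real suite).
globalThis.fetch = (async (input: any) => {
  const url = String(input);
  return url.endsWith("/users/42")
    ? new Response(JSON.stringify({ name: "Ada" }), { status: 200 })
    : new Response("not found", { status: 404 });
}) as typeof fetch;
```

Because the stub sits at the network boundary rather than inside the component, the same test catches bugs in response parsing, happy-path rendering, and error handling at once.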
Define clear boundaries for what constitutes a test at each level:

- Static analysis: type correctness, linting rules, import/export consistency
- Unit tests: tax calculations, sorting algorithms, validation functions, date formatting, state machine transitions
- Integration tests: form submission flows, component interactions, navigation behavior, data display after API calls, error state rendering
- E2E tests: complete user journeys (sign-up through first action), payment flows, multi-step wizards
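The unit level is the easiest to illustrate. A sketch of a pure tax-calculation function (the rate and allowance scheme is hypothetical) with the kind of boundary-condition assertions that types alone cannot express:

```typescript
// Hypothetical flat-rate tax with a tax-free allowance.
function calculateTax(income: number, rate: number, allowance = 0): number {
  if (income < 0 || rate < 0 || rate > 1) {
    throw new RangeError("invalid income or rate");
  }
  const taxable = Math.max(0, income - allowance);
  return Math.round(taxable * rate * 100) / 100; // round to cents
}

// Unit tests target boundaries the type system cannot see.
console.assert(calculateTax(1000, 0.2) === 200);
console.assert(calculateTax(500, 0.2, 1000) === 0); // income below allowance
console.assert(calculateTax(0, 0.2) === 0);         // zero income edge case
```

Every argument here is a valid `number` as far as TypeScript is concerned; the negative-income and below-allowance cases only exist at the unit-test level.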
Set differentiated coverage targets by code criticality: 90%+ for shared libraries, 80%+ branch coverage for business logic, and 70%+ for UI components. Exclude configuration files and boilerplate from the metrics.
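With Jest, differentiated targets can be expressed as path-scoped `coverageThreshold` entries. A sketch (the directory layout is illustrative; adjust the paths to your repository):

```typescript
// jest.config.ts -- per-path coverage floors; paths here are illustrative.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  // Exclude configuration and generated boilerplate from the metrics.
  coveragePathIgnorePatterns: ["/node_modules/", "\\.config\\.", "/generated/"],
  coverageThreshold: {
    "./src/lib/": { branches: 90, lines: 90 },        // shared libraries
    "./src/domain/": { branches: 80 },                // business logic
    "./src/components/": { branches: 70, lines: 70 }, // UI components
  },
};

export default config;
```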
Use a ratcheting approach: CI enforces that coverage never drops below the current level. This naturally increases coverage over time.
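The ratchet itself is a small script run in CI against a committed baseline. A sketch of the core comparison (the tolerance value is an assumption to absorb floating-point noise in coverage reports):

```typescript
// Coverage ratchet: fail CI if coverage drops, raise the floor when it rises.
function ratchet(current: number, baseline: number, tolerance = 0.1): number {
  if (current < baseline - tolerance) {
    throw new Error(
      `Coverage dropped: ${current}% is below the ${baseline}% baseline`
    );
  }
  // Ratchet upward: the new baseline becomes whatever was achieved.
  return Math.max(current, baseline);
}

console.assert(ratchet(82.5, 80) === 82.5); // coverage rose: baseline moves up
console.assert(ratchet(80, 80) === 80);     // holding steady passes
```

CI writes the returned value back to the baseline file, so coverage can only move in one direction.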
Structure your CI pipeline for fast feedback, ordered by stage cost: static analysis first (seconds), then unit tests, then integration tests, then E2E.
Fail fast: if static analysis fails, do not run expensive test suites.
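The fail-fast ordering can be sketched as a sequential runner that stops at the first failing stage (the stage functions here are stand-ins for real CI jobs):

```typescript
type Stage = { name: string; run: () => boolean };

// Run stages cheapest-first; skip everything after the first failure.
function runPipeline(stages: Stage[]): string[] {
  const executed: string[] = [];
  for (const stage of stages) {
    executed.push(stage.name);
    if (!stage.run()) break; // fail fast: expensive stages never start
  }
  return executed;
}

const result = runPipeline([
  { name: "static-analysis", run: () => true },
  { name: "unit", run: () => false }, // simulate a unit-test failure
  { name: "integration", run: () => true },
  { name: "e2e", run: () => true },
]);
// integration and e2e never ran, so the broken build reports in minutes
```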
Flaky tests (tests that sometimes pass and sometimes fail) destroy trust in the test suite: quarantine them quickly and fix the root cause, which is usually a timing assumption, shared state between tests, or a real network call.
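One way to surface flakiness before quarantining is to re-run a suspect test repeatedly and flag anything that is neither always-pass nor always-fail. A sketch (the run count of 20 is arbitrary):

```typescript
// Run a test body many times; a flaky test passes sometimes but not always.
async function detectFlaky(
  test: () => Promise<void> | void,
  runs = 20
): Promise<"stable" | "broken" | "flaky"> {
  let passes = 0;
  for (let i = 0; i < runs; i++) {
    try {
      await test();
      passes++;
    } catch {
      // counted as a failure
    }
  }
  if (passes === runs) return "stable";
  if (passes === 0) return "broken";
  return "flaky";
}
```

A "broken" result is actually good news: the test fails deterministically and can be debugged normally. Only "flaky" results need quarantine.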
Tests alone cannot catch everything. Complement your test suite with runtime safeguards such as error monitoring and production alerting.
Fun Fact
Google's monorepo holds roughly two billion lines of code, and the company runs around 150 million test executions daily. Their internal research found that the most effective predictor of code quality was not test coverage percentage but how quickly developers received test feedback -- leading them to invest heavily in fast, parallelized test infrastructure.