The data layer determines how a frontend application fetches, caches, normalizes, and synchronizes data — separating client state from server state and choosing the right fetching and caching strategies is critical for maintainable, performant applications.
Client state (UI, forms, navigation) is owned by the browser; server state (fetched data) is a cached copy of backend data — mixing them in one store causes stale data bugs and manual cache management.
Return cached data instantly for fast UI, then revalidate in the background — TanStack Query, SWR, and RTK Query implement this pattern with configurable stale time and cache invalidation.
REST is simple and cacheable but over/under-fetches; GraphQL gives precise data but adds complexity; tRPC provides zero-cost type safety but couples client and server to TypeScript.
Update the UI immediately before server confirmation for latency-sensitive interactions (likes, bookmarks, reordering) — roll back on failure to maintain consistency.
A Backend for Frontend aggregates multiple API calls into frontend-optimized endpoints, reducing client waterfall requests — Next.js Server Components naturally serve as a BFF layer.
The data layer is the bridge between your backend APIs and your UI components. Poor data architecture leads to loading waterfalls, stale data bugs, excessive re-renders, and unmaintainable state management code. A well-designed data layer makes data fetching declarative, caching automatic, and synchronization reliable.
The most important architectural decision is separating client state from server state:
Client state is data owned by the frontend — it exists only in the browser and doesn't need to be fetched: UI toggles, form inputs, navigation state.
Server state is data owned by the backend — it's fetched over the network and cached locally: user records, posts, anything an API returns.
Mixing these two types in a single store (a common Redux anti-pattern) causes problems: you end up writing manual loading/error/success states for every API call, cache invalidation becomes your responsibility, and stale data bugs appear everywhere.
Modern best practice: use lightweight state management for client state (useState, useReducer, Zustand, Jotai) and a server state library for server state (TanStack Query, SWR, Apollo Client, RTK Query).
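To make the cost of the anti-pattern concrete, here is a sketch of the loading/error/success boilerplate you end up hand-rolling per endpoint when a general-purpose store manages server state (type and action names are illustrative):

```typescript
// Sketch of the per-endpoint boilerplate that server-state libraries
// absorb: a hand-rolled loading/error/success state machine.
type RequestState<T> =
  | { status: "loading" }
  | { status: "error"; error: string }
  | { status: "success"; data: T };

type RequestAction<T> =
  | { type: "start" }
  | { type: "fail"; error: string }
  | { type: "resolve"; data: T };

function requestReducer<T>(
  state: RequestState<T>,
  action: RequestAction<T>
): RequestState<T> {
  switch (action.type) {
    case "start":
      return { status: "loading" };
    case "fail":
      return { status: "error", error: action.error };
    case "resolve":
      return { status: "success", data: action.data };
  }
}
```

Multiply this by every endpoint — plus cache invalidation and refetch logic — and the appeal of a dedicated server-state library becomes clear.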
REST — the most common pattern. Each endpoint returns a specific resource:
GET /api/users/123 — fetch a user
GET /api/users/123/posts?page=1&limit=20 — fetch the user's posts
POST /api/posts — create a post

Pros: simple, cacheable with standard HTTP caching, widely understood. Cons: over-fetching (receiving fields you don't need) and under-fetching (needing multiple requests for related data).
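A tiny helper for composing such endpoints — hypothetical, built on the standard URLSearchParams API:

```typescript
// Hypothetical helper that serializes query parameters for endpoints
// like the ones above, using the standard URLSearchParams API.
function buildUrl(
  path: string,
  params: Record<string, string | number> = {}
): string {
  const qs = new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)])
  ).toString();
  return qs ? `${path}?${qs}` : path;
}

buildUrl("/api/users/123/posts", { page: 1, limit: 20 });
// → "/api/users/123/posts?page=1&limit=20"
```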
GraphQL — a query language where the client specifies exactly what data it needs:
```graphql
query {
  user(id: "123") {
    name
    avatar
    posts(first: 20) {
      title
      createdAt
    }
  }
}
```

Pros: no over/under-fetching, a single request for related data, a strongly typed schema. Cons: added complexity, harder caching (no URL-based HTTP caching), and N+1 query risk on the server.
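GraphQL is typically served over a single HTTP POST endpoint. A sketch of shaping that request (the `/graphql` path and helper name are assumptions, but the `{ query, variables }` body is the standard GraphQL-over-HTTP convention):

```typescript
// Shapes a standard GraphQL-over-HTTP request: one POST whose JSON body
// is { query, variables }, regardless of how many entities are fetched.
function graphqlRequest(query: string, variables: Record<string, unknown>) {
  return {
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

const req = graphqlRequest(
  `query User($id: ID!) { user(id: $id) { name avatar } }`,
  { id: "123" }
);
// fetch("/graphql", req) would return exactly the requested fields
```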
tRPC — end-to-end type safety between a TypeScript backend and frontend, with no code generation needed. The server defines procedures, and the client calls them with full TypeScript autocompletion.
Pros: Zero-cost type safety, no schema maintenance. Cons: Requires TypeScript on both ends, tightly couples client and server.
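A toy sketch of the idea — not tRPC's actual API: the server's procedure map doubles as the client's type, so renaming a procedure or changing an input shape becomes a compile-time error on the client.

```typescript
// Toy sketch of the idea behind tRPC, NOT its real API: the server's
// procedure map is reused as the client's type, with no codegen step.
const procedures = {
  getUser: (input: { id: number }) => ({ id: input.id, name: "Alice" }),
  listPosts: (input: { first: number }) =>
    Array.from({ length: input.first }, (_, i) => ({ id: i, title: `Post ${i}` })),
};

type Procedures = typeof procedures;

// The "client" is typed directly from the server definition. In real
// tRPC the call crosses HTTP; here it is a direct call for illustration.
function createClient(): {
  [K in keyof Procedures]: (
    input: Parameters<Procedures[K]>[0]
  ) => ReturnType<Procedures[K]>;
} {
  return procedures;
}

const client = createClient();
const user = client.getUser({ id: 5 }); // typed as { id: number; name: string }
```

This is also why tRPC couples both ends to TypeScript: the shared type *is* the contract.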
Server state libraries implement stale-while-revalidate (SWR) caching: serve cached data immediately, then refetch in the background once it goes stale.
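A minimal sketch of the mechanism — illustrative names, not TanStack Query's or SWR's real implementation:

```typescript
// Minimal stale-while-revalidate cache: cached data is returned
// immediately; if it is older than staleTime, a background refetch
// updates the cache for the next reader.
type Entry<T> = { data: T; updatedAt: number };

class SwrCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(
    private fetcher: (key: string) => Promise<T>,
    private staleTime: number // ms during which cached data counts as fresh
  ) {}

  async get(key: string): Promise<T> {
    const hit = this.entries.get(key);
    if (!hit) {
      // Cache miss: the caller must wait for the network once.
      const data = await this.fetcher(key);
      this.entries.set(key, { data, updatedAt: Date.now() });
      return data;
    }
    if (Date.now() - hit.updatedAt > this.staleTime) {
      // Stale: serve the cached value now, revalidate in the background.
      void this.fetcher(key).then((data) =>
        this.entries.set(key, { data, updatedAt: Date.now() })
      );
    }
    return hit.data;
  }

  invalidate(key: string): void {
    this.entries.delete(key);
  }
}
```

A real library layers request deduplication, error retries, and subscriber notification on top of this core loop.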
Key caching concepts:
Query keys identify cached data: ['users', 123], ['posts', { page: 1 }]
Invalidation marks matching queries as stale and triggers a refetch: queryClient.invalidateQueries(['posts'])

Optimistic updates make the UI feel instant by updating local state before the server confirms:
This pattern is essential for latency-sensitive interactions (likes, bookmarks, todo completion, drag-and-drop reordering).
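The rollback mechanics can be sketched without any library (the Post shape and function names are illustrative):

```typescript
// Optimistic toggle with rollback: apply the change locally first,
// then keep it or revert depending on the server's answer.
type Post = { id: number; liked: boolean };

async function toggleLike(
  posts: Post[],
  id: number,
  sendToServer: (id: number) => Promise<void>
): Promise<Post[]> {
  const snapshot = posts; // keep the previous state for rollback
  const optimistic = posts.map((p) =>
    p.id === id ? { ...p, liked: !p.liked } : p
  );
  try {
    await sendToServer(id); // confirm with the server in the background
    return optimistic;      // confirmed: keep the optimistic state
  } catch {
    return snapshot;        // rejected: roll back to the snapshot
  }
}
```

In a UI, the optimistic state would be rendered immediately while `sendToServer` is still in flight; the snapshot only reappears on failure.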
Offset-based: GET /api/posts?page=2&limit=20 — simple but breaks when items are added/removed between pages (skipped or duplicated items).
Cursor-based: GET /api/posts?after=abc123&limit=20 — uses an opaque cursor (usually an encoded ID or timestamp) to fetch the next page. Stable pagination regardless of insertions/deletions. Required for infinite scroll.
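The difference is easy to demonstrate in memory (a sketch — real cursors are usually opaque encoded strings, not raw ids):

```typescript
// In-memory sketch of both strategies. The cursor here is simply the
// last item's id, so inserts before the cursor don't shift the next page.
type Item = { id: number };

function offsetPage(items: Item[], page: number, limit: number): Item[] {
  return items.slice((page - 1) * limit, page * limit);
}

function cursorPage(
  items: Item[],
  after: number | null,
  limit: number
): { items: Item[]; nextCursor: number | null } {
  const start = after === null ? 0 : items.findIndex((i) => i.id === after) + 1;
  const page = items.slice(start, start + limit);
  return {
    items: page,
    nextCursor: page.length ? page[page.length - 1].id : null,
  };
}

const items = [1, 2, 3, 4].map((id) => ({ id }));
offsetPage(items, 1, 2);              // ids 1, 2
const grown = [{ id: 0 }, ...items];  // a new item is prepended
offsetPage(grown, 2, 2);              // ids 2, 3 — id 2 is shown twice
cursorPage(grown, 2, 2).items;        // ids 3, 4 — unaffected by the insert
```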
When the same entity appears in multiple places (a user in a post, in comments, in the sidebar), denormalized data duplicates it everywhere. Normalization stores each entity once by ID:
```javascript
// Denormalized (duplicated user data)
{ posts: [{ id: 1, author: { id: 5, name: "Alice" } }] }

// Normalized (single source of truth)
{ users: { 5: { id: 5, name: "Alice" } }, posts: { 1: { id: 1, authorId: 5 } } }
```

Apollo Client, and Redux paired with normalizr, handle this automatically. TanStack Query uses document-based (denormalized) caching, which is simpler but requires manually invalidating every query that contains a modified entity.
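A minimal normalize step for the shape above might look like this (a sketch, not normalizr's API):

```typescript
// Converts the denormalized posts shape into per-entity tables keyed
// by id, so each user is stored exactly once.
type Author = { id: number; name: string };
type RawPost = { id: number; author: Author };

function normalize(posts: RawPost[]) {
  const users: Record<number, Author> = {};
  const byId: Record<number, { id: number; authorId: number }> = {};
  for (const post of posts) {
    users[post.author.id] = post.author; // single copy per user
    byId[post.id] = { id: post.id, authorId: post.author.id };
  }
  return { users, posts: byId };
}

const state = normalize([{ id: 1, author: { id: 5, name: "Alice" } }]);
// state.users[5] is the single source of truth: renaming Alice there
// updates every post that references authorId 5
```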
A BFF is a thin server layer that aggregates multiple backend APIs into frontend-optimized endpoints. Instead of the client making 5 separate API calls to assemble a page, the BFF makes those calls server-side and returns a single, pre-shaped response.
Benefits: reduces client-side waterfall requests, hides backend complexity, enables per-platform optimization (mobile BFF returns less data than desktop BFF). Next.js API routes and Server Components naturally serve as a BFF layer.
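A sketch of such an aggregation handler — the three fetchers stand in for separate backend services and are hypothetical:

```typescript
// BFF handler sketch: aggregate several backend calls server-side and
// return one pre-shaped response, instead of the client making three
// requests and assembling the page itself.
type Fetchers = {
  user: (id: number) => Promise<{ name: string }>;
  posts: (id: number) => Promise<{ title: string }[]>;
  notifications: (id: number) => Promise<number>;
};

async function profilePage(userId: number, api: Fetchers) {
  // Parallel server-to-server calls replace a client-side waterfall.
  const [user, posts, unread] = await Promise.all([
    api.user(userId),
    api.posts(userId),
    api.notifications(userId),
  ]);
  return { name: user.name, postTitles: posts.map((p) => p.title), unread };
}
```

In Next.js, the same shape naturally lives in an API route or a Server Component, which is why those features serve as a BFF layer.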
The data layer's primary architectural decision is separating client state (owned by the browser) from server state (cached copy of backend data). Server state libraries (TanStack Query, SWR) handle caching, background revalidation, and synchronization automatically. Choosing between REST, GraphQL, and tRPC depends on your team's type safety needs, data shape flexibility, and backend architecture. Pagination strategy (cursor vs offset) depends on data mutability. The BFF pattern reduces client complexity by moving aggregation to the server.
Fun Fact
The term 'stale-while-revalidate' originated as an HTTP Cache-Control directive defined in RFC 5861 (2010), long before React libraries adopted it. SWR (the library by Vercel) is literally named after this HTTP header directive.