⚡ From 90s to 20s: How Parallel Pagination Transformed Our Frontend Performance
"Performance isn't just about optimizing backend queries. It starts with how the frontend chooses to fetch data."
The Real-World Scenario
In a recent project I joined — which had been in development for five years — I was assigned a speed optimization task focused solely on the frontend. I worked on a checklist dashboard that visualized cleaning progress across multiple units (apartments). The structure was simple, but the scale was massive:
- Rows: checklist items (tasks)
- Columns: unit IDs
- Cells: task status per unit
Supported job types:
- "Quick clean"
- "Standard clean"
- "Deep clean"
- "Audit clean"
- "Maintenance check"
- "Light inspection"
- "Heavy cleaning"
- …and others
Each job has different completion states (e.g., done, partial, missed), and some require more complex data like ratings or uploaded images.
The Original Problem: Sequential Fetching
Initially, the frontend fetched checklist data sequentially:
```typescript
// Sequential: each page request waits for the previous one to finish
const allData = [];
for (let page = 1; page <= totalPages; page++) {
  const res = await fetchPage(page);
  allData.push(...res.data);
}
```
That seemed fine until the real dataset came in:
- 100 checklist items
- 1,200 units
- API returns 1,000 records per page → 120 pages total
- Each page request took ~700ms
Total load time: ~90 seconds
Result?
- Long UI freezes
- Users thought the system was broken
- Massive impact on usability and productivity
The Fix: Parallel Pagination
We asked ourselves: "Why wait for one page before starting the next?"
As long as the backend can handle the load, we can fetch multiple pages in parallel.
Step-by-step Idea:
- Fetch total record count
```typescript
const { totalCount } = await fetchTotalCount();
const totalPages = Math.ceil(totalCount / limit);
```
- Generate all page numbers
```typescript
const pages = Array.from({ length: totalPages }, (_, i) => i + 1);
```
- Batch and fetch concurrently
```typescript
const batchSize = 5;
const merged = [];
while (pages.length > 0) {
  const chunk = pages.splice(0, batchSize);
  const results = await Promise.all(chunk.map((page) => fetchPage(page)));
  // Each response wraps its records in `data`, so merge those arrays
  merged.push(...results.flatMap((res) => res.data));
}
```
✅ This reduced total load time from 90s → ~20s, with better perceived performance and early progressive rendering.
Suggested Interface
```typescript
interface PaginationResponse<T> {
  data: T[];
  pagination: {
    page: number;
    limit: number;
    totalCount: number;
    hasNextPage: boolean;
  };
}

type FetchPageFn<T> = (params: { page: number; limit: number }) => Promise<PaginationResponse<T>>;

interface ParallelFetchOptions<T> {
  fetch: FetchPageFn<T>;
  fetchTotalCount: () => Promise<{ totalCount: number }>;
  limit: number;
  parallelCount?: number;
}
```
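Putting the pieces together, a minimal implementation of this interface might look like the sketch below. `fetchAllInParallel` is an illustrative name (not a library function), the default `parallelCount` of 5 mirrors the `batchSize` used earlier, and the type declarations are repeated so the snippet stands alone:

```typescript
interface PaginationResponse<T> {
  data: T[];
  pagination: { page: number; limit: number; totalCount: number; hasNextPage: boolean };
}

type FetchPageFn<T> = (params: { page: number; limit: number }) => Promise<PaginationResponse<T>>;

interface ParallelFetchOptions<T> {
  fetch: FetchPageFn<T>;
  fetchTotalCount: () => Promise<{ totalCount: number }>;
  limit: number;
  parallelCount?: number; // cap on concurrent requests
}

async function fetchAllInParallel<T>({
  fetch,
  fetchTotalCount,
  limit,
  parallelCount = 5,
}: ParallelFetchOptions<T>): Promise<T[]> {
  const { totalCount } = await fetchTotalCount();
  const totalPages = Math.ceil(totalCount / limit);
  const pages = Array.from({ length: totalPages }, (_, i) => i + 1);
  const merged: T[] = [];

  // Process pages in batches of `parallelCount`, so the backend never
  // sees more than that many requests at once.
  while (pages.length > 0) {
    const chunk = pages.splice(0, parallelCount);
    const results = await Promise.all(chunk.map((page) => fetch({ page, limit })));
    merged.push(...results.flatMap((res) => res.data));
  }
  return merged;
}
```

Because each batch is awaited before the next starts and `Promise.all` preserves input order, the merged array comes back in page order even though requests within a batch race.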
⚠️ Challenges We Faced
| Problem | How We Handled It |
|---|---|
| Backend overload | Introduced a `parallelCount` cap |
| Network timeouts | Retried failed pages with a backoff strategy |
| Progress estimation | Tracked completed pages manually against the total page count |
| Early rendering needs | Progressive rendering (see Part 2) |
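The retry row above can be sketched as a small wrapper around a page fetch. This is a hypothetical helper, not our production code: `fetchPageWithRetry`, the retry count, and the backoff delays are illustrative assumptions.

```typescript
// Hypothetical sketch: retry a failed page fetch with exponential backoff.
async function fetchPageWithRetry<T>(
  fetchPage: (page: number) => Promise<T>,
  page: number,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fetchPage(page);
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Exponential backoff: 500ms, 1s, 2s, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

Wrapping each `fetchPage` call this way means a single flaky page no longer fails the whole `Promise.all` batch.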
Before vs After
| Metric | Sequential Fetch | Parallel Fetch |
|---|---|---|
| Total time | ~90 seconds | ~20 seconds |
| Perceived UX | Very poor | Fast & smooth |
| Implementation | Simple | Slightly more complex |
| Error tolerance | Weak | Controlled via batching/retry |
Beyond Parallel Fetching
Parallel fetching is a solid foundation, but to truly elevate the user experience for large-scale data, consider the following advanced strategies:
Other techniques to explore in future articles:
- Virtual Scroll & Progressive Rendering
  Use `react-window`, AG Grid, or custom virtualization logic to:
  - Render only visible DOM elements
  - Avoid freezing the UI with too many rows
  - Load visible content first, then progressively fill in the rest
- Lazy Fetching by Viewport
  Divide data into visible chunks and:
  - Load only what the user can see
  - Preload data as the user scrolls
  - Use caching, background updates, and loading indicators
- Streamed Pagination / Server-Sent Chunks
  If the backend supports it, stream paginated data incrementally. This is ideal for APIs with high latency or massive datasets.
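Even without server-side streaming support, the client can approximate this incrementally with an async generator that yields each batch of pages as soon as it resolves, so the UI renders progressively instead of waiting for everything. This is a sketch under assumed names (`streamPages`, a page-fetching callback), not a specific API:

```typescript
// Hypothetical sketch: yield each merged batch as soon as its pages resolve,
// so a renderer can paint partial data while later batches are still loading.
async function* streamPages<T>(
  fetchPage: (page: number) => Promise<T[]>,
  totalPages: number,
  batchSize = 5,
): AsyncGenerator<T[]> {
  for (let start = 1; start <= totalPages; start += batchSize) {
    const chunk = Array.from(
      { length: Math.min(batchSize, totalPages - start + 1) },
      (_, i) => start + i,
    );
    const results = await Promise.all(chunk.map(fetchPage));
    yield results.flat(); // hand one merged batch to the renderer
  }
}
```

Consumed with `for await (const batch of streamPages(...)) render(batch);`, this gives the early progressive rendering mentioned above.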
Final Thoughts
Note: The code examples in this article are for illustration purposes only to help you understand the ideas and approach more clearly.
Pagination is great.
But sequential fetching is not always ideal.
With parallel pagination:
- You respect API pagination structure
- You optimize user-perceived speed
- You build scalable UX without changing backend
"Slow UI isn’t caused by big data — it’s caused by fetching it the wrong way."