React Performance Optimization: From 3s to 300ms Load Times
Practical techniques I used to cut a React application's load time by 10x — covering code splitting, lazy loading, memoization, and bundle analysis strategies.
Performance isn't a feature — it's a requirement. Every 100ms of latency costs you conversions, engagement, and user trust. Here's how I took a sluggish React dashboard from a 3-second initial load down to under 300ms.
The Problem
The application was a SaaS analytics dashboard built with React, Redux, and a dozen heavy charting libraries. The initial bundle was 2.4MB gzipped. Users on slower connections were staring at a blank screen for 3+ seconds.
Diagnosis: Bundle Analysis
Before optimizing, you need to measure. I used webpack-bundle-analyzer to visualize the bundle:
```bash
npx webpack-bundle-analyzer build/stats.json
```
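That command assumes a stats file already exists. If your build doesn't emit one, webpack can generate it first (this assumes the webpack 5 CLI; flags differ for other bundlers):

```shell
# Emit build stats as JSON for the analyzer to consume
npx webpack --profile --json > build/stats.json
```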
The results were revealing:
- moment.js — 232KB (with all locales)
- lodash — 71KB (full library imported)
- chart.js — 205KB (loaded on every page)
- Three charting libraries — used on only 2 of 15 pages
Strategy 1: Code Splitting with React.lazy
The biggest win came from splitting the bundle by route:
```tsx
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Dashboard = lazy(() => import('./pages/Dashboard'));
const Analytics = lazy(() => import('./pages/Analytics'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <Suspense fallback={<PageSkeleton />}>
      <Routes>
        <Route path="/" element={<Dashboard />} />
        <Route path="/analytics" element={<Analytics />} />
        <Route path="/settings" element={<Settings />} />
      </Routes>
    </Suspense>
  );
}
```
This alone cut the initial bundle from 2.4MB to 800KB.
Strategy 2: Tree-Shake Imports
Replace barrel imports with specific imports:
```js
// Before: a named import from the barrel still pulls in the entire library (71KB)
import { debounce, throttle } from 'lodash';

// After: path imports pull in only what's needed (~4KB)
import debounce from 'lodash/debounce';
import throttle from 'lodash/throttle';
```
For date handling, I replaced moment.js with date-fns:
```js
// Before: moment.js (232KB)
import moment from 'moment';
moment(date).format('MMM DD, YYYY');

// After: date-fns (tree-shakeable, ~6KB per function)
import { format } from 'date-fns';
format(date, 'MMM dd, yyyy');
```
Strategy 3: Memoization
Expensive computations were running on every render. Strategic memoization fixed this:
```tsx
const processedData = useMemo(() => {
  return rawData
    .filter(item => item.status === 'active')
    .map(item => ({
      ...item,
      growth: calculateGrowth(item.metrics),
      trend: computeTrend(item.history),
    }))
    .sort((a, b) => b.growth - a.growth);
}, [rawData]);
```
I also wrapped child components with React.memo to prevent unnecessary re-renders:
```tsx
const DataRow = React.memo(({ item, onSelect }: DataRowProps) => {
  return (
    <tr onClick={() => onSelect(item.id)}>
      <td>{item.name}</td>
      <td>{item.value}</td>
    </tr>
  );
});
```
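React.memo only skips a re-render when a shallow comparison of props passes, so a fresh inline arrow like `onSelect={() => ...}` created in the parent's render defeats it — wrap the handler in useCallback to keep the reference stable. A simplified sketch of the shallow check (assuming Object.is semantics, like React's):

```typescript
// Simplified version of the shallow prop comparison React.memo performs.
function shallowEqual(a: Record<string, unknown>, b: Record<string, unknown>): boolean {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((key) => Object.is(a[key], b[key]));
}

const item = { id: 1, name: 'CPU' };

// Same handler reference (e.g. from useCallback): props compare equal.
const onSelect = (id: number) => id;
shallowEqual({ item, onSelect }, { item, onSelect }); // true → render skipped

// A fresh closure each render: comparison fails, memo buys nothing.
shallowEqual({ item, onSelect: (id: number) => id },
             { item, onSelect: (id: number) => id }); // false → re-render
```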
Strategy 4: Virtualization
The dashboard had tables with 10,000+ rows. Rendering all of them was a performance killer:
```tsx
import { useRef } from 'react';
import { useVirtualizer } from '@tanstack/react-virtual';

function VirtualTable({ data }: { data: Row[] }) {
  const parentRef = useRef<HTMLDivElement>(null);

  const virtualizer = useVirtualizer({
    count: data.length,
    getScrollElement: () => parentRef.current,
    estimateSize: () => 48,
    overscan: 10,
  });

  return (
    <div ref={parentRef} style={{ height: '600px', overflow: 'auto' }}>
      <div style={{ height: virtualizer.getTotalSize(), position: 'relative' }}>
        {virtualizer.getVirtualItems().map((virtualRow) => (
          <div
            key={virtualRow.key}
            style={{
              position: 'absolute',
              top: 0,
              width: '100%',
              height: virtualRow.size,
              transform: `translateY(${virtualRow.start}px)`,
            }}
          >
            {data[virtualRow.index].name}
          </div>
        ))}
      </div>
    </div>
  );
}
```
This dropped the table render time from 1200ms to 15ms.
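The core idea behind the virtualizer is simple arithmetic: derive the visible index range from the scroll position and row height, plus an overscan buffer, and render only those rows. A rough, dependency-free sketch of that calculation — not @tanstack/react-virtual's actual algorithm:

```typescript
// Compute which rows are worth rendering for the current scroll position.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  overscan: number,
  rowCount: number,
) {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const end = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  );
  return { start, end };
}

// 10,000 rows, 600px viewport, 48px rows, overscan of 10:
visibleRange(0, 600, 48, 10, 10000);    // { start: 0, end: 23 } — ~24 DOM nodes, not 10,000
visibleRange(4800, 600, 48, 10, 10000); // { start: 90, end: 123 }
```

However far the user scrolls, the DOM holds only a few dozen rows; everything else is represented by the tall spacer div.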
Strategy 5: Image Optimization
I used the Next.js Image component with explicit dimensions, lazy loading, and blur placeholders:
```tsx
import Image from 'next/image';

<Image
  src={user.avatar}
  alt={user.name}
  width={48}
  height={48}
  loading="lazy"
  placeholder="blur"
  blurDataURL={user.avatarBlurHash}
/>
```
Results
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Initial Bundle | 2.4MB | 340KB | -86% |
| First Contentful Paint | 3.1s | 0.28s | -91% |
| Time to Interactive | 4.2s | 0.9s | -79% |
| Lighthouse Score | 42 | 96 | +128% |
Key Takeaways
- Measure first — don't optimize blindly
- Code split by route — users shouldn't download code they don't need
- Tree-shake imports — named imports from barrels are treacherous
- Virtualize long lists — never render 10,000 DOM nodes
- Memoize expensive work — but don't memoize everything
- Set a performance budget — and enforce it in CI
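On that last point, one way to enforce a budget — assuming webpack — is its built-in performance hints, which can fail the build outright when an asset grows past a threshold (tools like size-limit or bundlesize are alternatives that also report gzipped sizes):

```javascript
// webpack.config.js (fragment) — fail CI when the bundle exceeds the budget
module.exports = {
  performance: {
    hints: 'error',            // 'warning' only logs; 'error' breaks the build
    maxEntrypointSize: 400000, // bytes, measured before compression
    maxAssetSize: 250000,
  },
};
```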
Performance optimization is iterative. Ship the biggest wins first, measure the impact, and keep refining.