Building a High-Scale Real-Time Portfolio Engine for a Multi-Chain DeFi Platform
An engineering story about designing a real-time data engine that stayed smooth under extreme scale — and the performance architecture behind it.
Introduction
This case study describes how I designed and implemented a real-time portfolio engine for a high-traffic DeFi analytics platform. The engine powered a dashboard that aggregated data across:
- many wallets
- dozens of DeFi protocols
- multiple blockchain networks
- several types of positions (staking, lending, LP, borrowing, etc.)
The challenge was not “fetch and display some balances.” It was to stream, merge, normalize, aggregate, filter, and recompute derived data in real time, without freezing the UI and without overwhelming the network or the main thread.
At the peak, the platform tracked:
- $356B TVL
- 125K+ contracts
- 133+ protocols
- 16+ networks
- 13M+ tokens
- 3M+ users
This was not a problem solved by sprinkling React.memo around or splitting components. It required a full data architecture designed for flow, batching, incremental updates, and reactive performance.
This is the story of how it was built.
Background & Motivation
Originally, the system relied on HTTP requests per protocol, per chain, per wallet. This worked fine when the app supported:
- a handful of chains
- a small number of protocols
- small user portfolios
But as adoption grew and chains multiplied, HTTP became the bottleneck:
- Too many calls
- Too much duplicate data
- Too slow for real-time
- Too heavy for the main thread
- Impossible to keep consistent over time
Worse: each user had their own combination of wallets and protocols, so server caching was ineffective.
We needed something that:
- Streamed updates continuously
- Avoided rendering storms
- Merged incremental data safely
- Updated derived data instantly
- Scaled with users who had hundreds of positions
So we rebuilt the entire client-side data pipeline around Server-Sent Events (SSE), normalized state, and batched updates.
System Overview
Here is the high-level architecture of the final system.
flowchart TD
A[Blockchain Networks] --> B[Backend Aggregator]
B -->|SSE Stream| C[Browser EventSource]
C --> D[Batch Collector]
D --> E["Normalized Store
(Redux Slice)"]
E --> F["Selectors
(Reselect)"]
F --> G[UI Components]
G -->|User Filters/Search| F
G -->|Direct Protocol Page| H[Priority Fetch via RTK Query]
H --> E
Key points:
- SSE is the backbone: streams incremental updates for all wallets, protocols, chains.
- Batch Collector groups events for N ms to avoid continuous render loops.
- Normalized Store ensures fast lookup and minimal dependent re-renders.
- Selectors compute derived data (chain totals, protocol totals, token-level aggregations).
- RTK Query is still used for priority fetches when the user opens a protocol page.
Streaming Architecture (SSE)
Why SSE?
WebSockets were unnecessary:
- the data is server-push, not bidirectional
- reliability > interactivity
- reconnection behavior was simpler
- SSE used fewer resources under heavy load
The Stream
The stream sent incremental updates roughly every 5 minutes, or whenever the user pressed “refresh”.
Examples of events:
{
"type": "wallet_update",
"wallet": "0xabc...",
"chain": "ethereum",
"protocol": "aave",
"positions": [...]
}
Important: updates were fragments, not complete snapshots, which meant:
- merging logic had to be deterministic
- partial chain data could arrive before protocol resolution
- fields could be added or removed depending on position type
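In practice this pushed the merge logic toward idempotent upserts: applying the same fragment twice, or applying fragments out of order, has to converge on the same state. A minimal sketch of the kind of primitives such merge handlers can be built from (names are illustrative):
// Upsert an entity: create it if missing, otherwise shallow-merge the patch.
function upsert(table, id, patch) {
  table[id] = { ...(table[id] || {}), ...patch };
}
// Record a relationship exactly once, no matter how often it is re-sent.
function addRelation(ids, id) {
  if (!ids.includes(id)) ids.push(id);
}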
The Data Flow Inside the Frontend
Step-by-step
sequenceDiagram
participant SSE as SSE Stream
participant BC as Batch Collector
participant NS as Normalized Store
participant SEL as Selectors
participant UI as UI
SSE->>BC: Incremental event
BC->>BC: Collect for N ms
BC->>NS: Apply merged batch
NS->>SEL: Trigger selector recomputation
SEL->>UI: Provide updated derived data
UI->>SEL: Apply filters instantly
Batch Collector
The Batch Collector reduced pressure on React by grouping events:
- avoids hundreds of dispatches per minute
- merges updates inside the batch
- flushes once per interval (e.g., 80–120ms)
Pseudocode (simplified):
let queue = [];
let timeout = null;

function onSseEvent(evt) {
  queue.push(parse(evt));
  // Start a flush timer only if one isn't already pending.
  if (!timeout) {
    timeout = setTimeout(flushQueue, 100);
  }
}

function flushQueue() {
  // Collapse the queued fragments into one batch, then dispatch once.
  const batch = mergeEvents(queue);
  store.dispatch(updateFromSse(batch));
  queue = [];
  timeout = null;
}
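mergeEvents is what keeps a burst of fragments from turning into duplicate work inside a batch. A minimal sketch, assuming the last fragment for a given wallet/protocol/chain combination wins within the batch window:
function mergeEvents(events) {
  // Collapse the queue so each wallet/protocol/chain key appears once;
  // later fragments in the same batch overwrite earlier ones.
  const byKey = new Map();
  for (const evt of events) {
    byKey.set(`${evt.wallet}:${evt.protocol}:${evt.chain}`, evt);
  }
  return { events: Array.from(byKey.values()) };
}
The returned shape matches what updateFromSse reads as action.payload.events in the slice shown below.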
State Architecture & Normalization
Why Normalization?
We had:
- many wallets → many protocols → many chains → many positions
- different schemas per position type
- frequent incremental updates
Non-normalized state leads to:
- deep nested updates
- slow diffs
- massive re-renders
- selectors that break memoization
Normalized Model
The normalized store looked roughly like:
{
wallets: { [walletId]: { protocolIds: [], ... } },
protocols: { [protocolId]: { chainIds: [], ... } },
chains: { [chainId]: { positionIds: [], ... } },
positions: { [positionId]: { type, balance, ... } }
}
Every SSE update would:
- upsert wallets
- upsert protocols
- upsert chains
- upsert positions
- update relationships
Redux Slice Structure
const portfolioSlice = createSlice({
  name: "portfolio",
  initialState,
  reducers: {
    // createSlice runs reducers through Immer, so the merge helpers can
    // "mutate" the draft state while still producing immutable updates.
    updateFromSse(state, action) {
      for (const event of action.payload.events) {
        mergeWallet(state, event);
        mergeProtocol(state, event);
        mergeChain(state, event);
        mergePositions(state, event);
      }
    }
  }
});
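mergeWallet is shown later in this post; mergePositions follows the same upsert pattern against the positions table and the chain → position relationship. A minimal sketch, assuming each incoming position carries a stable id and that entity IDs are derived from the owning wallet/protocol/chain (the real keying scheme differed):
function mergePositions(state, evt) {
  const chainId = `${evt.wallet}:${evt.protocol}:${evt.chain}`;
  const chain = state.chains[chainId];
  for (const position of evt.positions || []) {
    const positionId = `${chainId}:${position.id}`;
    // Upsert the position record itself.
    state.positions[positionId] = { ...state.positions[positionId], ...position };
    // Keep the chain → position relationship in sync.
    if (chain && !chain.positionIds.includes(positionId)) {
      chain.positionIds.push(positionId);
    }
  }
}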
Derived Data & Selectors
The engine needed to compute:
- total balances per chain
- total balances per protocol
- total balances per token
- cross-filtered results
- search results
- inferred stats (e.g., LP breakdowns)
These computations were expensive. We solved this with Reselect selectors and dependency graphs.
Selector Graph
graph TD
A[Raw Positions] --> B[Chain Aggregation]
A --> C[Protocol Aggregation]
A --> D[Token Aggregation]
B --> E[Filtered Chains]
C --> F[Filtered Protocols]
D --> G[Filtered Tokens]
H[UI Filters] --> E
H --> F
H --> G
Memoized Selector Example
import { createSelector } from "reselect";

export const selectProtocolBalance = createSelector(
  [
    state => state.positions,
    (_, protocolId) => protocolId,
  ],
  (positions, protocolId) => {
    // positions is a normalized lookup table keyed by positionId,
    // so convert it to an array before filtering and summing balances.
    return Object.values(positions)
      .filter(p => p.protocol === protocolId)
      .reduce((total, p) => total + p.balance, 0);
  }
);
Selectors ensured:
- expensive computation runs only when input changes
- filters apply instantly without recomputation of everything
- UI stays smooth even under huge datasets
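Filters composed on top of the aggregations, so toggling a filter never re-ran the expensive pass over positions. A minimal sketch, assuming a filters slice that holds an array of selected chain IDs (a hypothetical shape):
import { createSelector } from "reselect";

// The expensive pass: runs only when positions change.
const selectAllChainTotals = createSelector(
  [state => state.positions],
  positions => {
    const totals = {};
    for (const p of Object.values(positions)) {
      totals[p.chain] = (totals[p.chain] || 0) + p.balance;
    }
    return totals;
  }
);

// The cheap pass: runs only when the totals or the filter change.
export const selectVisibleChainTotals = createSelector(
  [selectAllChainTotals, state => state.filters.selectedChains],
  (totals, selectedChains) =>
    Object.fromEntries(
      Object.entries(totals).filter(([chain]) => selectedChains.includes(chain))
    )
);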
Real-Time UX Concerns
Avoiding UI Freezes
We optimized:
- merging algorithms
- lookup tables
- selector granularity
- update frequency
- batch size
- render boundaries
- React component structure
Special care went into keeping large arrays out of React component state; components subscribed to small derived values instead, as sketched below.
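A minimal sketch of a typical consumer, assuming react-redux hooks and the selectors from the previous sections (formatUsd and the selectors module path are hypothetical):
import { useSelector } from "react-redux";
import { selectChainTotals } from "./selectors"; // wherever the selectors live

function ChainTotal({ chain }) {
  // Subscribe to a single derived number instead of holding position
  // arrays in local state; this re-renders only when the total changes.
  const total = useSelector(state => selectChainTotals(state, chain));
  // formatUsd is a hypothetical formatting helper.
  return <span>{formatUsd(total)}</span>;
}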
Avoiding Render Storms
Several strategies worked together:
- Dispatch batching
- Normalized updates
- Selector-level memoization
- Top-level presentational components only re-rendering on meaningful changes
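One way to enforce the last point is shallow equality on small arrays of IDs rather than deep comparison of position objects. A minimal sketch, assuming react-redux's shallowEqual and a hypothetical ProtocolRow child:
import { useSelector, shallowEqual } from "react-redux";

function ProtocolList({ walletId }) {
  // shallowEqual prevents a re-render when an update touches balances
  // but leaves this wallet's protocol list unchanged.
  const protocolIds = useSelector(
    state => state.wallets[walletId]?.protocolIds ?? [],
    shallowEqual
  );
  // ProtocolRow is a hypothetical presentational child component.
  return protocolIds.map(id => <ProtocolRow key={id} protocolId={id} />);
}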
Priority Fetching
When a user opened a protocol page, the system:
- checked if SSE already delivered some chains
- immediately fetched missing chains via RTK Query
- bypassed SSE batching
- merged results safely
Pseudocode:
if (!hasAllChains(protocolId)) {
dispatch(fetchProtocolChains(protocolId))
}
This ensured the protocol page loaded as quickly as possible, even when the SSE stream was still catching up.
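fetchProtocolChains above maps onto an RTK Query endpoint. A minimal sketch of how such an endpoint could be declared, assuming a REST route shaped like /protocols/:id/chains (the real API differed):
import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";

export const portfolioApi = createApi({
  reducerPath: "portfolioApi",
  baseQuery: fetchBaseQuery({ baseUrl: "/api" }), // hypothetical base URL
  endpoints: builder => ({
    getProtocolChains: builder.query({
      query: protocolId => `/protocols/${protocolId}/chains`,
    }),
  }),
});

export const { useGetProtocolChainsQuery } = portfolioApi;
The fulfilled results were merged into the same normalized slice (for example through an extraReducers matcher on portfolioApi.endpoints.getProtocolChains.matchFulfilled), so the streaming path and the on-demand path converged on one source of truth.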
Performance Characteristics
The improvement was dramatic:
| Area | Before | After |
|---|---|---|
| Loading | many parallel HTTP requests | single SSE stream + selective fetch |
| UI Responsiveness | frequent freezes | smooth even with huge portfolios |
| Filtering | slow, recomputed everything | instant, selector-based |
| Network | bursty, redundant | incremental, lightweight |
| User Experience | jumpy, slow | real-time, fluid |
The dashboard stayed fully responsive even for “DeFi degen” portfolios with:
- many wallets
- dozens of protocols
- positions across 10+ chains
- complex schemas
- LP tokens, borrowing, lending, staking, etc.
Frontend Pipeline in Detail
Data enters the engine:
- Stream arrives via SSE
- Event is parsed
- Added to batch queue
- Merged into normalized store
- Selectors recompute minimal changes
- UI updates instantly
- User filters feed back into selectors
flowchart LR
IN(SSE Event) --> Q(Batch Queue)
Q --> M(Merge)
M --> NS(Normalized Store)
NS --> S(Selectors)
S --> OUT(UI)
Representative Pseudocode
SSE Handler
const source = new EventSource("/stream");
source.onmessage = (msg) => {
const event = JSON.parse(msg.data);
onSseEvent(event);
};
source.onerror = () => {
// Reconnect logic + backoff
};
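The reconnect comment above stands in for real logic. A minimal sketch of what a reconnect loop with exponential backoff can look like, assuming the stream should be torn down and rebuilt after errors:
let retryDelay = 1000;

function connect() {
  const source = new EventSource("/stream");

  source.onopen = () => {
    retryDelay = 1000; // reset the backoff after a successful connection
  };

  source.onmessage = (msg) => {
    onSseEvent(JSON.parse(msg.data));
  };

  source.onerror = () => {
    // Close this connection and retry with exponential backoff,
    // capped so the delay never grows past 30 seconds.
    source.close();
    setTimeout(connect, retryDelay);
    retryDelay = Math.min(retryDelay * 2, 30000);
  };
}

connect();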
Merge Function
function mergeWallet(state, evt) {
const id = evt.wallet;
if (!state.wallets[id]) state.wallets[id] = createWallet(id);
// append protocol, chain, etc.
if (!state.wallets[id].protocolIds.includes(evt.protocol)) {
state.wallets[id].protocolIds.push(evt.protocol);
}
}
Aggregation Example
import { createSelector } from "reselect";

export const selectChainTotals = createSelector(
  [state => state.positions, (_, chain) => chain],
  (positions, chain) => {
    // positions is a normalized lookup table, so iterate over its values.
    return Object.values(positions)
      .filter(p => p.chain === chain)
      .reduce((acc, p) => acc + p.balance, 0);
  }
);
Internal Client-Side Architecture Diagram
flowchart TD
A[SSE Source] --> B[Event Parser]
B --> C[Batch Queue]
C --> D[Merge Reducer]
D --> E[Normalized Store]
E --> F[Selector Layer]
F --> G[UI]
What This Architecture Enabled
The new engine became a foundation for multiple future features:
- multi-wallet management
- real-time notifications
- advanced filtering (per chain, per protocol, per token)
- complex derived analytics
- instant insights page
- personalized dashboards
- faster load times during spikes
Because the engine provided fast lookup and predictable updates, other teams built new features on top without worrying about performance.
Engineering Trade-offs
Advantages
- consistent real-time UX
- predictable update flow
- minimal duplication
- scalable with number of chains & protocols
- stable under heavy load
- no main-thread spikes
Costs
- more complex client-side architecture
- custom merging logic
- need for strong internal invariants
- careful selector dependency management
Limitations & Potential Improvements
Even though the system was performing well, future improvements could include:
Web Workers
Offload:
- large aggregations
- LP breakdown computations
- complex chain-level merges
At the time, we didn’t need workers because the batching and memoization were enough.
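If aggregation ever outgrew the main thread, the heaviest computations could move behind a worker boundary. A rough sketch of that shape, with hypothetical file and action names:
// aggregation.worker.js (hypothetical file)
self.onmessage = ({ data: positions }) => {
  const totals = {};
  for (const p of positions) {
    totals[p.chain] = (totals[p.chain] || 0) + p.balance;
  }
  self.postMessage(totals);
};

// main thread
const worker = new Worker(new URL("./aggregation.worker.js", import.meta.url));
worker.onmessage = ({ data: totals }) => {
  // chainTotalsComputed is a hypothetical action that stores the result.
  store.dispatch(chainTotalsComputed(totals));
};
worker.postMessage(Object.values(store.getState().positions));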
Local-first caching layer
Persist the last-known portfolio locally so the dashboard can render instantly on reload, before fresh data arrives.
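A minimal sketch with localStorage, assuming the slice is mounted at state.portfolio and the serialized snapshot stays small enough (IndexedDB would be the safer choice for very large portfolios):
const STORAGE_KEY = "portfolio:lastKnown";

// Persist a snapshot of the normalized slice after store updates
// (a real implementation would throttle these writes).
store.subscribe(() => {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(store.getState().portfolio));
});

// On startup, read the snapshot back and pass it to configureStore's
// preloadedState so the dashboard can render before the stream reconnects.
export function loadLastKnownPortfolio() {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : undefined;
}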
Results
The final UX characteristics:
- Instant filtering
- Zero UI freezes even under extreme multi-chain datasets
- Smooth real-time updates
- Stable behavior under high traffic
- Reduced network load
- Fast initial load for protocol-heavy users
- Predictable render patterns
This architecture became a core engine used across multiple parts of the product.
Key Takeaways for Engineers Building Real-Time Frontends
1. Normalize early, normalize always
Avoid deep nested state — it slows everything.
2. Batch updates
Continuous dispatch = death by a thousand cuts.
3. Selectors are your performance layer
Treat selectors as a computation graph.
4. Derived data is not free
Make it predictable, memoized, granular.
5. Streaming beats polling
Especially when the system scales across many protocols.
6. UX is the final metric
Real-time systems fail not when data is slow — but when UI feels heavy.
Final Thoughts
This project transformed the platform’s performance profile. It also shaped my approach as a web performance engineer:
- Real-time pipelines need flow, not just data.
- Rendering should be a controlled side-effect, not the default reaction.
- Performance is not just “make it fast” — it’s make it predictable.
In the end, what mattered most was the user experience: the dashboard stayed smooth, responsive, and trustworthy — even under extreme data loads.
And that’s what great frontend architecture should deliver.