Stepan Samko | Consulting [email protected]

Building a High-Scale Real-Time Portfolio Engine for a Multi-Chain DeFi Platform

An engineering story about designing a real-time data engine that stayed smooth under extreme scale — and the performance architecture behind it.


Introduction

This case study describes how I designed and implemented a real-time portfolio engine for a high-traffic DeFi analytics platform. The engine powered a dashboard that aggregated positions across many wallets, chains, and DeFi protocols.

The challenge was not “fetch and display some balances.” It was stream, merge, normalize, aggregate, filter, and recompute derived data in real time — without freezing the UI, and without overwhelming the network or main thread.

At the peak, the platform tracked:

This was not a problem solved by adding React.memo or splitting components. It required a full data architecture designed for flow, batching, incremental updates, and reactive performance.

This is the story of how it was built.


Background & Motivation

Originally, the system relied on HTTP requests per protocol, per chain, per wallet. This worked fine when the app supported:

But as adoption grew and chains multiplied, HTTP became the bottleneck:

Worse: each user had their own combination of wallets and protocols, so server caching was ineffective.

We needed something that:

  1. Streamed updates continuously
  2. Avoided rendering storms
  3. Merged incremental data safely
  4. Updated derived data instantly
  5. Scaled with users who had hundreds of positions

So we rebuilt the entire client-side data pipeline around Server-Sent Events (SSE), normalized state, and batched updates.


System Overview

Here is the high-level architecture of the final system.

flowchart TD
    A[Blockchain Networks] --> B[Backend Aggregator]
    B -->|SSE Stream| C[Browser EventSource]
    C --> D[Batch Collector]
    D --> E["Normalized Store
    (Redux Slice)"]
    E --> F["Selectors
    (Reselect)"]
    F --> G[UI Components]
    
    G -->|User Filters/Search| F
    G -->|Direct Protocol Page| H[Priority Fetch via RTK Query]
    H --> E

Key points:


Streaming Architecture (SSE)

Why SSE?

WebSockets were unnecessary: the data flow was strictly one-directional (server to client), updates arrived at a relaxed cadence, and anything user-initiated could go over plain HTTP.

The Stream

The stream sent incremental updates every ~5 minutes, or immediately when the user pressed “refresh”.

Examples of events:

{
  "type": "wallet_update",
  "wallet": "0xabc...",
  "chain": "ethereum",
  "protocol": "aave",
  "positions": [...]
}

Important: updates were fragments, not complete snapshots, which meant the client had to merge each event into existing state (upserts) rather than replace it wholesale.


The Data Flow Inside the Frontend

Step-by-step

sequenceDiagram
    participant SSE as SSE Stream
    participant BC as Batch Collector
    participant NS as Normalized Store
    participant SEL as Selectors
    participant UI as UI

    SSE->>BC: Incremental event
    BC->>BC: Collect for N ms
    BC->>NS: Apply merged batch
    NS->>SEL: Trigger selector recomputation
    SEL->>UI: Provide updated derived data
    UI->>SEL: Apply filters instantly

Batch Collector

The Batch Collector reduced pressure on React by grouping events that arrived within a short window into a single store dispatch:

Pseudocode (simplified):

let queue = [];
let timeout = null;

function onSseEvent(evt) {
  queue.push(parse(evt));

  // Start a flush timer only if one isn't already pending.
  if (!timeout) {
    timeout = setTimeout(flushQueue, 100);
  }
}

function flushQueue() {
  // Collapse duplicate and overlapping events before touching the store.
  const batch = mergeEvents(queue);
  store.dispatch(updateFromSse({ events: batch }));

  queue = [];
  timeout = null;
}
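
One detail worth making explicit: the flush window is a latency-versus-churn trade-off. Too small and you are back to one dispatch per event; too large and the dashboard visibly lags behind the stream. A window on the order of 100 ms is short enough to feel instant to the user while still collapsing bursts into a single render pass.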

State Architecture & Normalization

Why Normalization?

We had four interrelated entity types (wallets, protocols, chains, and positions) connected by many-to-many relationships.

Non-normalized state leads to deep nesting, duplicated data, and updates that touch large subtrees and trigger far more re-renders than necessary.

Normalized Model

The normalized store looked roughly like:

{
  wallets: { [walletId]: { protocolIds: [], ... } },
  protocols: { [protocolId]: { chainIds: [], ... } },
  chains: { [chainId]: { positionIds: [], ... } },
  positions: { [positionId]: { type, balance, ... } }
}

Every SSE update would:

  1. upsert wallets
  2. upsert protocols
  3. upsert chains
  4. upsert positions
  5. update relationships
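
For illustration only, here is a sketch of how a similar normalized shape could be maintained with Redux Toolkit's createEntityAdapter; the payload shape is an assumption, and the production store used hand-written merge helpers (shown next) instead:

import { createEntityAdapter, createSlice } from "@reduxjs/toolkit";

// Hypothetical adapters; each entity is assumed to carry an `id` field.
const walletsAdapter = createEntityAdapter();
const positionsAdapter = createEntityAdapter();

const entitiesSlice = createSlice({
  name: "portfolioEntities",
  initialState: {
    wallets: walletsAdapter.getInitialState(),
    positions: positionsAdapter.getInitialState(),
  },
  reducers: {
    upsertFromSse(state, action) {
      // upsertMany merges by id, mirroring the upsert steps above.
      walletsAdapter.upsertMany(state.wallets, action.payload.wallets ?? []);
      positionsAdapter.upsertMany(state.positions, action.payload.positions ?? []);
    },
  },
});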

Redux Slice Structure

import { createSlice } from "@reduxjs/toolkit";

const portfolioSlice = createSlice({
  name: "portfolio",
  initialState,
  reducers: {
    // Applies one merged batch of SSE events to the normalized store.
    updateFromSse(state, action) {
      for (const event of action.payload.events) {
        mergeWallet(state, event);
        mergeProtocol(state, event);
        mergeChain(state, event);
        mergePositions(state, event);
      }
    }
  }
});

Derived Data & Selectors

The engine needed to compute per-chain, per-protocol, and per-token aggregations, plus filtered views of each driven by the user's search and filter inputs.

These computations were expensive. We solved this with Reselect selectors and dependency graphs.

Selector Graph

graph TD
    A[Raw Positions] --> B[Chain Aggregation]
    A --> C[Protocol Aggregation]
    A --> D[Token Aggregation]
    B --> E[Filtered Chains]
    C --> F[Filtered Protocols]
    D --> G[Filtered Tokens]
    H[UI Filters] --> E
    H --> F
    H --> G

Memoized Selector Example

import { createSelector } from "reselect";

export const selectProtocolBalance = createSelector(
  [
    state => state.positions,
    (_, protocolId) => protocolId,
  ],
  (positions, protocolId) => {
    // positions is a normalized map, so take its values before filtering.
    return Object.values(positions)
      .filter(p => p.protocol === protocolId)
      .reduce((acc, p) => acc + p.balance, 0);
  }
);
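
One caveat worth noting about parametric selectors like this (a general Reselect behaviour rather than anything specific to this project): with the classic memoizer, the cache only holds the most recent arguments, so calling one shared selector with many different protocolIds defeats memoization. A common pattern is a selector factory per component instance:

import { createSelector } from "reselect";

// Factory: each component instance gets its own memoized selector,
// so different protocolIds don't evict each other's cached results.
export const makeSelectProtocolBalance = () =>
  createSelector(
    [state => state.positions, (_, protocolId) => protocolId],
    (positions, protocolId) =>
      Object.values(positions)
        .filter(p => p.protocol === protocolId)
        .reduce((acc, p) => acc + p.balance, 0)
  );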

Selectors ensured that derived data was recomputed only when its inputs actually changed, keeping filtering and aggregation effectively free on unrelated updates.


Real-Time UX Concerns

Avoiding UI Freezes

We optimized:

Special care went to not putting large arrays in React component state.

Avoiding Render Storms

Four strategies:

  1. Dispatch batching
  2. Normalized updates
  3. Selector-level memoization
  4. Top-level presentational components only re-rendering on meaningful changes
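
As a sketch of the last strategy (component and prop names here are hypothetical), leaf components subscribed to narrow memoized slices and compared results shallowly, so unrelated store updates never reached them:

import React from "react";
import { useSelector, shallowEqual } from "react-redux";
import { selectProtocolBalance } from "./selectors";

// Re-renders only when this protocol's derived balance actually changes.
const ProtocolRow = React.memo(function ProtocolRow({ protocolId }) {
  const balance = useSelector(
    state => selectProtocolBalance(state, protocolId),
    shallowEqual
  );
  return <div>{protocolId}: {balance}</div>;
});

export default ProtocolRow;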

Priority Fetching

When a user opened a protocol page, the system checked whether all of that protocol's chains were already in the store and, if not, dispatched a priority fetch via RTK Query.

Pseudocode:

// Fetch only the chains we haven't already received from the stream.
if (!hasAllChains(protocolId)) {
  dispatch(fetchProtocolChains(protocolId));
}

This ensured the UI loaded ASAP even when the SSE stream was still catching up.
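
A minimal sketch of what that priority fetch could look like with RTK Query; the endpoint name and URL are assumptions, not the platform's real API:

import { createApi, fetchBaseQuery } from "@reduxjs/toolkit/query/react";

export const portfolioApi = createApi({
  reducerPath: "portfolioApi",
  baseQuery: fetchBaseQuery({ baseUrl: "/api" }),
  endpoints: builder => ({
    // Fetches every chain's data for one protocol ahead of the SSE stream.
    getProtocolChains: builder.query({
      query: protocolId => `protocols/${protocolId}/chains`,
    }),
  }),
});

export const { useGetProtocolChainsQuery } = portfolioApi;

Responses from this endpoint fed the same normalized store (the Priority Fetch edge back into the store in the overview diagram), for example via an extraReducers matcher on the fulfilled action.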


Performance Characteristics

The improvement was dramatic:

Area              | Before                       | After
Loading           | many parallel HTTP requests  | single SSE stream + selective fetch
UI Responsiveness | frequent freezes             | smooth even with huge portfolios
Filtering         | slow, recomputed everything  | instant, selector-based
Network           | bursty, redundant            | incremental, lightweight
User Experience   | jumpy, slow                  | real-time, fluid

The dashboard stayed fully responsive even for “DeFi degen” portfolios with:


Frontend Pipeline in Detail

Data enters the engine:

  1. Stream arrives via SSE
  2. Event is parsed
  3. Added to batch queue
  4. Merged into normalized store
  5. Selectors recompute minimal changes
  6. UI updates instantly
  7. User filters feed back into selectors

flowchart LR
    IN(SSE Event) --> Q(Batch Queue)
    Q --> M(Merge)
    M --> NS(Normalized Store)
    NS --> S(Selectors)
    S --> OUT(UI)

Representative Pseudocode

SSE Handler

const source = new EventSource("/stream");

source.onmessage = (msg) => {
  const event = JSON.parse(msg.data);
  onSseEvent(event);
};

source.onerror = () => {
  // Reconnect logic + backoff
};
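
EventSource reconnects on its own after transient drops; the sketch below (illustrative rather than the production code) shows one way to add capped exponential backoff by recreating the source after fatal errors:

let retries = 0;

function connect() {
  const source = new EventSource("/stream");

  // Reset the backoff once a connection is established.
  source.onopen = () => { retries = 0; };

  source.onmessage = (msg) => onSseEvent(JSON.parse(msg.data));

  source.onerror = () => {
    // Drop this connection and retry with capped exponential backoff.
    source.close();
    const delay = Math.min(30_000, 1000 * 2 ** retries++);
    setTimeout(connect, delay);
  };
}

connect();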

Merge Function

function mergeWallet(state, evt) {
  const id = evt.wallet;
  if (!state.wallets[id]) state.wallets[id] = createWallet(id);

  // append protocol, chain, etc.
  if (!state.wallets[id].protocolIds.includes(evt.protocol)) {
    state.wallets[id].protocolIds.push(evt.protocol);
  }
}
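
The other merge helpers followed the same upsert pattern. A sketch of mergePositions (position field names beyond the event example above are assumptions):

function mergePositions(state, evt) {
  for (const pos of evt.positions ?? []) {
    // Upsert the position itself...
    state.positions[pos.id] = { ...state.positions[pos.id], ...pos };

    // ...and keep the chain -> position relationship in sync.
    const chain = state.chains[evt.chain];
    if (chain && !chain.positionIds.includes(pos.id)) {
      chain.positionIds.push(pos.id);
    }
  }
}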

Aggregation Example

export const selectChainTotals = createSelector(
  [state => state.positions, (_, chain) => chain],
  (positions, chain) => {
    // Same normalized-map pattern: values first, then filter and sum.
    return Object.values(positions)
      .filter(p => p.chain === chain)
      .reduce((acc, p) => acc + p.balance, 0);
  }
);

Internal Client-Side Architecture Diagram

flowchart TD
    A[SSE Source] --> B[Event Parser]
    B --> C[Batch Queue]
    C --> D[Merge Reducer]
    D --> E[Normalized Store]
    E --> F[Selector Layer]
    F --> G[UI]

What This Architecture Enabled

The new engine became a foundation for multiple future features:

Because the engine provided fast lookup and predictable updates, other teams built new features on top without worrying about performance.


Engineering Trade-offs

Advantages

Costs


Limitations & Potential Improvements

Even though the system was performing well, future improvements could include:

Web Workers

Offload expensive merge and aggregation work to a worker thread, keeping the main thread free for rendering.

At the time, we didn’t need workers because the batching and memoization were enough.
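
If the main thread ever did become the bottleneck, the offload might look something like this (purely hypothetical, never built; setChainTotals is an imaginary action):

// aggregation.worker.js (hypothetical)
self.onmessage = ({ data: positions }) => {
  // Heavy aggregation runs off the main thread.
  const totalsByChain = {};
  for (const p of positions) {
    totalsByChain[p.chain] = (totalsByChain[p.chain] ?? 0) + p.balance;
  }
  self.postMessage(totalsByChain);
};

// main thread
const worker = new Worker(new URL("./aggregation.worker.js", import.meta.url));
worker.onmessage = ({ data }) => store.dispatch(setChainTotals(data));
worker.postMessage(Object.values(store.getState().positions));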

Local-first caching layer

Persist last-known portfolio to survive reloads instantly.
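
A minimal sketch of what that could look like, assuming the normalized portfolio slice is small enough to serialize (the storage key and write interval are illustrative):

const STORAGE_KEY = "portfolio-cache-v1";

// Hydrate: read the last-known portfolio before the first SSE event arrives.
export function loadCachedPortfolio() {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY)) ?? undefined;
  } catch {
    return undefined;
  }
}

// Persist: write the slice (assumed mounted under `portfolio`) at most every 5 s.
export function persistPortfolio(store) {
  let lastWrite = 0;
  store.subscribe(() => {
    const now = Date.now();
    if (now - lastWrite < 5000) return;
    lastWrite = now;
    localStorage.setItem(STORAGE_KEY, JSON.stringify(store.getState().portfolio));
  });
}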


Results

The final UX characteristics:

This architecture became a core engine used across multiple parts of the product.


Key Takeaways for Engineers Building Real-Time Frontends

1. Normalize early, normalize always

Avoid deeply nested state; it slows everything down.

2. Batch updates

Continuous dispatch = death by a thousand cuts.

3. Selectors are your performance layer

Treat selectors as a computation graph.

4. Derived data is not free

Make it predictable, memoized, granular.

5. Streaming beats polling

Especially when the system scales across many protocols.

6. UX is the final metric

Real-time systems fail not when data is slow — but when UI feels heavy.


Final Thoughts

This project transformed the platform’s performance profile. It also shaped my approach as a web performance engineer:

In the end, what mattered most was the user experience: the dashboard stayed smooth, responsive, and trustworthy — even under extreme data loads.

And that’s what great frontend architecture should deliver.