
Cache-Aware Streamed Lookup

Updated 13 December 2025
  • Cache-aware streamed lookup is a technique that restructures memory accesses and computation phases to fit within cache hierarchies, using tiling and contiguous data layouts.
  • The method minimizes DRAM bandwidth bottlenecks by precomputing and fusing lookup phases within cache-resident tiles, leading to dramatic reductions in cache misses and latency.
  • It is applied in ultra-low-bit LLM inference, large-scale model serving, and networking, significantly improving throughput and reducing compute stage overhead.

Cache-Aware Streamed Lookup refers to a class of methods for maximizing the efficiency of memory-intensive lookup operations by aligning the data access pattern, tile size, and computation phases with the CPU/GPU cache hierarchy. These methods exploit temporal and spatial locality, tiling, and in-place processing to minimize memory bandwidth bottlenecks during large-scale inference, serving, or key-value retrieval tasks. The paradigm is prevalent in high-performance deep learning, large-scale model serving, and networking applications, with contemporary variants targeting ultra-low-bit inference, hardware-aware LLM decoding, and fast IP prefix lookup.

1. Motivation and Foundational Principles

Cache-aware streamed lookup is motivated by the observation that naive lookup or search operations—such as scalar table lookup or random-access key-value retrieval—severely underutilize available memory bandwidth when working sets exceed cache capacity. In these settings, each independent lookup typically incurs a cache miss, causing repetitive and non-contiguous DRAM accesses that limit sustained throughput to a fraction of the memory subsystem's peak.

The principal remedy is to design algorithms where the data layout and block/tile processing schedule are cache-sized and cache-aligned. Once a block of lookup data is loaded into cache (L1/L2/L3 or on-chip SRAM), the associated computation (e.g., table generation, lookup, accumulation) is fused within that cache residency window. This avoids repeated DRAM fetches, amortizes memory access costs, and exploits SIMD/vector units via contiguous vectorized loads.
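As a concrete illustration of this principle (not drawn from any specific cited system), the sketch below contrasts a naive scattered lookup with a blocked variant that streams a large table through a cache-sized window and serves every query that falls into the resident block; constants such as the 32 KB block budget are illustrative assumptions:

#include <stddef.h>
#include <stdint.h>

/* Naive form: each query indexes a large table at an arbitrary offset, so once
 * the table exceeds cache capacity most accesses miss and go to DRAM. */
void lookup_naive(const int16_t *table, const uint32_t *idx,
                  int32_t *out, size_t n_queries) {
    for (size_t q = 0; q < n_queries; q++)
        out[q] += table[idx[q]];                    /* scattered, miss-heavy */
}

/* Cache-aware streamed form: walk the table in cache-sized blocks and, while a
 * block is resident, serve every query whose index falls inside it. */
enum { BLOCK_ELEMS = 32 * 1024 / sizeof(int16_t) }; /* ~32 KB budget (assumed) */

void lookup_streamed(const int16_t *table, size_t table_len,
                     const uint32_t *idx, int32_t *out, size_t n_queries) {
    for (size_t base = 0; base < table_len; base += BLOCK_ELEMS) {
        size_t end = base + BLOCK_ELEMS < table_len ? base + BLOCK_ELEMS : table_len;
        for (size_t q = 0; q < n_queries; q++)      /* served from the resident block */
            if (idx[q] >= base && idx[q] < end)
                out[q] += table[idx[q]];
    }
}

In practice the queries would first be bucketed by block so each block is scanned only against its own queries; the point of the sketch is that DRAM traffic becomes one sequential pass over the table rather than one random access per query.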

A core example is Vec-LUT's cache-aware streamed lookup for ultra-low-bit LLM inference. The design constructs a single giant vectorized lookup table, tiles it to match the L1/L2 cache, and fuses precompute and lookup phases such that each memory block is both generated and consumed while still resident in cache. This approach sharply reduces memory channel pressure, yields sequential bandwidth utilization near the system's roofline, and accelerates large-batch computation (Li et al., 6 Dec 2025).

2. Algorithmic Structure and Tensor Layouts

A typical cache-aware streamed lookup strategy is distinguished by its block-tiling and data layout transformation, which orchestrate memory, compute, and accumulator usage:

  • Token-Contiguous Layout: All matrices (input activations, lookup table, output buffer) are stored such that, for a tiled subset of tokens, the relevant memory is laid out contiguously. For Vec-LUT, activations $A$, LUT $T$, and output $O$ use the layouts:
    • $A[k \times N + n]$ for $k \in [0, K)$, $n \in [0, N)$
    • $T[((k/g) \times 3^g + i) \times N + n]$
    • $O[m \times N + n]$
  • Tile-Contiguous Weights: The quantized/packed weights $W$ use tile-contiguous ordering, packed into blocks along the lookup (group) axis. This ensures that inner loops, especially those vectorizing over tokens, see contiguous address ranges, maximizing cache-line exploitation.
  • Tile Size Tuning: The tile sizes $(K_\text{tile}, N_\text{tile})$ are chosen such that the product $K_\text{tile} \times 3^g \times N_\text{tile} \times 2\,\text{B}$ matches the L1 cache budget, with $N_\text{tile}$ typically set to a multiple of the SIMD width (a sizing sketch follows this list).
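A minimal sizing sketch under the tile-cache constraint is given below; the L1 budget, the SIMD width, and the simple policy of fixing $N_\text{tile}$ at two SIMD vectors are illustrative assumptions, not values taken from the Vec-LUT paper:

#include <stdint.h>
#include <stdio.h>

/* Pick (K_tile, N_tile) so that K_tile * 3^g * N_tile * sizeof(int16_t)
 * stays within the L1 budget, with N_tile a multiple of the SIMD width. */
enum { L1_BUDGET_BYTES = 32 * 1024, SIMD_I16_LANES = 16 };

static int pow3(int g) { int p = 1; while (g-- > 0) p *= 3; return p; }

static void choose_tiles(int g, int *K_tile, int *N_tile) {
    int entries = pow3(g);                       /* 3^g LUT entries per group  */
    int n = 2 * SIMD_I16_LANES;                  /* e.g. two SIMD vectors wide */
    int k = L1_BUDGET_BYTES / (entries * n * (int)sizeof(int16_t));
    if (k < 1) { k = 1; n = SIMD_I16_LANES; }    /* fall back to one vector    */
    *K_tile = k;
    *N_tile = n;
}

int main(void) {
    int K_tile, N_tile;
    choose_tiles(3, &K_tile, &N_tile);           /* g = 3 -> 27 entries/group  */
    printf("K_tile=%d N_tile=%d tile_bytes=%d\n", K_tile, N_tile,
           K_tile * pow3(3) * N_tile * (int)sizeof(int16_t));
    return 0;
}

For $g = 3$ (27 sub-table entries per group) this yields a tile of about 30 KB, comfortably inside the assumed 32 KB budget.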

Across domains, these layout techniques align lookups, updates, and accumulations into memory regions whose cache residency can be guaranteed through careful sizing and loop structure (Li et al., 6 Dec 2025, Patel et al., 17 Nov 2025, Liu et al., 28 Aug 2025, Yegorov, 2018).

3. Pseudocode and Mathematical Foundations

The streamed lookup family is characterized by blocking, in-cache precompute, and hierarchical accumulation. Example pseudocode from Vec-LUT (Li et al., 6 Dec 2025) illustrates the pattern:

// K: reduction dimension, N: tokens (batch), g: weight group size,
// POW3_G: 3^g sub-table entries per group (a compile-time constant in practice)
for (int k0 = 0; k0 < K/g; k0 += K_tile) {       // tile over weight groups
  int k_max = min(K/g, k0 + K_tile);
  for (int n0 = 0; n0 < N; n0 += N_tile) {       // tile over tokens
    int n_max = min(N, n0 + N_tile);
    // 1) Allocate LUT_tile sized to stay resident in L1 cache
    int16_t LUT_tile[K_tile][POW3_G][N_tile];
    // 2) Precompute phase: populate LUT_tile from activations A[k0*g .. k_max*g, n0 .. n_max]
    // 3) Lookup + accumulate phase: index LUT_tile with packed weights W and accumulate
    //    into O[:, n0 .. n_max] while the tile is still in cache, then reuse the tile
  }
}

Key mathematical formulations underlying these methods include:

  • Sub-table Precompute (Vec-LUT):

$$T_{k,i,n} = \sum_{j=0}^{g-1} \bigl(\mathrm{trit}(i,j)-1\bigr) \cdot A[k\cdot g + j,\, n]$$

  • 1→N Lookup and Accumulate (a reference implementation follows this list):

$$O[m,n] \mathrel{+}= \sum_{k=0}^{K/g-1} T[k,\,W[m,k],\,n]$$

  • Tile-Cache Constraint:

$$K_{\text{tile}} \cdot 3^g \cdot N_{\text{tile}} \cdot 2\,\text{B} < \text{L1\_size}$$

  • Cache-Line Alignment: Tile layouts ensure that vector lookups of length $N_{\text{tile}}$ fall into sequential 64 B cache lines, optimizing throughput.
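The reference (untiled) C routines below make the two formulas concrete; the actual Vec-LUT kernel fuses both phases inside cache-resident tiles and vectorizes the inner loop over $n$, and the concrete types and index packing here are assumptions consistent with the layouts of Section 2:

#include <stdint.h>

/* j-th base-3 digit of i, for i in [0, 3^g): selects one ternary weight. */
static int trit(int i, int j) {
    while (j-- > 0) i /= 3;
    return i % 3;
}

/* Precompute: T[k][i][n] = sum_j (trit(i,j) - 1) * A[k*g + j][n],
 * using the token-contiguous layouts A[k*N + n] and T[(k*3^g + i)*N + n]. */
void precompute_luts(const int16_t *A, int16_t *T,
                     int K, int N, int g, int pow3_g) {
    for (int k = 0; k < K / g; k++)
        for (int i = 0; i < pow3_g; i++)
            for (int n = 0; n < N; n++) {
                int32_t acc = 0;
                for (int j = 0; j < g; j++)
                    acc += (trit(i, j) - 1) * A[(k * g + j) * N + n];
                T[(k * pow3_g + i) * N + n] = (int16_t)acc;
            }
}

/* Lookup + accumulate: O[m][n] += sum_k T[k][W[m][k]][n];
 * W[m][k] holds the packed group index in [0, 3^g). */
void lookup_accumulate(const int16_t *T, const uint8_t *W, int32_t *O,
                       int M, int K, int N, int g, int pow3_g) {
    for (int m = 0; m < M; m++)
        for (int k = 0; k < K / g; k++) {
            int idx = W[m * (K / g) + k];
            for (int n = 0; n < N; n++)        /* contiguous, SIMD-friendly loop */
                O[m * N + n] += T[(k * pow3_g + idx) * N + n];
        }
}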

This structure generalizes to other domains, such as networking (Yegorov, 2018), where an in-cache binary search tree guides memory-probe-efficient Bloom filter lookups for longest prefix match, or LLM serving (Liu et al., 28 Aug 2025), where cache-aware page selection processes the top-$K$ relevant blocks via bounding-box metadata scoring.

4. Application Domains and System Variants

Cache-aware streamed lookup has emerged as a keystone pattern in diverse high-throughput and low-latency settings:

  • Ultra-Low-Bit LLM Inference: Vec-LUT implements a vector lookup scheme for $1.58$–$4$-bit weight models on CPU, fusing in-cache tile generation and streaming lookup for rapid $K\times N$ matmul on edge devices (Li et al., 6 Dec 2025).
  • Paged LLM Decoding: TinyServe introduces query-aware page selection, partitioning key-value caches into small pages, scoring them via bounding-box metadata, and streaming only the most relevant pages into compute, leveraging fused CUDA kernels for minimal bus and kernel-launch overhead (Liu et al., 28 Aug 2025); a simplified scoring sketch follows this list.
  • Compressive Streaming Memory for Vision-LLMs: CacheFlow combines block packing, GRU-based page indexing, offload/rehydration, and top-$K$ consensus retrieval, with each phase engineered for streaming, cache-locality-preserving memory traffic and low inference latency (Patel et al., 17 Nov 2025).
  • Networking/Table Lookup: Guided Bloom-filter methods store prefix tables and search trees in CPU L1/L3 cache, reducing main-memory traffic in packet forwarding (Yegorov, 2018).
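The sketch below illustrates query-aware page selection in the spirit of TinyServe; it assumes each KV page stores the element-wise minimum and maximum of its keys as bounding-box metadata and scores pages by an upper bound on the query-key dot product. The exact scoring function, metadata format, and kernel fusion used in TinyServe may differ:

#include <stdlib.h>

/* Per-page metadata: element-wise min/max ("bounding box") of the keys it holds. */
typedef struct {
    const float *key_min;   /* [dim] */
    const float *key_max;   /* [dim] */
} PageMeta;

/* Upper bound on q . k over any key k inside the page's bounding box. */
static float page_upper_bound(const float *q, const PageMeta *p, int dim) {
    float bound = 0.0f;
    for (int d = 0; d < dim; d++) {
        float lo = q[d] * p->key_min[d], hi = q[d] * p->key_max[d];
        bound += lo > hi ? lo : hi;
    }
    return bound;
}

/* Mark the top_k highest-scoring pages; only those are streamed into the
 * attention kernel (simple O(n_pages * top_k) selection for clarity). */
void select_pages(const float *q, const PageMeta *pages, int n_pages,
                  int dim, int top_k, int *selected /* [n_pages], 0/1 */) {
    float *score = malloc((size_t)n_pages * sizeof *score);
    if (!score) return;
    for (int p = 0; p < n_pages; p++) {
        score[p] = page_upper_bound(q, &pages[p], dim);
        selected[p] = 0;
    }
    for (int r = 0; r < top_k && r < n_pages; r++) {
        int best = -1;
        for (int p = 0; p < n_pages; p++)
            if (!selected[p] && (best < 0 || score[p] > score[best]))
                best = p;
        selected[best] = 1;
    }
    free(score);
}

Only the selected pages are then streamed into the attention kernel, which is where the bandwidth and kernel-launch savings come from.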

A generalized table of representative instantiations:

System | Core Lookup Structure | Cache-Aware Feature
Vec-LUT (Li et al., 6 Dec 2025) | Tiled vectorized LUT | Fused precompute/lookup in L1 tiles
TinyServe (Liu et al., 28 Aug 2025) | Paged KV caches + scoring | Bounding-box metadata, top-$K$ sparse fetch
CacheFlow (Patel et al., 17 Nov 2025) | Blocked KV + GRU indices | Streaming block processing, staged offload/rehydration
Guided-BF (Yegorov, 2018) | Bloom filter + BST | All data structures in L1/L3, guided search

5. Quantitative Performance and Empirical Outcomes

Cache-aware streamed lookup yields significant empirical improvements, including:

  • Data Bandwidth Utilization: For Vec-LUT, DRAM bandwidth utilization for lookup operations jumps from ~35% (scalar LUT) to 90% (cache-tiled streaming). Cache misses drop below 10% (Li et al., 6 Dec 2025).
  • Cost Reduction: In matrix-multiply kernels using vector LUT, the “lookup” stage drops from 47% to under 1% of total compute time (Li et al., 6 Dec 2025).
  • End-to-End Speedup:
    • Vec-LUT: $1.8\times$–$3.2\times$ single-thread kernel speedup; $4.2\times$ faster mpGeMM for parallel prefilling; $2$–$3\times$ higher tokens/s on real edge SoCs (Li et al., 6 Dec 2025).
    • TinyServe: $2.1$–$2.3\times$ latency reduction and memory savings on TinyLLaMA and GPT2; overall $2.1\times$ speedup for LLaMA-1.3B at 32K context (Liu et al., 28 Aug 2025).
    • CacheFlow: Up to $87\%$ fewer tokens processed, $6\times$ reduction in cache size, and $2\times$ lower latency, all at equal or better accuracy versus strong baselines (Patel et al., 17 Nov 2025).
    • Guided-BF: Reduces per-packet bit-lookups by $2\times$ over a linear Bloom-filter scan and attains an order-of-magnitude throughput gain for IPv6 default-dominated traffic (Yegorov, 2018).

6. Comparative Advantages and Limitations

The main advantages of cache-aware streamed lookup methods include:

  • High Memory Efficiency: The lookup procedure is restructured so that memory accesses are sequential, cache-friendly, and rarely leave the cache-resident working set.
  • Near-Roofline Throughput: By minimizing random memory access and fusing generation/consumption, these methods approach hardware bandwidth limits.
  • Low Latency, High Throughput: Streaming strategies and in-cache computation minimize expensive DDR/HBM access and kernel launches.
  • Tunability: Both tile sizes (for cache fit) and sparsity/page selection hyperparameters can be tuned for the specific hardware and workload.

Limitations may include:

  • Complex Data Layouts: The required layouts (token-contiguous, tile-major, meta-indexed) add non-trivial implementation complexity.
  • Memory Size Tuning: Aggressive cache-fitting is hardware-specific; mis-estimation harms performance.
  • Workload Suitability: Most beneficial when memory bandwidth or random memory access is the bottleneck; less impact if the model is compute-bound.

7. Historical Context and Future Prospects

Cache-aware streamed lookup draws from longstanding research in cache-friendly data structures (e.g., explicit blocking, cache-aware BSTs), with modern techniques motivated by deep learning inference, scalable model serving, and high-speed networking (Yegorov, 2018). The paradigm continues to evolve to support emerging needs.

A plausible implication is continued adaptation to more heterogeneous hardware, increasing context windows in LLMs, and integration of dynamic sparsity and out-of-core memory management, making cache-aware streamed lookup central to the next generation of high-throughput, low-latency AI and networking systems.
