
LEAP Architecture: Multi-Domain Frameworks

Updated 9 January 2026
  • "LEAP" names a set of structurally unrelated, modular systems spanning neural networks, hardware accelerators, 3D vision, graph ML, and secure execution environments.
  • Each system leverages domain-specific methodology, such as latent perturbations, PIM-NoC partitioning, or pose-free 3D modeling, to optimize performance.
  • The frameworks report substantial gains in throughput, energy efficiency, or accuracy over prior baselines in their respective fields.

The term "LEAP Architecture" encompasses a number of notable, structurally distinct systems and architectural frameworks across machine learning, computer architecture, computer vision, quantum circuit synthesis, secure execution, and scientific instrumentation. The following article surveys the technical core of these architectures as originally described in peer-reviewed or archival sources, referencing the specific system acronym expansion or definition in each context.

1. Neural Network Architectures: Latent Encoding of Atypical Perturbation (LEAP-net)

LEAP-net is a neural network architecture designed for modeling curative perturbations in power transmission grids under topological changes (Donnot et al., 2019). The grid is represented as an undirected graph $G = (N, E)$, where each node $i \in N$ has an injection $x_i$ (generation or load) and each edge $j \in E$ is a high-voltage line. A grid topology $\tau \in \{0,1\}^T$ encodes reconfiguration events.

The LEAP-net mapping proceeds as follows:

  • An encoder $E$ projects injection vectors to a latent state $h = E(x) \in \mathbb{R}^d$.
  • A latent-perturbation module $L_\tau$ receives $h$ and applies a two-stage process using subnetworks $e$ and $d$:
    • $m = e(h) \odot \tau$, where $e: \mathbb{R}^d \to \mathbb{R}^T$ and $\odot$ is the element-wise product.
    • $\Delta h = d(m)$, where $d: \mathbb{R}^T \to \mathbb{R}^d$.
  • The perturbed latent state $z = h + \Delta h$ is decoded by $D$ to predicted line flows $\hat{y} = D(z)$.

The network is trained via mean squared error on power-flow simulation pairs $(x, \tau, y)$, with an explicit transfer-learning protocol: train on a "reference" topology and all possible "unary" topology changes, then test on previously unseen combinations ("super-generalization") with no parameter adaptation.
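
A minimal PyTorch sketch of this mapping follows; the layer widths and the single-linear-layer choices for $e$ and $d$ are illustrative assumptions, not the authors' published configuration:

```python
import torch
import torch.nn as nn

class LEAPNet(nn.Module):
    """Sketch of the LEAP-net mapping: encode -> latent perturbation -> decode."""
    def __init__(self, n_inj: int, n_tau: int, n_out: int, d: int = 128):
        super().__init__()
        self.E = nn.Sequential(nn.Linear(n_inj, d), nn.ReLU())   # encoder E
        self.e = nn.Linear(d, n_tau)                             # e: R^d -> R^T
        self.d = nn.Linear(n_tau, d)                             # d: R^T -> R^d
        self.D = nn.Linear(d, n_out)                             # decoder D

    def forward(self, x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        h = self.E(x)            # latent state h = E(x)
        m = self.e(h) * tau      # mask latent projection by topology: m = e(h) ⊙ τ
        z = h + self.d(m)        # perturbed latent z = h + Δh, with Δh = d(m)
        return self.D(z)         # predicted line flows ŷ = D(z)

# Training uses MSE on simulated (x, τ, y) triples, e.g.:
# loss = nn.functional.mse_loss(model(x, tau), y)
```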

2. Accelerator Microarchitectures: LLM Inference on Scalable PIM-NoC Architecture

LEAP denotes a non-von Neumann accelerator integrating Processing-in-Memory (PIM) and a Network-on-Chip (NoC) for LLM inference (Wang et al., 2025). This system employs:

  • Tier 1: PIM arrays adjacent to memory banks (e.g., ReRAM crossbars, SRAM-based compute-in-memory).
  • Tier 2: A 2D-mesh NoC interconnects all PIM arrays and conventional accelerators (e.g., systolic arrays for high-dynamicity layers).

The mapping of LLM operations is orchestrated by a partition controller, distinguishing:

  • Static operations (large matrix-multiplies) mapped to PIM arrays.
  • Dynamic operators (softmax, bias adds, etc.) mapped to NoC-attached IMC engines.

A design-space search refines layer-to-unit mappings to minimize off-chip/on-chip communication and balance memory utilization, formulated as an ILP with tradeoff parameters $(\alpha, \beta)$. Tile-based pipelining and fine-grained parallelism allow for high throughput, with typical tile sizes of $T_r \times T_k = 64 \times 256$. Quantitatively, LEAP achieves up to $3$–$5\times$ throughput and $20\times$ energy efficiency versus A100 GPU baselines.
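
The paper's ILP itself is not reproduced in this survey; the toy sketch below only illustrates the shape of the tradeoff, using an invented cost model in which $(\alpha, \beta)$ weight communication volume and utilization imbalance, and exhaustive search stands in for the solver:

```python
from itertools import product

# Hypothetical per-layer costs; a real system would derive these from
# profiling and a hardware model, not from constants like these.
layers = {
    "qkv_proj": {"pim_cost": 1.0, "noc_cost": 4.0, "comm": 2.0},  # static matmul
    "softmax":  {"pim_cost": 6.0, "noc_cost": 1.0, "comm": 0.5},  # dynamic op
    "ffn":      {"pim_cost": 1.5, "noc_cost": 5.0, "comm": 3.0},
}

def total_cost(assignment, alpha=1.0, beta=0.5):
    """Toy objective: compute cost + alpha * cross-unit communication
    + beta * utilization imbalance between the two tiers."""
    compute = sum(layers[l]["pim_cost" if u == "PIM" else "noc_cost"]
                  for l, u in assignment.items())
    comm = alpha * sum(layers[l]["comm"] for l, u in assignment.items()
                       if u == "NoC")
    pim_share = sum(u == "PIM" for u in assignment.values()) / len(assignment)
    return compute + comm + beta * abs(pim_share - 0.5)

# Exhaustive enumeration replaces the ILP solver on this tiny instance.
best = min((dict(zip(layers, units))
            for units in product(["PIM", "NoC"], repeat=len(layers))),
           key=total_cost)
print(best)  # expected: static matmuls on PIM, softmax on the NoC-side engine
```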

3. 3D Computer Vision: Liberate Sparse-view 3D Modeling from Camera Poses (LEAP)

LEAP proposes a fully pose-free approach for multi-view 3D modeling, eliminating the reliance on explicit or estimated camera poses (Jiang et al., 2023). The pipeline:

  • Encodes each RGB input via a frozen ViT backbone, aggregates multi-view features.
  • Holds a learnable voxelwise "neural volume" $V$ containing scene-agnostic geometry and appearance priors.
  • Lifts 2D features into 3D via feature-similarity-driven cross-attention transformers, followed by self-attention within the volume.
  • The decoded volume is projected to a density-feature field for direct volume rendering without requiring pose refinement or 2D–3D reprojection at inference.

LEAP yields state-of-the-art performance for sparse-view 3D reconstruction, substantially outperforming pose-dependent generalizable NeRFs under adverse pose uncertainties, and operates with a forward-pass runtime $\sim 400\times$ faster than optimization-based approaches such as PixelNeRF.
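
A schematic PyTorch sketch of the lifting step, in which learnable voxel queries from the neural volume $V$ cross-attend to frozen-ViT image features; the dimensions and the use of single attention layers are simplifying assumptions:

```python
import torch
import torch.nn as nn

class VolumeLifter(nn.Module):
    """Sketch: lift 2D multi-view features into a learnable neural volume."""
    def __init__(self, n_voxels: int = 16**3, dim: int = 256, heads: int = 8):
        super().__init__()
        # Scene-agnostic neural volume V: one learnable embedding per voxel.
        self.volume = nn.Parameter(torch.randn(n_voxels, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_feats: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, n_views * n_tokens, dim) from a frozen ViT backbone.
        B = img_feats.shape[0]
        q = self.volume.unsqueeze(0).expand(B, -1, -1)   # voxel queries
        v, _ = self.cross_attn(q, img_feats, img_feats)  # 2D -> 3D lifting
        v, _ = self.self_attn(v, v, v)                   # refine within volume
        return v  # decoded downstream into a density-feature field

feats = torch.randn(2, 5 * 196, 256)   # e.g., 5 views of 14x14 ViT tokens
volume = VolumeLifter()(feats)         # (2, 4096, 256)
```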

4. Graph Machine Learning: LEArnable toPology Augmentation (LEAP)

In graph machine learning, LEAP (LEArnable toPology augmentation) is a framework for inductive link prediction (Samy et al., 2025), addressing graphs with dynamic node arrival. The architecture comprises:

  • Selection of anchor nodes $A$ (via degree/PageRank).
  • For each new node $i$, an MLP $g$ predicts "soft" connection weights $\tilde w_i = g(x_i) \in [0,1]^k$ to each anchor.
  • The augmented graph connectivity is $\tilde A = A + \hat W + \hat W^\top$.
  • A GNN encoder operates on this topology, supporting message passing between both the original and newly inducted nodes.
  • The system is trained with a dual loss: an MLP loss aligning $\tilde w_{i,j}$ to $1/\mathrm{dist}(i, a_j)$, plus a negative-sampled link-prediction loss.

This augmentation provides in situ topology for newly arriving nodes, improving inductive link-prediction AUC and precision by up to 22% and 17%, respectively, on benchmarks.
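
A minimal sketch of the augmentation step, assuming a sigmoid-squashed MLP for $g$ and dense tensors for clarity (a real system would operate on sparse graphs):

```python
import torch
import torch.nn as nn

def augment_topology(adj: torch.Tensor, x_new: torch.Tensor,
                     anchors: torch.Tensor, g: nn.Module) -> torch.Tensor:
    """Sketch of LEAP-style topology augmentation for newly arrived nodes.

    adj:     (n, n) adjacency of the existing graph
    x_new:   (m, f) features of m new nodes
    anchors: (k,) indices of anchor nodes (chosen by degree / PageRank)
    g:       MLP predicting soft weights w̃_i = g(x_i) in [0, 1]^k
    """
    n, m = adj.shape[0], x_new.shape[0]
    w = torch.sigmoid(g(x_new))              # (m, k) soft anchor weights
    W_hat = torch.zeros(n + m, n + m)
    W_hat[n:, anchors] = w                   # new-node -> anchor edges
    A = torch.zeros(n + m, n + m)
    A[:n, :n] = adj
    return A + W_hat + W_hat.T               # Ã = A + Ŵ + Ŵᵀ

# Hypothetical usage: a 2-layer MLP as g, 3 anchors, 2 arriving nodes.
g = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
adj = (torch.rand(10, 10) < 0.2).float()     # toy random graph
a_tilde = augment_topology(adj, torch.randn(2, 8), torch.tensor([0, 3, 7]), g)
```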

5. Secure Execution Environments: TrustZone-Based TEE for Mobile Apps (LEAP)

LEAP, in this context, is a scalable TEE framework for ARM TrustZone supporting resource-adaptive, developer-friendly execution of intelligent mobile apps (Sun et al., 2021). The architecture features:

  • A normal-world kernel module (pKM) and a small trusted OS (tKM) in the secure world.
  • Per-sandbox isolation via TrustZone's stage-2 MMU, handling dynamic allocation and mediation of physical cores, RAM, and peripherals (e.g., GPU) at the page-table level.
  • An offline DevOps tool that splits DL apps into protected (sc-pAPP) and normal (pAPP) components, integrating with the host Linux runtime.
  • Support for dynamic resource adaptation in response to CPU and memory pressure, automatic device handoff/suspension, and negligible virtualization overhead.

LEAP exhibits a $3.57\times$ speedup relative to prior secure execution frameworks, with near-native performance on GPU accelerators.
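
LEAP's actual mechanisms live in kernel modules and secure-world firmware, so no faithful code listing is possible here; purely to illustrate the resource-adaptation idea, the toy policy below uses invented names and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Sandbox:
    name: str
    cores: int
    mem_pages: int

def adapt(sandboxes: list[Sandbox], cpu_pressure: float, mem_pressure: float):
    """Toy policy: shrink secure sandboxes when the normal world is under
    pressure, mirroring LEAP's dynamic mediation of cores and RAM (the real
    system enforces this at the stage-2 page-table level, not in Python)."""
    for sb in sandboxes:
        if cpu_pressure > 0.8 and sb.cores > 1:
            sb.cores -= 1                              # hand a core back to Linux
        if mem_pressure > 0.9:
            sb.mem_pages = max(sb.mem_pages // 2, 64)  # reclaim secure pages
    return sandboxes

print(adapt([Sandbox("sc-pAPP", cores=2, mem_pages=1024)], 0.9, 0.95))
```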

6. Scientific Instrumentation: The Large European Array for Pulsars (LEAP)

The Large European Array for Pulsars is an ultra-sensitive tied-array radio telescope formed by coherently adding baseband signals from five European facilities (Bassa et al., 2015). Each site records dual-polarization, Nyquist-sampled streams; data are transferred for offline phase, delay, and polarization calibration and final coherent summation. The digital pipeline incorporates:

  • Per-site digitization, polyphase filtering, and packetization.
  • Centralized software FX correlation, global fringe fitting, and amplitude weighting using per-telescope system noise.
  • Coherent voltage summation, yielding an effective aperture equivalent to a $D = 195$ m dish.

LEAP achieves coherent summation efficiency exceeding $80\%$ and improves pulse-arrival-time rms by more than $2\times$ over single-dish data.
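
A simplified NumPy illustration of why weighted coherent summation helps; the synthetic pulse, delays, and inverse-noise-power weights are stand-ins for LEAP's fringe fitting and amplitude calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tel, n_samp = 5, 4096
t = np.arange(n_samp)
pulse = np.exp(-0.5 * ((t - 2048) / 50.0) ** 2)   # common pulsar signal

# Each site records the signal with its own delay and noise level.
delays = rng.integers(0, 32, n_tel)
noise_lvl = rng.uniform(0.5, 2.0, n_tel)          # stand-in for system noise
voltages = np.stack([np.roll(pulse, d) + s * rng.standard_normal(n_samp)
                     for d, s in zip(delays, noise_lvl)])

# Undo the fitted delays, then weight each stream by inverse noise power.
aligned = np.stack([np.roll(v, -d) for v, d in zip(voltages, delays)])
weights = 1.0 / noise_lvl**2
tied = (weights[:, None] * aligned).sum(axis=0) / weights.sum()

resid = lambda x: np.std(x - pulse)
print("per-dish residual noise:", np.round([resid(v) for v in aligned], 2))
print(f"tied-array residual noise: {resid(tied):.2f}")
```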

7. Other Representative Architectures

  • LEAP-VO (Visual Odometry): Proposes a long-term, anchor-augmented, temporally probabilistic point tracking module as the front-end for robust monocular visual odometry, with explicit uncertainty estimation and inter-track transformer refinement (Chen et al., 2024).
  • LEAP (Quantum Circuit Synthesis): Introduces iterative, prefix-based search, incremental local re-optimization, and dimensionality reduction over A*-guided circuit search, scaling numerical synthesis from four to six qubits (Smith et al., 2021).
  • LEAP (Molecular Synthesisability Scoring): GPT-2-based architecture for route-depth regression from SMILES input, dynamically integrating intermediate-conditioned synthesis accessibility for drug design (Calvi et al., 2024).

8. Comparative Summary Table

| System | Domain | Architectural Principle / Signal Feature |
| --- | --- | --- |
| LEAP-net | Power grid ML | Additive latent perturbations for topology changes |
| LEAP (PIM-NoC) | Hardware / LLM acceleration | Layer-wise partition over PIM/NoC; DSE + fine tiling |
| LEAP (3D Vision) | CV / NeRF | Pose-free neural volume with cross-attention lifting |
| LEAP (GNN-Graph ML) | Graph ML | Anchor augmentation + GNN for inductive new nodes |
| LEAP TEE | Trusted mobile execution | Stage-2 MMU sandboxing + dynamic resource mediation |
| LEAP (EPTA) | Pulsar astronomy | Coherent summation of baseband from 5 telescopes |
| LEAP-VO | Visual odometry | Inter-track temporal transformer, uncertainty prediction |
| LEAP-QC | Quantum synthesis | Plateau-detected prefix bands + local re-synthesis |
| LEAP (Cheminformatics) | Molecule scoring | Pre-trained/fine-tuned GPT-2 on route depth |

9. Significance and Broader Impact

Although the LEAP acronym recurs across several unrelated but technically rigorous architectures, the systems share an emphasis on modularity, transferability, and efficient augmentation, whether in latent spaces, physical architectures, or graph topologies. Architectures such as LEAP-net (Donnot et al., 2019) and LEAP for GNN link prediction (Samy et al., 2025) explicitly encode domain-specific perturbations and inductive biases, respectively, providing significant improvements in generalization or expressivity over prior art.

The cross-domain utility of the architectural design patterns present in these instantiations of LEAP—modular feature perturbation, dynamic augmentation, and resource allocation—underscores their value as exemplars for future system and algorithm design in fields ranging from power systems and AI hardware to secure execution, scientific instrumentation, and chemical informatics.
