LEAP Architecture: Multi-Domain Frameworks
- "LEAP" names a set of structurally unrelated, domain-specific architectures spanning neural networks, hardware accelerators, 3D vision, graph machine learning, secure execution environments, and scientific instrumentation.
- Each system introduces its own methodology, such as latent perturbation, PIM-NoC partitioning, or pose-free 3D modeling, tailored to its domain.
- These frameworks report substantial gains in throughput, energy efficiency, or accuracy relative to prior work in their respective fields.
The term "LEAP Architecture" encompasses a number of notable, structurally distinct systems and architectural frameworks across machine learning, computer architecture, computer vision, quantum circuit synthesis, secure execution, and scientific instrumentation. The following article surveys the technical core of these architectures as originally described in peer-reviewed or archival sources, referencing the specific system acronym expansion or definition in each context.
1. Neural Network Architectures: Latent Encoding of Atypical Perturbation (LEAP-net)
LEAP-net is a neural network architecture designed for modeling curative perturbations in power transmission grids under topological changes (Donnot et al., 2019). The grid is represented as an undirected graph $G = (V, E)$, where each node carries an injection (generation or load) and each edge is a high-voltage line. A topology vector $\tau$ encodes reconfiguration events.
The LEAP-net mapping proceeds as follows:
- An encoder $E$ projects the injection vector $x$ to a latent state $h = E(x)$.
- A latent-perturbation module receives $h$ and applies a two-stage process using subnetworks $e$ and $d$:
- $z = \tau \odot e(h)$, where $\tau$ is the topology vector and $\odot$ denotes element-wise multiplication.
- $h' = h + d(z)$, where $d$ maps back to the latent dimension.
- The perturbed latent state $h'$ is decoded by $D$ to predicted line flows $\hat{y} = D(h')$.
The network is trained via mean squared error on power-flow simulation triples $(x, \tau, y)$, with an explicit transfer-learning protocol: train on a "reference" topology and all possible "unary" (single-change) topologies, then test on previously unseen combinations ("super-generalization") with no parameter adaptation. A minimal sketch of the mapping is given below.
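To make the mapping concrete, here is a minimal PyTorch sketch of the LEAP-net forward pass, $\hat{y} = D(h + d(\tau \odot e(h)))$ with $h = E(x)$. Layer widths, activations, and the toy data are illustrative assumptions, not the paper's hyperparameters.

```python
# Minimal sketch of LEAP-net: y_hat = D(h + d(tau * e(h))), h = E(x).
# Layer sizes/activations are illustrative, not Donnot et al.'s settings.
import torch
import torch.nn as nn

class LEAPNet(nn.Module):
    def __init__(self, n_inj: int, n_tau: int, n_flow: int, latent: int = 128):
        super().__init__()
        self.E = nn.Sequential(nn.Linear(n_inj, latent), nn.ReLU(),
                               nn.Linear(latent, latent))        # encoder
        self.e = nn.Linear(latent, n_tau)                        # latent -> tau space
        self.d = nn.Linear(n_tau, latent)                        # tau space -> latent
        self.D = nn.Sequential(nn.Linear(latent, latent), nn.ReLU(),
                               nn.Linear(latent, n_flow))        # decoder

    def forward(self, x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        h = self.E(x)                  # latent state h = E(x)
        z = tau * self.e(h)            # element-wise gating by the topology vector
        return self.D(h + self.d(z))   # additive latent perturbation, then decode

model = LEAPNet(n_inj=54, n_tau=20, n_flow=186)
x = torch.randn(32, 54)                                   # injections
tau = torch.bernoulli(torch.full((32, 20), 0.1))          # sparse topology changes
y_hat = model(x, tau)                                     # predicted line flows
loss = nn.functional.mse_loss(y_hat, torch.randn(32, 186))  # MSE objective
```

Note that $\tau = 0$ recovers the reference mapping exactly, so perturbation directions learned from unary topologies can compose additively at test time, which is what the "super-generalization" protocol exploits.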
2. Accelerator Microarchitectures: LLM Inference on Scalable PIM-NoC Architecture
LEAP denotes a non-von Neumann accelerator integrating Processing-in-Memory (PIM) and a Network-on-Chip (NoC) for LLM inference (Wang et al., 2025). The system employs a two-tier organization:
- Tier 1: PIM arrays adjacent to memory banks (e.g., ReRAM crossbars, SRAM-based compute-in-memory).
- Tier 2: A 2D-mesh NoC interconnects all PIM arrays and conventional accelerators (e.g., systolic arrays for high-dynamicity layers).
The mapping of LLM operations is orchestrated by a partition controller, distinguishing:
- Static operations (large matrix-multiplies) mapped to PIM arrays.
- Dynamic operations (softmax, bias additions, etc.) mapped to NoC-attached IMC engines.
A design-space search refines layer-to-unit mappings to minimize off-chip/on-chip communication and balance memory utilization, formulated as an integer linear program (ILP) with tunable tradeoff parameters. Tile-based pipelining and fine-grained parallelism sustain high throughput. Quantitatively, LEAP is reported to achieve throughput and energy-efficiency improvements of several times (upwards of $3\times$) over A100 GPU baselines. A simplified sketch of the partitioning idea follows.
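The sketch below conveys only the flavor of the partition decision: static, weight-heavy matrix multiplies favor PIM placement, dynamic operations go to NoC-attached engines, and a toy latency model arbitrates. The paper formulates this as an ILP over the whole layer graph; the greedy rule and every constant here are illustrative assumptions.

```python
# Toy partitioner: place static matmuls on PIM unless communication costs
# dominate; route dynamic ops to NoC-attached engines. All constants invented.
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    flops: float        # compute volume of the operation
    bytes_moved: float  # activation traffic incurred if placed off-PIM
    static: bool        # True for fixed-weight matmuls (e.g., QKV/FFN projections)

def assign(ops: list[Op], pim_tput: float = 4.0, noc_tput: float = 1.0,
           link_cost: float = 0.5) -> dict[str, str]:
    """Greedy layer-to-unit mapping minimizing a per-op latency estimate."""
    placement = {}
    for op in ops:
        if not op.static:
            placement[op.name] = "noc-engine"   # softmax, bias adds, etc.
            continue
        cost_pim = op.flops / pim_tput                                # in-place compute
        cost_noc = op.flops / noc_tput + link_cost * op.bytes_moved   # move + compute
        placement[op.name] = "pim" if cost_pim <= cost_noc else "noc-engine"
    return placement

layers = [Op("qkv_proj", 8e9, 2e6, True), Op("softmax", 1e7, 4e5, False),
          Op("ffn_up", 16e9, 4e6, True)]
print(assign(layers))  # {'qkv_proj': 'pim', 'softmax': 'noc-engine', 'ffn_up': 'pim'}
```

An ILP replaces the greedy rule with global constraints (per-array memory capacity, NoC bandwidth), trading per-op optimality for end-to-end latency and balance.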
3. 3D Computer Vision: Liberate Sparse-view 3D Modeling from Camera Poses (LEAP)
LEAP proposes a fully pose-free approach for multi-view 3D modeling, eliminating the reliance on explicit or estimated camera poses (Jiang et al., 2023). The pipeline:
- Encodes each RGB input via a frozen ViT backbone, aggregates multi-view features.
- Holds a learnable voxelwise "neural volume" containing scene-agnostic geometry and appearance priors.
- Lifts 2D features into 3D via feature-similarity-driven cross-attention transformers, followed by self-attention within the volume.
- The decoded volume is projected to a density-feature field for direct volume rendering without requiring pose refinement or 2D–3D reprojection at inference.
LEAP yields state-of-the-art performance for sparse-view 3D reconstruction, substantially outperforming pose-dependent generalizable NeRFs under adverse pose uncertainty, and runs in a single forward pass, markedly faster than methods that rely on per-scene or pose optimization. A minimal sketch of the 2D-to-3D lifting step follows.
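The PyTorch sketch below assumes one cross-attention plus one self-attention block over learnable voxel queries; the actual model stacks several such layers over frozen ViT features, and every dimension here is an illustrative choice.

```python
# Pose-free 2D-to-3D lifting: learnable voxel queries cross-attend to multi-view
# image tokens, then self-attend within the volume. Single-block toy version.
import torch
import torch.nn as nn

class VolumeLifter(nn.Module):
    def __init__(self, dim: int = 256, n_voxel: int = 16 ** 3):
        super().__init__()
        self.volume = nn.Parameter(torch.randn(n_voxel, dim) * 0.02)  # learnable neural volume
        self.cross = nn.MultiheadAttention(dim, 8, batch_first=True)  # 2D -> 3D lifting
        self.intra = nn.MultiheadAttention(dim, 8, batch_first=True)  # within-volume propagation

    def forward(self, img_tokens: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, n_views * n_patches, dim) from a frozen ViT backbone
        q = self.volume.unsqueeze(0).expand(img_tokens.size(0), -1, -1)
        v3d, _ = self.cross(q, img_tokens, img_tokens)  # similarity-driven lifting
        v3d, _ = self.intra(v3d, v3d, v3d)              # propagate within the volume
        return v3d   # (B, n_voxel, dim), later decoded to a density-feature field

feats = torch.randn(2, 5 * 196, 256)    # e.g., 5 views of 14x14 ViT patch tokens
volume = VolumeLifter()(feats)          # then rendered via standard volume rendering
```

Because attention keys come from image features rather than ray geometry, no camera pose enters the computation, which is the crux of the pose-free design.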
4. Graph Machine Learning: Inductive Link Prediction via Learnable Topology Augmentation (LEAP)
In graph machine learning, LEAP (LEArnable toPology augmentation) is a framework for inductive link prediction (Samy et al., 2025), addressing graphs with dynamic node arrival. The architecture comprises:
- Selection of anchor nodes (via degree/PageRank).
- For each new node $u$, an MLP predicts "soft" connection weights $w_{u,a}$ to each anchor $a$.
- The augmented connectivity extends the original adjacency $A$ with these weights, i.e., $\tilde{A}_{u,a} = w_{u,a}$ for every anchor $a$.
- A GNN encoder operates on this topology, supporting message passing between both the original and newly inducted nodes.
- The system is trained with a dual loss: an alignment term fitting the MLP's predicted anchor weights to the observed connectivity, plus a negative-sampled link-prediction loss.
This augmentation methodology improves inductive link prediction by providing in situ topology for new nodes, improving AUC and precision by up to 22% and 17%, respectively, on benchmarks. A toy sketch of the augmentation follows.
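The PyTorch sketch below assumes degree-based anchor selection and a two-layer MLP; names, shapes, and the symmetric write-back are illustrative stand-ins for the paper's exact design.

```python
# Learnable topology augmentation: an MLP maps a new node's features to soft
# edge weights over k anchors, which extend the adjacency for message passing.
import torch
import torch.nn as nn

def top_degree_anchors(adj: torch.Tensor, k: int) -> torch.Tensor:
    return adj.sum(dim=1).topk(k).indices           # k highest-degree nodes

class AnchorMLP(nn.Module):
    def __init__(self, in_dim: int, k: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, k))

    def forward(self, x_new: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x_new))       # soft weights w_{u,a} in (0, 1)

adj = (torch.rand(100, 100) < 0.05).float()         # toy observed graph
adj = ((adj + adj.T) > 0).float()                   # symmetrize
anchors = top_degree_anchors(adj, k=8)

mlp = AnchorMLP(in_dim=32, k=8)
w = mlp(torch.randn(4, 32))                         # 4 new nodes -> anchor weights

# Extend the adjacency: new rows/cols carry predicted soft weights to anchors.
n, m = adj.size(0), w.size(0)
aug = torch.zeros(n + m, n + m)
aug[:n, :n] = adj
aug[n:, anchors] = w                                # new-node -> anchor edges
aug[anchors.unsqueeze(1), torch.arange(n, n + m)] = w.T  # symmetric counterpart
# `aug` now feeds a standard GNN encoder; gradients flow back into the MLP.
```

Because the anchor weights are differentiable, the link-prediction loss trains the MLP to place new nodes where message passing is most informative.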
5. Secure Execution Environments: TrustZone-Based TEE for Mobile Apps (LEAP)
LEAP, in this context, is a scalable TEE framework for ARM TrustZone supporting resource-adaptive, developer-friendly execution of intelligent mobile apps (Sun et al., 2021). The architecture features:
- A normal-world kernel module (pKM) and a small trusted OS (tKM) in the secure world.
- Per-sandbox isolation via TrustZone's stage-2 MMU, handling dynamic allocation and mediation of physical cores, RAM, and peripherals (e.g., GPU) at the page-table level.
- An offline DevOps tool that splits DL apps into protected (sc-pAPP) and normal (pAPP) components, integrating with the host Linux runtime.
- Support for dynamic resource adaptation in response to CPU and memory pressure, automatic device handoff/suspension, and negligible virtualization overhead.
LEAP exhibits substantial speedups relative to prior secure-execution frameworks, with near-native performance on GPU accelerators. The sketch below outlines the resource-adaptation control loop.
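LEAP's mediation itself happens inside kernel components at the stage-2 page-table level; the user-level Python sketch below only outlines the shape of such a resource-adaptation control loop, and every field, threshold, and policy choice in it is invented for illustration.

```python
# Illustrative control loop: shrink a sandbox's core/memory grant under host
# pressure, regrow when idle. Not LEAP's implementation; a policy outline only.
from dataclasses import dataclass

@dataclass
class Sandbox:
    cores: int
    mem_pages: int

def adapt(sb: Sandbox, mem_pressure: float, cpu_load: float,
          min_cores: int = 1, min_pages: int = 1024) -> Sandbox:
    """Toy policy: release resources under pressure, reclaim when the host is idle."""
    if mem_pressure > 0.8 and sb.mem_pages > min_pages:
        sb.mem_pages //= 2        # hand half the pages back to the normal world
    elif mem_pressure < 0.3:
        sb.mem_pages *= 2         # reclaim capacity for the enclave
    if cpu_load > 0.9 and sb.cores > min_cores:
        sb.cores -= 1             # return a physical core to the host
    elif cpu_load < 0.2:
        sb.cores += 1
    return sb

sb = adapt(Sandbox(cores=4, mem_pages=16384), mem_pressure=0.85, cpu_load=0.95)
print(sb)   # Sandbox(cores=3, mem_pages=8192)
```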
6. Scientific Instrumentation: The Large European Array for Pulsars (LEAP)
The Large European Array for Pulsars is an ultra-sensitive tied-array radio telescope formed by coherent baseband addition of five European facilities (Bassa et al., 2015). Each site records dual-polarization, Nyquist-sampled streams, with data transferred for offline phase, delay, and polarization calibration, and final coherent summation. The digital pipeline incorporates:
- Per-site digitization, polyphase filtering, and packetization.
- Centralized software FX correlation, global fringe fitting, and amplitude weighting using per-telescope system noise.
- Coherent voltage summation, yielding an effective aperture equivalent to a single dish of up to 195 m in diameter.
LEAP achieves high coherent-summation efficiency and substantially reduces pulse arrival-time rms relative to single-dish data. A toy example of weighted coherent summation follows.
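The NumPy example below illustrates the principle for a pure tone: delay-corrected complex voltages are summed with weights $w_i \propto 1/\sigma_i^2$. Real LEAP calibration solves for time-variable delays, phases, and polarization leakage, none of which is modeled here; all values are synthetic.

```python
# Toy tied-array summation: phase-align five noisy baseband streams and sum
# them with inverse-variance weights. All delays and noise levels are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, fs, f0 = 4096, 1e6, 1e4                         # samples, sample rate, tone (Hz)
t = np.arange(n) / fs
signal = np.exp(2j * np.pi * f0 * t)               # common astrophysical signal

delays = np.array([0.0, 3e-6, 7e-6, 1e-6, 5e-6])   # per-site geometric delays (s)
sigmas = np.array([1.0, 2.0, 1.5, 3.0, 2.5])       # per-site system noise levels

# Each site sees a delayed copy of the signal plus independent receiver noise.
streams = [np.exp(2j * np.pi * f0 * (t - d)) +
           s * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
           for d, s in zip(delays, sigmas)]

# Calibrate (undo the known phase rotation for a tone) and sum with 1/sigma^2
# weights, mirroring amplitude weighting by per-telescope system noise.
w = 1 / sigmas**2
aligned = [st * np.exp(2j * np.pi * f0 * d) for st, d in zip(streams, delays)]
tied = sum(wi * a for wi, a in zip(w, aligned)) / w.sum()

print(f"recovered amplitude: {abs(np.vdot(signal, tied)) / n:.3f}")  # close to 1.0
```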
7. Other Representative Architectures
- LEAP-VO (Visual Odometry): Proposes a long-term, anchor-augmented, temporally probabilistic point tracking module as the front-end for robust monocular visual odometry, with explicit uncertainty estimation and inter-track transformer refinement (Chen et al., 2024).
- LEAP (Quantum Circuit Synthesis): Introduces iterative, prefix-based search, incremental local re-optimization, and dimensionality reduction over A*-guided circuit search, scaling numerical synthesis from four to six qubits (Smith et al., 2021); a schematic sketch of the prefix-fixing loop follows this list.
- LEAP (Molecular Synthesisability Scoring): GPT-2-based architecture for route-depth regression from SMILES input, dynamically integrating intermediate-conditioned synthesis accessibility for drug design (Calvi et al., 2024).
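The schematic sketch referenced in the quantum-synthesis bullet above conveys only the control flow of prefix fixing: grow a circuit by bounded search, detect a plateau in the distance to the target unitary, and freeze the best candidate as a prefix before continuing. The single-qubit gate set and brute-force inner search are toy stand-ins for QSearch's numerically instantiated parameterized layers.

```python
# Prefix-fixing synthesis, toy version: bounded suffix search + plateau check.
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def dist(u: np.ndarray, v: np.ndarray) -> float:
    """Phase-invariant Hilbert-Schmidt distance between 2x2 unitaries."""
    return 1 - abs(np.trace(u.conj().T @ v)) / 2

def best_suffix(prefix: np.ndarray, target: np.ndarray, depth: int):
    """Brute-force the best depth-limited gate suffix appended to the prefix."""
    best = (dist(prefix, target), ())
    for seq in product(GATES, repeat=depth):
        u = prefix.copy()
        for g in seq:
            u = GATES[g] @ u
        best = min(best, (dist(u, target), seq))
    return best

target = T @ H @ T @ H @ T              # unitary to (re-)synthesize
prefix, circuit, last = np.eye(2), [], 1.0
for _ in range(4):                      # outer LEAP-style loop
    d, seq = best_suffix(prefix, target, depth=2)
    if d >= last - 1e-9:                # plateau detected -> stop (or re-optimize)
        break
    for g in seq:
        prefix = GATES[g] @ prefix      # freeze the found suffix into the prefix
    circuit += list(seq)
    last = d
print(circuit, f"distance={last:.2e}")
```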
8. Comparative Summary Table
| System | Domain | Architectural Principle / Key Feature |
|---|---|---|
| LEAP-net | Power grid ML | Additive latent perturbations for topology changes |
| LEAP (PIM-NoC) | Hardware/LLM Accel | Layer-wise partition over PIM/NoC; DSE + fine tiling |
| LEAP (3D Vision) | CV/NeRF | Pose-free neural volume w/ cross-attention lifting |
| LEAP (GNN-Graph ML) | Graph ML | Anchor augmentation + GNN for inductive new nodes |
| LEAP TEE | Trusted Mobile Exec | Stage-2 MMU sandboxing + dynamic resource mediation |
| LEAP (EPTA) | Pulsar Astronomy | Coherent summation of baseband from five telescopes |
| LEAP-VO | Visual Odometry | Inter-track temporal transformer, uncertainty pred. |
| LEAP-QC | Quantum Synthesis | Plateau-detected prefix bands + local re-synthesis |
| LEAP (Cheminformatics) | Molecule Scoring | Pre-trained/fine-tuned GPT-2 on route-depth |
9. Significance and Broader Impact
The recurrence of the LEAP acronym across several unrelated but technically rigorous architectures is coincidental, yet the systems share an emphasis on modularity, transferability, and efficient augmentation, whether in latent spaces, physical architectures, or graph topologies. Architectures such as LEAP-net (Donnot et al., 2019) and LEAP for inductive link prediction (Samy et al., 2025) explicitly encode domain-specific perturbations and inductive biases, respectively, providing significant improvements in generalization or expressivity over prior art.
The cross-domain utility of the design patterns present in these instantiations of LEAP (modular feature perturbation, dynamic augmentation, and adaptive resource allocation) underscores their value as exemplars for future system and algorithm design in fields ranging from power systems and AI hardware to secure execution, scientific instrumentation, and chemical informatics.