Space Weather Prediction Testbed
- The Space Weather Prediction Testbed (SWPT) is an operational exercise that evaluates real-time solar energetic particle (SEP) forecasting with advanced physics-based models.
- The methodology integrates adaptive grid strategies and dual-run protocols to balance rapid predictions with high-fidelity outputs critical for human spaceflight.
- Operational implications include interagency collaboration and workflow integration with NOAA and NASA, ensuring timely support for missions such as Artemis II.
The Space Weather Prediction Testbed (SWPT) is a comprehensive operational exercise designed to critically evaluate the real-time forecasting capabilities of advanced physics-based space weather models in a near-operational environment. In May 2025, the SWPT focused on the SOlar wind with FIeld lines and Energetic particles (SOFIE) model developed by the CLEAR Space Weather Center of Excellence, testing its ability to predict solar energetic particle (SEP) fluxes in support of human spaceflight, particularly for Artemis II mission scenarios. The exercise involved live, on-site interagency collaboration at the National Oceanic and Atmospheric Administration’s Space Weather Prediction Center (NOAA/SWPC), with emphasis on computational performance, model accuracy, workflow integration, and forecasting utility in simulated real-time conditions (Liu et al., 12 Nov 2025).
1. Objectives and Organizational Structure
The 2025 SWPT exercise was conducted at NOAA/SWPC (Boulder, CO) with coordination from multiple agencies and research centers:
- Organizers: NOAA/SWPC, NASA’s Community Coordinated Modeling Center (CCMC), Space Radiation Analysis Group (SRAG), Moon-to-Mars Space Weather Analysis Office (M2M SWAO), and the CLEAR Center.
- Participants: SWPC forecasters (flare detection, CME analysis), M2M SWAO analysts (CME parameter extraction), CCMC (model support), SRAG (dose response), CLEAR team (SOFIE operations), and external observers.
Key objectives were:
- Assess SOFIE’s ability to deliver real-time or faster predictions of SEP fluxes relevant to astronaut safety in Orion-class vehicles.
- Evaluate both technical metrics (latency, computational efficiency, predictive accuracy) and operational integration aspects (fit within forecaster and console-operator workflows, analyst coordination).
- Test response using two historical, well-characterized SEP events:
  - 10 September 2017: X8.2 flare and fast CME (plane-of-sky speed ∼2650 km/s).
  - 4 November 2001: X1.0 flare and fast halo CME (∼1925 km/s).
- Practice simulated astronaut-dose computation and response protocols for human spaceflight support.
2. SOFIE Model Architecture and Governing Physics
SOFIE is implemented within the Space Weather Modeling Framework (SWMF) with three primary modules:
- AWSoM-R: Stream-aligned magnetohydrodynamics (MHD), providing dynamic ambient solar wind solutions.
- EEGGL: Gibson–Low flux-rope generator for CME introduction.
- M-FLAMPA: Multiple Field-Line Advection Model for Particle Acceleration, solving the focused transport equation for SEP injection and propagation.
2.1 MHD Physics (AWSoM-R)
- Mass continuity:
  $$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$$
- Momentum equation (includes Lorentz force, gravity):
  $$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \frac{1}{\mu_0} \left( \nabla \times \mathbf{B} \right) \times \mathbf{B} - \frac{G M_\odot \rho}{r^2} \hat{\mathbf{r}}$$
- Energy equation: incorporates Alfvén-wave heating and thermal conduction.
2.2 SEP Transport and Acceleration (M-FLAMPA)
- Focused transport equation on magnetic field lines:
  $$\frac{\partial f}{\partial t} + \mu v \frac{\partial f}{\partial s} + \frac{1 - \mu^2}{2} \frac{v}{L} \frac{\partial f}{\partial \mu} = \frac{\partial}{\partial \mu} \left( D_{\mu\mu} \frac{\partial f}{\partial \mu} \right) + Q,$$
  where $s$ is the field-line distance, $\mu$ the particle pitch-angle cosine, $L$ the focusing length, and $Q$ the shock-driven injection.
- Injection at the CME shock is parameterized as a power law (see the sketch following this list):
  $$f_{\rm inj}(p) = A \left( \frac{p}{p_0} \right)^{-\gamma},$$
  with $A$ a scaling factor, $\gamma$ the spectral index, and $p_0$ a reference momentum.
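The two ingredients above lend themselves to a compact numerical illustration. The following is a minimal sketch, not the SOFIE/M-FLAMPA implementation (which lives inside the SWMF): it computes the focusing length $L = -B/(\partial B/\partial s)$ from a sampled field line and evaluates the power-law injection spectrum. All numeric values are illustrative assumptions.

```python
import numpy as np

def focusing_length(s, B):
    """Focusing length L(s) = -B / (dB/ds) along a sampled field line.

    s : distances along the field line (solar radii)
    B : field magnitudes at those distances
    """
    dB_ds = np.gradient(B, s)          # centered finite differences
    return -B / dB_ds                  # L > 0 where B decreases outward

def injection_spectrum(p, A=1.0, gamma=5.0, p0=1.0):
    """Shock injection f_inj(p) = A * (p / p0)**(-gamma); A, gamma, p0
    as defined in the text (values here are placeholders, not fitted)."""
    return A * (p / p0) ** (-gamma)

# A radial falloff B ~ r^-2 (Parker-like near the Sun) gives L = r/2:
s = np.linspace(1.1, 24.0, 500)        # SC domain span, in R_sun
L = focusing_length(s, s ** -2.0)
print(f"L at 10 R_sun: {L[np.argmin(np.abs(s - 10))]:.2f} (expect ~5.0)")

print(injection_spectrum(np.logspace(-1, 1, 3)))  # spectrum over two decades
```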
3. Simulation Domain and Grid Optimization
The spatial domain uses a two-part approach (a region-classification sketch follows the list):
- Spherical shell (SC): 1.1–24 $R_\odot$
- Cartesian heliosphere (IH): 20–650 $R_\odot$ (with overlap buffer)
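As a quick illustration of the two-domain layout, the sketch below classifies a heliocentric distance by covering domain. The bounds come from the list above; the helper name is hypothetical.

```python
# Radii in solar radii (R_sun); bounds from the text.
SC_RANGE = (1.1, 24.0)    # spherical-shell Solar Corona domain
IH_RANGE = (20.0, 650.0)  # Cartesian Inner Heliosphere domain

def domains_at(r):
    """Return which simulation domains cover heliocentric distance r (in R_sun).

    The 20-24 R_sun overlap buffer lets the IH solution be driven by SC output.
    """
    covering = []
    if SC_RANGE[0] <= r <= SC_RANGE[1]:
        covering.append("SC")
    if IH_RANGE[0] <= r <= IH_RANGE[1]:
        covering.append("IH")
    return covering

print(domains_at(22.0))   # ['SC', 'IH'] -- inside the overlap buffer
print(domains_at(215.0))  # ['IH'] -- roughly 1 AU
```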
To manage computational cost without degrading predictive performance, the grid strategy employed a coarse background mesh with block-adaptive mesh refinement (AMR) concentrated on (a refinement-criterion sketch follows the list):
- The heliospheric current sheet (HCS)
- The Earth-directed cone (within 15° of the Sun-Earth line)
- CME propagation/shock regions
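A minimal sketch of such a refinement criterion is below. The thresholds and the `hcs_distance`/`in_shock_region` inputs are illustrative stand-ins for quantities the MHD solution would supply, not SWMF's actual AMR interface.

```python
import numpy as np

EARTH_CONE_DEG = 15.0  # half-angle about the Sun-Earth line (from the text)

def refine_block(center, hcs_distance, in_shock_region,
                 earth_dir=np.array([1.0, 0.0, 0.0])):
    """Refinement decision for one mesh block.

    center          : block-center position, Cartesian with the Sun at the origin
    hcs_distance    : distance to the heliospheric current sheet, in cell widths
                      (hypothetical precomputed field from the MHD solution)
    in_shock_region : flag from a CME/shock tracker (hypothetical input)
    """
    r = np.linalg.norm(center)
    in_earth_cone = np.dot(center, earth_dir) / r >= np.cos(np.radians(EARTH_CONE_DEG))
    near_hcs = hcs_distance < 2.0      # within ~2 cells of the HCS (illustrative)
    return near_hcs or in_earth_cone or in_shock_region

# A block 5.7 degrees off the Sun-Earth line falls inside the 15-degree cone:
print(refine_block(np.array([10.0, 1.0, 0.0]), hcs_distance=5.0, in_shock_region=False))
```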
Three domain setups were evaluated:
- Setup 1: Fine default grid (default angular resolution) with AMR.
- Setup 2: Coarsened background grid (angular cells of ∼1.4° and larger), retaining AMR.
- Setup 3: Same coarse background as Setup 2, but with default (fine) resolution in AMR-refined regions.
The corresponding grid cost scaling leverages the AMR approach:
$$N_{\rm cells} \propto \frac{1 - f_r}{\Delta_{\rm bg}^{3}} + \frac{f_r}{\Delta_{\rm fine}^{3}},$$
where $\Delta_{\rm bg}$ is the background spacing, $\Delta_{\rm fine}$ the fine spacing, and $f_r$ the refined volume fraction. Halving the background resolution (doubling $\Delta_{\rm bg}$) cuts the dominant background term by a factor of $2^3 = 8$ for the SC.
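Plugging illustrative numbers into this scaling shows how coarsening the background dominates the savings when the refined fraction is small. The values below are assumptions for illustration, not the paper's grid parameters.

```python
def relative_cell_count(delta_bg, delta_fine, f_refined):
    """Cell count (up to a constant volume factor) for a mesh with a coarse
    background and a refined fraction: N ~ (1 - f)/d_bg**3 + f/d_fine**3."""
    return (1.0 - f_refined) / delta_bg**3 + f_refined / delta_fine**3

# Illustrative: refine 5% of the volume at twice the background resolution,
# then coarsen the background by 2x while keeping the refined regions fixed.
fine = relative_cell_count(delta_bg=1.0, delta_fine=0.5, f_refined=0.05)
coarse = relative_cell_count(delta_bg=2.0, delta_fine=0.5, f_refined=0.05)
print(f"cost ratio coarse/fine: {coarse / fine:.2f}")  # ~0.38 here
```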
4. Technical Performance and Accuracy Results
SOFIE simulations were run on the NASA Pleiades supercomputer with 1,000 CPU cores (25 Cascade Lake nodes).
4.1 Wall-Clock vs. Simulated Time
- Setup 2 completed 4-day SEP predictions in 4.86 hours; it achieved real-time catchup within 1.19 hours after CME input (latency ≈0.19 hr).
- Setup 3 required 18.6 hours to complete the run (catchup in 4.11 hr).
- Setup 1 (highest resolution) took 21.1 hours (catchup in 10.87 hr).
- All three configurations ran faster than the real event duration, with Setup 2 providing the largest margin, which is critical for operational forecasting (see the worked check below).
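These timings imply the following real-time margins, assuming the 4-day (96 hr) prediction window quoted above; the runtimes and catch-up times are taken from the list.

```python
SIM_WINDOW_HR = 96.0  # 4-day SEP prediction window

for setup, wall_hr, catchup_hr in [(1, 21.1, 10.87), (2, 4.86, 1.19), (3, 18.6, 4.11)]:
    speedup = SIM_WINDOW_HR / wall_hr          # simulated hours per wall-clock hour
    print(f"Setup {setup}: {speedup:4.1f}x real time, "
          f"catches up {catchup_hr:.2f} hr after CME input")
```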
4.2 Predictive Accuracy
Accuracy Metrics (4 November 2001 Event):
| Energy Channel | Setup | Spearman ρ | % within 10× obs. | Median log-error | Total Runtime (hr) |
|---|---|---|---|---|---|
| >10 MeV | 1 | 0.93 | 99.9% | −0.18 | 21.1 |
| >10 MeV | 2 | 0.84 | 92.7% | +0.05 | 4.86 |
| >10 MeV | 3 | 0.87 | 84.7% | −0.04 | 18.6 |
| >100 MeV | 1 | 0.92 | 99.6% | +0.34 | 21.1 |
| >100 MeV | 2 | 0.56 | 92.1% | +0.13 | 4.86 |
| >100 MeV | 3 | 0.57 | 97.8% | +0.21 | 18.6 |
Setup 1 maximizes correlation and minimizes bias, but with greater latency. Setup 2 trades a modest loss in accuracy (while retaining >92% order-of-magnitude agreement) for much faster wall-clock performance, meeting operational needs for rapid guidance.
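The table's metrics can be reproduced from paired observed/predicted flux series. The sketch below implements the metric definitions as stated (Spearman rank correlation, fraction within one decade of observation, median log10 error), run here on synthetic data rather than the exercise's time series.

```python
import numpy as np
from scipy.stats import spearmanr

def sep_accuracy_metrics(observed, predicted):
    """Return (Spearman rho, fraction within 10x of obs., median log10 error)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    log_err = np.log10(pred) - np.log10(obs)   # signed error in decades
    rho, _ = spearmanr(obs, pred)
    within_10x = np.mean(np.abs(log_err) <= 1.0)   # within one decade
    return rho, within_10x, np.median(log_err)

# Synthetic illustration only: a prediction with mild bias and scatter.
rng = np.random.default_rng(0)
obs = 10.0 ** rng.uniform(0, 4, 200)               # fluxes over four decades
pred = obs * 10.0 ** rng.normal(0.05, 0.3, 200)    # +0.05 dex bias, 0.3 dex scatter
print(sep_accuracy_metrics(obs, pred))
```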
A plausible implication is that adaptive grid strategies allow physics-based SEP models to meet both latency and accuracy requirements, with Setup 2 providing timely forecaster support and Setup 1 providing refined guidance.
5. Integration with Operational Workflows and Model Refinement
Explicit interactive feedback was solicited throughout the exercise with operational forecasters, console operators, and analysts. Key workflow adaptations included:
- “Fast-first, accurate-later” dual-run protocol (sketched after this list): Launch Setup 2 immediately upon CME detection for rapid, order-of-magnitude predictions (within 5 hr), while Setup 1 runs in parallel, providing higher-fidelity forecasts for mission planning (completed in 21 hr).
- Hybrid grid proposal (Setup 3): Coarse background grid but full-resolution AMR in critical regions (HCS and CME-Earth cone) to balance time-to-solution and forecast fidelity.
- Daily precomputation of the ambient solar wind background; only a CME launch triggers a SOFIE restart, keeping operational latency to about 1 hr.
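A schematic of the dual-run protocol follows, with hypothetical stubs in place of actual job submission; SOFIE runs are HPC batch jobs, not Python threads, so this only illustrates the control flow.

```python
import concurrent.futures

def run_sofie(setup, cme_params):
    """Stand-in for a SOFIE job submission (hypothetical stub).

    Operationally this would restart SOFIE from the precomputed ambient
    solar wind background with the given grid setup.
    """
    return f"Setup {setup} SEP forecast for CME at {cme_params['t0']}"

def on_cme_detection(cme_params):
    """Fast-first, accurate-later: launch Setup 2 (fast) and Setup 1 (fine) together.

    Per the text, Setup 2 guidance arrives within ~5 hr and Setup 1 refines
    it by ~21 hr; here both "runs" return immediately.
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        fast = pool.submit(run_sofie, 2, cme_params)
        fine = pool.submit(run_sofie, 1, cme_params)
        print("rapid guidance:  ", fast.result())
        print("refined guidance:", fine.result())

on_cme_detection({"t0": "2017-09-10T16:00Z"})  # illustrative timestamp
```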
This collaborative model-operations loop strengthened procedural realism and responsiveness, aligning advanced model outputs with practical human spaceflight support needs.
6. Implications for Forecasting, Future Exercises, and Model Evolution
The SWPT exercise demonstrated that physics-based SEP prediction models, traditionally seen as too computationally expensive for operations, can achieve real-time (even faster than real-time) and accurate forecasts when optimized with adaptive gridding and workflow integration.
Significant implications include:
- The “fast-first, accurate-later” protocol ensures rapid situational awareness followed by high-fidelity mission support, mitigating operational risk for human exploration (e.g., Artemis II).
- Model physics refinements are encouraged, such as replacing ad-hoc SEP injection with physically motivated suprathermal seed populations and empirically tuned mean free path estimates.
- Future SWPTs will likely incorporate ensemble (multi-event, multi-model) runs, include more realistic background solar wind (with prior CMEs), and enable validation with multi-spacecraft data (e.g., Parker Solar Probe, Solar Orbiter).
- Pipeline integration (sketched below): seamless transitions from automated CME detection → real-time SOFIE simulation → product dissemination to SWPC/SRAG → iterative feedback to modelers.
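The envisioned pipeline might be wired together as below. Every function is a hypothetical stub standing in for a real subsystem (detection feeds, SOFIE job control, SWPC/SRAG product channels); only the sequencing reflects the list above.

```python
def extract_cme_parameters(detection):
    """CME parameter extraction (stub for the M2M SWAO analyst step)."""
    return {"speed_km_s": detection["speed_km_s"], "direction": detection["direction"]}

def run_sofie_realtime(params):
    """Real-time SOFIE run (stub; in practice an HPC batch submission)."""
    return {"cme": params, "sep_flux": "forecast products"}

def disseminate(forecast, recipients):
    """Product dissemination to operational centers (stub)."""
    for r in recipients:
        print(f"forecast sent to {r}")

def pipeline(detection):
    """Automated CME detection -> SOFIE -> dissemination; feedback closes the loop."""
    forecast = run_sofie_realtime(extract_cme_parameters(detection))
    disseminate(forecast, recipients=["SWPC", "SRAG"])
    return forecast  # modeler feedback would iterate on this product

pipeline({"speed_km_s": 2650, "direction": "Earth-directed"})
```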
A plausible implication is that operational adoption of these approaches will set a precedent for the broader use of computationally intensive, physically rigorous space weather models in mission-critical forecasting pipelines.