DPNet: Doppler Planning Network
- DPNet is a model-based learning framework that integrates Doppler LiDAR, neural Kalman filtering, and MPC for precise, real-time dynamic obstacle tracking.
- The system employs D-KalmanNet, an adaptive neural Kalman filter that learns measurement gains to mitigate noise and accommodate rapid, non-constant-acceleration behavior.
- DPNet shows significant performance gains, with up to 12 dB NMSE improvement over traditional filters on both highway and urban benchmarks.
Doppler Planning Network (DPNet) is a model-based learning framework for real-time motion planning in highly dynamic environments that integrates Doppler LiDAR data into both obstacle tracking and ego-motion planning. DPNet addresses the limitations of prior methods in handling rapid obstacle motion by leveraging Doppler-derived velocity information, a custom neural Kalman filtering architecture (D-KalmanNet), and a Doppler-tuned model predictive control (DT-MPC) module. DPNet attains both high operating frequency and high tracking accuracy with minimal data requirements, as demonstrated on simulated and real-world datasets (Zuo et al., 29 Nov 2025).
1. Doppler LiDAR Integration for Dynamic Obstacle Tracking
At the core of DPNet is the use of Doppler LiDAR, which augments traditional ranging measurements with instantaneous pointwise velocity estimates. For each obstacle $i$, DPNet constructs a discrete-time Gaussian state-space model (GSSM) that encodes planar position, velocity, and acceleration using a constant-acceleration motion model. The 6-dimensional state at time $k$ is

$$\mathbf{x}_k^i = \big[\, p_{x,k}^i,\ p_{y,k}^i,\ v_{x,k}^i,\ v_{y,k}^i,\ a_{x,k}^i,\ a_{y,k}^i \,\big]^\top \in \mathbb{R}^6 .$$

Here $\mathbf{p}_k^i = (p_{x,k}^i, p_{y,k}^i)$ is the central position of obstacle $i$; its heading, speed, and acceleration magnitude follow as $\theta_k^i = \operatorname{atan2}(v_{y,k}^i, v_{x,k}^i)$, $v_k^i = \|\mathbf{v}_k^i\|_2$, and $a_k^i = \|\mathbf{a}_k^i\|_2$. The transition matrix is structured to enforce constant-acceleration kinematics with sampling interval $\Delta t$ and process noise $\mathbf{w}_k^i \sim \mathcal{N}(\mathbf{0}, \mathbf{Q})$:

$$\mathbf{x}_{k+1}^i = \mathbf{F}\,\mathbf{x}_k^i + \mathbf{w}_k^i, \qquad \mathbf{F} = \begin{bmatrix} \mathbf{I}_2 & \Delta t\,\mathbf{I}_2 & \tfrac{1}{2}\Delta t^2\,\mathbf{I}_2 \\ \mathbf{0} & \mathbf{I}_2 & \Delta t\,\mathbf{I}_2 \\ \mathbf{0} & \mathbf{0} & \mathbf{I}_2 \end{bmatrix}.$$
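As a concrete reference, the following sketch builds the constant-acceleration transition matrix under the Cartesian state ordering assumed above; the paper's exact ordering and noise parameterization may differ:

```python
import numpy as np

def ca_transition(dt: float) -> np.ndarray:
    """Constant-acceleration transition matrix F for the planar state
    [p_x, p_y, v_x, v_y, a_x, a_y] (ordering assumed, not confirmed)."""
    I2, Z2 = np.eye(2), np.zeros((2, 2))
    return np.block([
        [I2, dt * I2, 0.5 * dt**2 * I2],  # position integrates velocity and acceleration
        [Z2, I2,      dt * I2],           # velocity integrates acceleration
        [Z2, Z2,      I2],                # acceleration held constant
    ])

F = ca_transition(dt=0.1)   # 10 Hz sampling, as in AevaScenes
x_k = np.zeros(6)           # current state estimate
w_k = np.zeros(6)           # process noise draw, w_k ~ N(0, Q)
x_next = F @ x_k + w_k      # one-step propagation
```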
The observation vector $\mathbf{y}_k^i \in \mathbb{R}^4$ contains planar position and velocity, derived from Doppler-fused LiDAR point clusters. Each Doppler reading from point $j$ is projected into the tracking plane and aggregated to yield the obstacle's velocity.
2. D-KalmanNet: Neural Kalman Filtering with Learned Gain
DPNet introduces D-KalmanNet, a neural Kalman filter variant that replaces the analytic Kalman gain computation with a learned, adaptive gain matrix. For each obstacle and timestep, the filter executes:
Prediction:

$$\hat{\mathbf{x}}_{k|k-1}^i = \mathbf{F}\,\hat{\mathbf{x}}_{k-1|k-1}^i, \qquad \hat{\mathbf{y}}_{k|k-1}^i = \mathbf{H}\,\hat{\mathbf{x}}_{k|k-1}^i,$$

with $\mathbf{H}$ the observation matrix defined below.

Measurement update (learned gain):

$$\mathbf{K}_k^i = g_\phi\!\big(\hat{\mathbf{x}}_{k|k-1}^i,\ \hat{\mathbf{y}}_{k|k-1}^i,\ \mathbf{y}_k^i\big), \qquad \hat{\mathbf{x}}_{k|k}^i = \hat{\mathbf{x}}_{k|k-1}^i + \mathbf{K}_k^i \big(\mathbf{y}_k^i - \hat{\mathbf{y}}_{k|k-1}^i\big),$$

where $g_\phi$ is the recurrent gain network described next.
The RNN is instantiated as a Gated Recurrent Unit (GRU) with a 64-dimensional hidden state; its input is the concatenation of the predicted state (6D), predicted observation (4D), and measurement (4D), for a 14-dimensional input in total. The GRU output is processed by an MLP (64→32→24) with ReLU activations, whose 24 outputs are reshaped into the $6 \times 4$ gain matrix $\mathbf{K}_k^i$.
This learned gain enables robust compensation for model–data mismatch, for rapid non-constant-acceleration maneuvers, and for partial observability arising from real-world noise.
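A minimal PyTorch sketch of the gain network and one filter recursion follows; the dimensions (14-D input, 64-D GRU hidden state, 64→32→24 MLP, 6×4 gain) come from the text, while all names and the exact interface are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LearnedGain(nn.Module):
    """GRU + MLP mapping (predicted state, predicted observation,
    measurement) to a 6x4 Kalman gain; names are illustrative."""
    def __init__(self, state_dim=6, obs_dim=4, hidden=64):
        super().__init__()
        self.state_dim, self.obs_dim = state_dim, obs_dim
        self.gru = nn.GRUCell(state_dim + 2 * obs_dim, hidden)   # 14 -> 64
        self.mlp = nn.Sequential(
            nn.Linear(hidden, 32), nn.ReLU(),
            nn.Linear(32, state_dim * obs_dim),                  # 32 -> 24
        )

    def forward(self, x_pred, y_pred, y_meas, h):
        h = self.gru(torch.cat([x_pred, y_pred, y_meas], dim=-1), h)
        K = self.mlp(h).view(-1, self.state_dim, self.obs_dim)   # reshape to 6x4
        return K, h

def dkalman_step(x_prev, h, y_meas, F, H, gain_net):
    """One D-KalmanNet recursion: model-based predict, learned-gain update."""
    x_pred = x_prev @ F.T                        # predict state through the CA model
    y_pred = x_pred @ H.T                        # predict observation
    K, h = gain_net(x_pred, y_pred, y_meas, h)   # learned gain
    innov = (y_meas - y_pred).unsqueeze(-1)      # innovation, shape (B, 4, 1)
    x_post = x_pred + (K @ innov).squeeze(-1)    # learned-gain correction
    return x_post, h
```

In use, the hidden state `h` is initialized to zeros and carried across timesteps, letting the gain adapt to the recent innovation history rather than a fixed noise model.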
3. Doppler-Based Velocity Rectification and Measurement Fusion
DPNet incorporates a Doppler-based velocity rectification stage to address the high variability of pointwise radial velocities. For each obstacle, LiDAR returns are clustered, each Doppler reading is projected onto the planar velocity components using known geometry, and a noise-reducing average is taken:

$$\hat{\mathbf{v}}_k^i = \frac{1}{|\mathcal{P}_i|} \sum_{j \in \mathcal{P}_i} \mathbf{g}(\phi_j)\, v_{r,j},$$

where $\mathcal{P}_i$ indexes the LiDAR points belonging to obstacle $i$, $v_{r,j}$ is the radial Doppler velocity of point $j$, and $\mathbf{g}(\phi_j)$ encodes the incidence angle of the return. This fusion substantially reduces radial-velocity noise and facilitates more accurate downstream filtering.
The measurement model thus fuses position and planar velocity,

$$\mathbf{y}_k^i = \mathbf{H}\,\mathbf{x}_k^i + \mathbf{r}_k^i, \qquad \mathbf{H} = \begin{bmatrix} \mathbf{I}_4 & \mathbf{0}_{4\times 2} \end{bmatrix}, \qquad \mathbf{r}_k^i \sim \mathcal{N}(\mathbf{0}, \mathbf{R}),$$

with $\mathbf{H}$ projecting the 6-dimensional state into the observable 4 dimensions.
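The projection-and-aggregation step can be instantiated, for example, as a least-squares fit of a single planar velocity to the per-point radial constraints: a sketch under a rigid-translation assumption, not a reproduction of the paper's exact incidence-angle weighting $\mathbf{g}(\phi_j)$:

```python
import numpy as np

def rectify_velocity(points: np.ndarray, radial_vels: np.ndarray,
                     sensor_origin: np.ndarray = np.zeros(2)) -> np.ndarray:
    """Fuse per-point radial Doppler readings into one planar velocity.

    points:      (N, 2) planar coordinates of the obstacle's LiDAR returns
    radial_vels: (N,)   radial Doppler velocity of each return
    """
    rays = points - sensor_origin
    dirs = rays / np.linalg.norm(rays, axis=1, keepdims=True)  # unit lines of sight
    # Each return constrains dirs[j] . v = radial_vels[j]; solving the
    # overdetermined system averages out per-point Doppler noise.
    v_hat, *_ = np.linalg.lstsq(dirs, radial_vels, rcond=None)
    return v_hat  # planar velocity estimate for the obstacle
```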
4. Model Predictive Planning: Doppler-Tuned MPC
Using the real-time obstacle motion tracks generated by D-KalmanNet, DPNet employs a Doppler-tuned model predictive control (DT-MPC) module. The predicted obstacle trajectories serve as input to the planner, which tunes MPC parameters online; this adaptation to environmental volatility sharpens the agent's reactivity to rapid, unpredictable obstacle motion (a hypothetical tuning heuristic is sketched below). The lightweight, data-efficient architecture keeps the planning frequency high under computation and memory constraints.
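As a purely hypothetical illustration of runtime parameter tuning (the actual DT-MPC rule is not reproduced here), the sketch below scales MPC cost weights with the volatility of the predicted obstacle tracks:

```python
import numpy as np

def tune_mpc_weights(pred_tracks: np.ndarray,
                     base_avoid: float = 1.0,
                     base_ctrl: float = 0.1) -> tuple[float, float]:
    """Hypothetical heuristic: scale MPC cost weights by the volatility of
    predicted obstacle tracks of shape (num_obstacles, horizon, 2).
    This is NOT the paper's DT-MPC rule, only an illustration."""
    steps = np.diff(pred_tracks, axis=1)             # per-step displacements
    speeds = np.linalg.norm(steps, axis=-1)          # per-step speeds
    volatility = speeds.std() + speeds.max()         # crude volatility score
    avoid_weight = base_avoid * (1.0 + volatility)   # penalize proximity harder
    control_weight = base_ctrl / (1.0 + volatility)  # permit sharper maneuvers
    return avoid_weight, control_weight
```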
5. Training Methodology and Empirical Results
D-KalmanNet is trained on AevaScenes real-world Doppler LiDAR sequences with ground-truth trajectories. For each training minibatch covering a window of $T$ future steps, the loss minimizes position (and optionally velocity) prediction errors:

$$\mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \Big( \big\| \hat{\mathbf{p}}_{k+t} - \mathbf{p}_{k+t} \big\|_2^2 + \lambda\, \big\| \mathbf{K}_{k+t} \big\|_F^2 \Big),$$

where $\lambda \|\mathbf{K}\|_F^2$ is an $\ell_2$ regularizer on the learned gain. Adam (with weight decay) is used for optimization, training for 2000 epochs with batch size 32, gradient clipping at norm 1.0, and early stopping based on validation MSE.
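A sketch of one training step under the recipe above; the model interface (returning the posterior state, GRU hidden state, and gain) and the regularization weight `lam` are assumptions, not the authors' code:

```python
import torch

def train_step(model, batch, optimizer, lam=1e-4, clip=1.0):
    """One D-KalmanNet training step (sketch; `lam` is an assumed value).

    batch: (y_meas, p_true) with shapes (B, T, 4) and (B, T, 2) -- a window
    of T future measurements and ground-truth positions."""
    y_meas, p_true = batch
    B, T, _ = y_meas.shape
    h = torch.zeros(B, 64)                        # GRU hidden state
    x = torch.zeros(B, 6)                         # initial state estimate
    loss = torch.zeros(())
    for t in range(T):
        x, h, K = model(x, h, y_meas[:, t])       # predict + learned-gain update
        # Position assumed to occupy the first two state components.
        pos_err = ((x[:, :2] - p_true[:, t]) ** 2).sum(-1).mean()
        gain_reg = (K ** 2).sum((-2, -1)).mean()  # l2 penalty on the gain
        loss = loss + pos_err + lam * gain_reg
    loss = loss / T
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip)  # norm 1.0, per text
    optimizer.step()
    return loss.item()
```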
On the AevaScenes dataset (10 Hz sampling, $\Delta t = 0.1\,\mathrm{s}$), D-KalmanNet improves NMSE by approximately 12 dB over Doppler-aided Kalman filter baselines on both highway and city sequences. The system maintains its advantage at lower observation rates and longer prediction horizons, and per-step NMSE reveals especially large performance gaps for near-term forecasts, with the widest step-1 margin over a baseline KF observed in city scenes. Inference runs at 100 Hz for a single obstacle and above 15 Hz for 10 obstacles on Jetson Orin NX hardware (107 MB GPU memory).
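For reference, the metric quoted above is NMSE in decibels; assuming the standard definition:

```python
import numpy as np

def nmse_db(x_hat: np.ndarray, x_true: np.ndarray) -> float:
    """Normalized MSE in dB (standard definition assumed):
    10 * log10(||x_hat - x_true||^2 / ||x_true||^2)."""
    err = np.sum((x_hat - x_true) ** 2)
    ref = np.sum(x_true ** 2)
    return float(10.0 * np.log10(err / ref))
```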
6. Architectural and Algorithmic Innovations
DPNet advances standard neural Kalman filtering by tightly integrating Doppler-specific rectification, partial-observability handling, and a high-dimensional learned gain. Notable departures from prior filtering strategies include:
- Doppler-based aggregation of noisy radial velocities into a single, less noisy planar velocity per obstacle, enabling robust and accurate measurement updates.
- Explicit augmentation of the state-space to encompass heading uncertainty, velocity, and acceleration (rather than pure position-velocity).
- RNN-based learning of the Kalman gain $\mathbf{K}_k^i$, which adapts to rapid, non-constant-acceleration behaviors that lie outside model assumptions.
- Lightweight design supporting end-to-end training and real-time inference on embedded platforms.
This combination enables DPNet to outperform fixed-model Kalman filtering and other learning-based baselines in both static and rapidly evolving environments (Zuo et al., 29 Nov 2025).
7. Benchmarking and Broader Impact
DPNet’s real-time tracking and planning capabilities are substantiated by extensive benchmarks on simulation and real-world data. Empirical superiority is established not just in low-level trajectory following but in system-level planning tasks involving rapid, unpredictable dynamic obstacles. The architecture demonstrates scalability to multiple obstacles with modest computational footprint and improved generalization over task-specific learning approaches.
A plausible implication is that DPNet, by harmonizing physically grounded motion models with learnable adaptation from high-framerate Doppler signals, forms a blueprint for next-generation autonomous agents navigating complex, dynamic scenes. The modular design is amenable to further extension, such as incorporating richer sensor modalities or more expressive control policies.