DDS & LET Middleware
- DDS+LET middleware is a deterministic, real-time communication system that integrates decentralized DDS publish–subscribe messaging with Logical Execution Time for reproducible operations.
- The four-layer architecture spans from high-level controllers to low-level actuators, ensuring synchronized sensor fusion, trajectory planning, and actuation across heterogeneous nodes.
- Empirical results show sub-100 ms latency and low jitter, confirming the middleware's hard real-time guarantees and scalability under dynamic network conditions.
Middleware systems integrating Data Distribution Service (DDS) with Logical Execution Time (LET) present a paradigm for achieving deterministic, real-time communication in distributed cyber-physical environments. In the context of the Cyber-Physical Mobility Lab (CPM Lab), this architectural approach underpins a four-layered vehicular control stack, providing both horizontally scalable communications and globally synchronized execution across heterogeneous nodes. The design merges DDS's decentralized, low-latency publish–subscribe messaging and dynamic discovery with LET semantics, producing a system capable of bit-identical experiment reproducibility and hard real-time guarantees under dynamic network conditions (Kloock et al., 2020).
1. Four-Layer Middleware Architecture
The CPM Lab’s control stack is organized into four logical layers:
- High-Level Controllers (HLC): Executed on Intel NUC hosts, responsible for centralized trajectory generation and networked decision logic.
- DDS-Based Middleware: Deployed both on NUCs and per-vehicle Raspberry Pis, this layer ensures message transport, topic-based discovery, time synchronization, and LET enforcement.
- Mid-Level Controllers (MLC): Resident on each vehicle’s Raspberry Pi, these modules fuse sensor data, generate local decisions, and translate trajectories to control primitives.
- Low-Level Controllers (LLC): Implemented on ATmega2560 microcontrollers, these units effectuate direct actuation via torque and steering commands.
All controllers interact via DDS Topics. Example topics include "FusedPose" (sensor-fusion outputs), "Trajectory[n]" (HLC-to-MLC setpoints), "ControlCmd[n]" (MLC-to-LLC actuation primitives), and "Heartbeat" (clock synchronization and liveness). The system enforces fine-grained QoS profiles per topic; control streams, for example, employ RELIABLE reliability with a DEADLINE matching the controller period and a small LATENCY_BUDGET (e.g., 5 ms) (Kloock et al., 2020).
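A per-topic QoS profile of this kind might look as follows in DDS XML. This is a sketch in the RTI Connext-style XML schema; the library and profile names are illustrative, not the CPM Lab's actual configuration:

```xml
<dds>
  <qos_library name="CpmQosLibrary">
    <qos_profile name="ControlStream">
      <datawriter_qos>
        <!-- Lossless delivery for control setpoints -->
        <reliability>
          <kind>RELIABLE_RELIABILITY_QOS</kind>
        </reliability>
        <!-- DEADLINE matching a 100 ms controller period -->
        <deadline>
          <period><sec>0</sec><nanosec>100000000</nanosec></period>
        </deadline>
        <!-- Small latency budget (5 ms) for control streams -->
        <latency_budget>
          <duration><sec>0</sec><nanosec>5000000</nanosec></duration>
        </latency_budget>
      </datawriter_qos>
    </qos_profile>
  </qos_library>
</dds>
```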
2. Dynamic Participation through DDS
DDS’s intrinsic participant discovery supports seamless adaptation to dynamic memberships. When an MLC—physical or simulated—joins the DDS Domain, it:
- Announces its presence,
- Matches existing readers/writers by topic and datatype,
- Begins participating immediately, without static configuration.
For HLCs, vehicle presence is inferred automatically by querying matched DataReaders, so the number of vehicles can vary transparently at runtime: user-level control code remains agnostic to additions or removals of vehicles.
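The membership-inference step can be illustrated with a small, self-contained Python sketch that simulates matched-reader queries. All class, method, and participant names here are illustrative stand-ins, not part of any DDS API:

```python
class SimulatedDomain:
    """Toy stand-in for a DDS domain: tracks which topics each
    participant publishes on, mimicking built-in discovery."""

    def __init__(self):
        self.publications = {}  # topic name -> set of participant ids

    def announce(self, participant, topic):
        # A joining MLC announces a writer on, e.g., "FusedPose"
        self.publications.setdefault(topic, set()).add(participant)

    def retire(self, participant, topic):
        # A departure is observed as the matched writer disappearing
        self.publications.get(topic, set()).discard(participant)

    def matched_publishers(self, topic):
        # An HLC infers vehicle presence by querying matched writers
        return sorted(self.publications.get(topic, set()))


domain = SimulatedDomain()
domain.announce("vehicle_3", "FusedPose")
domain.announce("vehicle_7", "FusedPose")
print(domain.matched_publishers("FusedPose"))  # two vehicles present
domain.retire("vehicle_3", "FusedPose")
print(domain.matched_publishers("FusedPose"))  # one vehicle remains
```

The HLC-side logic never enumerates vehicles statically; it simply re-queries matched publishers each cycle, which is what makes runtime joins and departures transparent.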
Table: DDS Topic Mappings and Roles
| Topic | Publisher | Subscriber |
|---|---|---|
| FusedPose | MLC (per vehicle) | HLC |
| Trajectory[n] | HLC | MLC (vehicle n) |
| ControlCmd[n] | MLC | LLC (vehicle n) |
| Heartbeat | All nodes | All nodes |
3. Logical Execution Time Enforcement
A periodic global time base underpins LET enforcement. Heartbeat messages exchanged at 100 Hz synchronize local clocks across all nodes. LET semantics are built atop this foundation:
- Logical Window: For each cycle k, the window is [t_k, t_{k+1}), with t_k = k · T and T the LET period (e.g., T = 100 ms).
- Physical-to-Logical Time Mapping: a task that physically completes at any time within [t_k, t_{k+1}) has its outputs timestamped at the logical deadline t_{k+1}; physical completion times are invisible outside the window.
- Scheduling Constraint: For each periodic task i, the worst-case execution time C_i must satisfy C_i ≤ T.
- Deliver-at-Deadline Semantics: Inputs are snapshot at the window start t_k, tasks execute in parallel, outputs are held until t_{k+1}, and are then published atomically.
This distributed scheduling approach induces the illusion of instantaneous computation at the logical deadline. As a result, executions are deterministic, and repeated experimental runs yield bit-identical outputs, a crucial property for reproducibility in distributed cyber-physical systems.
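The physical-to-logical mapping described above reduces to a short helper; a minimal sketch, where the function name and default period are assumptions:

```python
import math

LET_PERIOD = 0.100  # LET period T in seconds (100 ms, as in the CPM Lab)


def logical_deadline(t_phys, t0=0.0, period=LET_PERIOD):
    """Map a physical completion time falling in window [t_k, t_{k+1})
    to the logical deadline t_{k+1} at which its outputs become visible."""
    k = math.floor((t_phys - t0) / period)  # cycle index k
    return t0 + (k + 1) * period            # logical deadline t_{k+1}


# A task finishing 42 ms into cycle 0 has its output published at t = 100 ms:
print(logical_deadline(0.042))
```

Because every output is stamped with t_{k+1} rather than its actual completion time, two runs with identical inputs produce identical logical timelines, which is the source of the bit-identical reproducibility claimed above.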
4. Middleware Driver Algorithms and Configuration
LET-enforced DDS operation on each node follows a uniform pattern: at each logical window start, the middleware snapshots all subscribed inputs; the node's periodic tasks then execute on that frozen snapshot; and the resulting outputs are held and published atomically at the window's deadline.
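This per-node driver pattern can be sketched in simulated time as follows. The real driver blocks on the heartbeat-synchronized clock; the function, task, and topic names here are illustrative:

```python
def let_driver(tasks, period, n_cycles):
    """Simulated LET driver: snapshot inputs at each window start t_k,
    run all tasks on that frozen snapshot, then publish the collected
    outputs atomically at the logical deadline t_{k+1}."""
    state = {"FusedPose": 0}  # last published values, visible to tasks
    log = []
    for k in range(n_cycles):
        t_deadline = (k + 1) * period  # logical deadline t_{k+1}
        inputs = dict(state)           # input snapshot taken at t_k
        outputs = {}
        for task in tasks:             # tasks see only the snapshot
            outputs.update(task(inputs))
        state.update(outputs)          # atomic publish at t_{k+1}
        log.append((round(t_deadline, 3), dict(outputs)))
    return log


# Illustrative periodic task: advance the fused pose each cycle.
step_pose = lambda inp: {"FusedPose": inp["FusedPose"] + 1}
for entry in let_driver([step_pose], period=0.1, n_cycles=3):
    print(entry)
```

Note that tasks never read `state` directly; isolating them behind the per-cycle snapshot is exactly what makes concurrent task execution order-independent and the run deterministic.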
The LET period and DDS parameters are distributed through JSON configuration files and DDS XML QoS profiles.
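A middleware configuration file of this kind might look as follows; all field names and values are illustrative assumptions, not the CPM Lab's actual schema:

```json
{
  "let_period_ms": 100,
  "heartbeat_hz": 100,
  "dds_domain_id": 1,
  "qos_profile": "CpmQosLibrary::ControlStream",
  "topics": {
    "pose": "FusedPose",
    "trajectory": "Trajectory[n]",
    "control": "ControlCmd[n]"
  }
}
```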
5. Empirical Performance and Determinism
Stress-testing the CPM Lab’s DDS+LET middleware with 18 vehicles at 5 ms (using pure platoon controllers) demonstrated:
- Worst-case end-to-end latency (HLC→LLC→HLC): 92 ms
- One-sigma jitter on 100 ms tick: 1.3 ms across all nodes
- No dropped windows across sustained 30-minute operation
These observations confirm that the architecture reliably delivers hard-real-time, deterministic performance and supports scalable distributed experimentation (Kloock et al., 2020).
6. Significance and Research Context
The CPM Lab platform operationalizes a Giotto-style LET runtime alongside DDS’s decentralized publish–subscribe middleware. This fusion supports:
- Arbitrary scaling of real/simulated vehicles without requiring code changes,
- Deterministic data exchanges of control and sensor topics,
- Infrastructure-independence between simulated and physical deployments.
A plausible implication is that this methodology generalizes to other distributed cyber-physical systems requiring deterministic, real-time interactions with dynamic membership. The CPM Lab provides an open research environment for exploring multi-agent control, trajectory planning, and networked decision-making under controlled, reproducible conditions (Kloock et al., 2020).