Multi-Commodity 1–1 PDSTSP: Deep Learning & Metaheuristics
- The paper introduces a hybrid method combining Transformer neural network policies with multi-start LNS for revenue maximization in the m1-PDSTSP.
- It details a rigorous mathematical formulation and benchmarks showing sub-second inference and reduced optimality gaps in dynamic freight routing.
- The approach generalizes selective TSP and PDP variants, enabling efficient and adaptable routing in high-frequency online freight exchange systems.
The multi-commodity one-to-one pickup-and-delivery selective traveling salesperson problem (m1-PDSTSP) is a combinatorial optimization problem central to online freight exchange systems, where the aim is real-time, revenue-maximizing bundling of multi-commodity transportation requests. The problem requires determining a route for a single vehicle subject to resource and precedence constraints, selectively pairing a subset of pickup and delivery nodes to maximize total revenue. The m1-PDSTSP generalizes a range of selective TSP and pickup-and-delivery problem (PDP) variants and represents a challenging instance of constrained vehicle routing under stringent computational latency requirements (Zhang et al., 12 Dec 2025).
1. Mathematical Formulation
The m1-PDSTSP is defined on a complete undirected graph over the node set $\{0, 1, \dots, 2n+1\}$, with the following elements:
- Pickup nodes $P = \{1, \dots, n\}$ and delivery nodes $D = \{n+1, \dots, 2n\}$; each delivery node $n+i$ is paired with pickup $i$.
- Depots: start ($0$) and end ($2n+1$).
- Requests $i \in P$ with demand $q_i$ and revenue $r_i$.
- Vehicle capacity $Q$ and maximum route-length $L$.
- Travel costs $c_{ij}$ obey the triangle inequality.
Decision variables:
- $x_{ij} \in \{0,1\}$: 1 if arc $(i,j)$ is used.
- $d_i \ge 0$: cumulative distance upon arrival at node $i$.
- $\ell_i \ge 0$: load departing node $i$.
Objective: Maximize total revenue from served requests, $\max \sum_{i \in P} r_i \sum_{j=0}^{2n+1} x_{ij}$.
Constraints:
- No self-loops: $x_{ii} = 0$ for all $i$.
- Single departure/arrival: $\sum_{j} x_{0j} = 1$ and $\sum_{i} x_{i,2n+1} = 1$.
- Flow conservation: $\sum_{j} x_{ji} = \sum_{j} x_{ij}$ for all $i \in P \cup D$.
- At most one visit per node: $\sum_{j} x_{ij} \le 1$ and $\sum_{j} x_{ji} \le 1$ for all $i \in P \cup D$.
- Pairing (Selectivity): $\sum_{j} x_{ij} = \sum_{j} x_{n+i,j}$ for all $i \in P$ (a pickup is served iff its paired delivery is served).
- Precedence: If $\sum_{j} x_{ij} = 1$ (request $i$ is served), then $d_i \le d_{n+i}$.
- Route-length: $d_0 = 0$, $d_i \le L$ for all $i$, and if $x_{ij} = 1$ then $d_j \ge d_i + c_{ij}$.
- Capacity: $\ell_0 = 0$, $\ell_i \le Q$ for all $i$, and if $x_{ij} = 1$ then $\ell_j = \ell_i + q_j$ (with $q_{n+i} = -q_i$ at delivery nodes).
This formulation enforces feasible vehicle tours that select and pair pickup–delivery requests subject to stringent resource, route, and precedence constraints for maximal realized revenue.
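For concreteness, a minimal sketch of such a model in gurobipy is given below (Gurobi is the exact baseline reported in the paper, but the variable names, the request-indexing convention, and the big-M linearizations here are illustrative assumptions, not the authors' exact model):

```python
import gurobipy as gp
from gurobipy import GRB

def build_m1_pdstsp(n, c, q, r, Q, L):
    """Illustrative m1-PDSTSP model.  n: number of requests; c[i][j]: travel cost;
    q[i]: demand of request i (i = 1..n); r[i]: its revenue; Q: capacity; L: max route length."""
    V = range(2 * n + 2)                                   # 0 = start depot, 2n+1 = end depot
    P, D = range(1, n + 1), range(n + 1, 2 * n + 1)        # pickups, deliveries
    m = gp.Model("m1-PDSTSP")

    x = m.addVars(V, V, vtype=GRB.BINARY, name="x")        # arc (i, j) used
    d = m.addVars(V, lb=0.0, ub=L, name="d")               # cumulative distance at node i
    l = m.addVars(V, lb=0.0, ub=Q, name="l")               # load when departing node i

    # Revenue of served requests: pickup i is served iff it has an outgoing arc.
    m.setObjective(gp.quicksum(r[i] * x.sum(i, "*") for i in P), GRB.MAXIMIZE)

    m.addConstrs((x[i, i] == 0 for i in V), "no_self_loops")
    m.addConstrs((x[i, 0] + x[2 * n + 1, i] == 0 for i in V), "depot_arcs")
    m.addConstr(x.sum(0, "*") == 1, "leave_start_once")
    m.addConstr(x.sum("*", 2 * n + 1) == 1, "enter_end_once")
    m.addConstrs((x.sum("*", i) == x.sum(i, "*") for i in list(P) + list(D)), "flow")
    m.addConstrs((x.sum(i, "*") <= 1 for i in list(P) + list(D)), "at_most_one_visit")
    m.addConstrs((x.sum(i, "*") == x.sum(n + i, "*") for i in P), "pairing")
    m.addConstrs((d[n + i] >= d[i] for i in P), "precedence")   # pickup before delivery

    # Distance propagation along used arcs (also eliminates subtours); d <= L via bounds.
    bigM = L + max(max(row) for row in c)
    m.addConstrs((d[j] >= d[i] + c[i][j] - bigM * (1 - x[i, j])
                  for i in V for j in V if i != j), "distance")

    # Load propagation: +q_i at pickups, -q_i at deliveries; l <= Q via bounds.
    delta = {j: (q[j] if j in P else -q[j - n] if j in D else 0.0) for j in V}
    m.addConstrs((l[j] >= l[i] + delta[j] - 2 * Q * (1 - x[i, j])
                  for i in V for j in V if i != j), "load_lower")
    m.addConstrs((l[j] <= l[i] + delta[j] + 2 * Q * (1 - x[i, j])
                  for i in V for j in V if i != j), "load_upper")
    m.addConstr(d[0] == 0, "start_distance")
    m.addConstr(l[0] == 0, "start_load")
    return m, x
```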
2. Hybrid Algorithmic Pipeline: Deep Learning and Metaheuristics
To address the computational and combinatorial complexity, a hybrid pipeline couples a Transformer Neural Network-based constructive policy with a Multi-Start Large Neighborhood Search (MSLNS) metaheuristic executed within a rolling-horizon framework. This pipeline is architected for sub-second inference on market snapshots, a common requirement in online freight exchanges (Zhang et al., 12 Dec 2025).
A. Transformer-Based Constructive Policy
- Encoder: Node-wise inputs (geospatial coordinates, normalized demand, normalized revenue, and depot-type features) are linearly projected and processed by a stack of multi-head self-attention layers with batch normalization.
- Decoder (Auto-regressive): The decoder conditions on the current partial route, the vehicle state (remaining capacity and remaining route-length), and a context vector, applying masked attention to produce logits over valid next-node selections.
- Feasibility Masking: Dynamically excludes visited nodes, deliveries whose pickups have not yet been served, and pickups that would violate capacity or route-length bounds (see the sketch after this list).
- Training: Uses POMO (Policy Optimization with Multiple Optima), employing multi-start policy-gradient REINFORCE with multiple distinct starting pickups and a shared baseline, Adam optimization, and a penalty for exceeding the route length. No teacher labels are required.
- Inference: A single greedy rollout generates feasible solutions in milliseconds for problem sizes up to 122.
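The following sketch illustrates the feasibility-masking and greedy-rollout logic in plain NumPy; the state fields, cost-matrix layout, and the `logits_fn` stand-in for the Transformer decoder are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def feasibility_mask(state, c, q, n, Q, L):
    """Mask over nodes 0..2n+1 (True = feasible next visit).
    state: {'pos': current node, 'visited': bool array, 'load': float, 'dist': float};
    c: (2n+2)x(2n+2) cost matrix; q[1..n]: request demands."""
    mask = ~state["visited"]
    mask[0] = False                                          # never revisit the start depot
    for j in range(1, 2 * n + 1):
        if not mask[j]:
            continue
        if j <= n:                                           # pickup: capacity check
            if state["load"] + q[j] > Q:
                mask[j] = False
        elif not state["visited"][j - n]:                    # delivery: pickup must precede it
            mask[j] = False
        # route-length check: reach j and still be able to reach the end depot
        if mask[j] and state["dist"] + c[state["pos"], j] + c[j, 2 * n + 1] > L:
            mask[j] = False
    # the end depot is admissible only once all onboard requests are delivered;
    # a full implementation would also verify, when masking pickups, that the
    # paired deliveries remain reachable within L (omitted in this sketch)
    mask[2 * n + 1] = state["load"] == 0
    return mask

def greedy_rollout(logits_fn, state, c, q, n, Q, L):
    """Greedy decoding: repeatedly pick the highest-scoring feasible node.
    logits_fn(state) stands in for the Transformer decoder's per-node logits."""
    route = [state["pos"]]
    while route[-1] != 2 * n + 1:
        mask = feasibility_mask(state, c, q, n, Q, L)
        scores = np.where(mask, logits_fn(state), -np.inf)
        j = int(np.argmax(scores))
        state["dist"] += c[state["pos"], j]
        state["load"] += q[j] if 1 <= j <= n else (-q[j - n] if j <= 2 * n else 0.0)
        state["visited"][j] = True
        state["pos"] = j
        route.append(j)
    return route
```

A rollout would start from `state = {"pos": 0, "visited": visited0, "load": 0.0, "dist": 0.0}` with only the start depot marked visited; POMO's multi-start variant simply launches one such rollout per candidate first pickup.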
B. Multi-Start Large Neighborhood Search (MSLNS)
- Initialization: A pool of diverse seed routes is generated from Transformer-based POMO rollouts.
- Destroy Operator: Tracks request frequencies within a fixed-size beam and performs softmax-biased sampling of a progressively growing number of requests for destruction, avoiding repeated selections via a memory of past removals (a compact sketch of the full loop follows this list).
- Repair Operator: Greedy insertion reconstructs solutions, reinserting pickup–delivery pairs at their best feasible positions while maintaining capacity, route-length, and precedence feasibility.
- Local Improvement: Applies 2-Opt for further refinement.
- Beam Update: Combines prior and new candidates, deduplicates by served-request set, and retains the top-revenue solutions up to the beam size.
- Termination: Iterates until the time budget is exhausted, returning the route with maximal revenue.
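A compact sketch of such an MSLNS loop is shown below; the solution interface (`sol.requests`), the operator callables (`destroy`, `repair`, `two_opt`, `revenue`), and the exponential frequency weighting are illustrative assumptions rather than the authors' exact operators.

```python
import math
import random
import time

def mslns(seeds, destroy, repair, two_opt, revenue, beam_size, time_budget):
    """Multi-start LNS sketch.  A solution is any object for which the assumed
    callables work: destroy(sol, reqs) removes requests, repair(sol) greedily
    re-inserts feasible pickup-delivery pairs, two_opt(sol) refines the tour,
    revenue(sol) scores it, and sol.requests is the set of served request ids."""
    beam = sorted(seeds, key=revenue, reverse=True)[:beam_size]
    tried = set()                                   # memory of attempted destructions
    k = 1                                           # requests destroyed per move
    deadline = time.time() + time_budget
    while time.time() < deadline:
        # frequency of each request across the beam: rarely kept requests are 'unstable'
        freq = {}
        for sol in beam:
            for req in sol.requests:
                freq[req] = freq.get(req, 0) + 1
        candidates = list(beam)
        for sol in beam:
            reqs = sorted(sol.requests)
            if not reqs:
                continue
            weights = [math.exp(-freq[req]) for req in reqs]   # softmax-style bias
            removed = frozenset(random.choices(reqs, weights=weights, k=min(k, len(reqs))))
            if (id(sol), removed) in tried:                    # avoid repeating a destruction
                continue
            tried.add((id(sol), removed))
            candidates.append(two_opt(repair(destroy(sol, removed))))   # destroy-repair-refine
        # deduplicate by served-request set and keep the best beam_size solutions
        best_by_set = {}
        for sol in candidates:
            key = frozenset(sol.requests)
            if key not in best_by_set or revenue(sol) > revenue(best_by_set[key]):
                best_by_set[key] = sol
        beam = sorted(best_by_set.values(), key=revenue, reverse=True)[:beam_size]
        k += 1                                                 # progressively grow destruction size
    return max(beam, key=revenue)
```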
This hybrid approach exploits high-quality, learning-derived seeds to position search close to attraction basins in the solution space, reducing the LNS neighborhood size required for effective improvement.
3. Empirical Performance and Benchmarking
Empirical evaluation spans a range of benchmark problem sizes; each instance features randomized vehicle capacities and route-length limits scaled to depot-to-depot baselines. Revenue structures encompass Distance, Ton-Distance, Uniform, and Constant regimes.
Baselines:
- Heuristic: Greedy Search (GS), Multi-Start Greedy (MSG), Hill-Climbing (HC), 1-destroy Best Improvement LNS (BI-LNS), Adaptive LNS (ALNS), Simulated Annealing (SA).
- Neural: Attention Model (AM), POMO (single, multi-start, beam search, SGBS).
- Hybrid: AM+HC, AM+BNS, POMO+HC, POMO+BNS, POMO+MSLNS.
- Exact: Gurobi (small only).
Metrics:
- Average total revenue (higher is better).
- Optimality gap: relative deviation from the best-known revenue, $(R_{\text{best}} - R)/R_{\text{best}}$ (lower is better).
- Winning rate: Proportion of instances achieving best-known solution.
- Runtime: Sub-second for constructor, seconds–minutes for full pipeline.
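Under these definitions, the metrics can be computed from per-instance results roughly as follows (array names and the use of best-known revenues as the reference are assumptions for illustration):

```python
import numpy as np

def evaluate(revenues, best_known):
    """revenues: revenue achieved by one method on each instance;
    best_known: best-known revenue for the same instances."""
    revenues = np.asarray(revenues, dtype=float)
    best_known = np.asarray(best_known, dtype=float)
    avg_revenue = revenues.mean()                               # higher is better
    gap = np.mean((best_known - revenues) / best_known)         # optimality gap, lower is better
    winning_rate = np.mean(np.isclose(revenues, best_known))    # share of instances matching best-known
    return avg_revenue, gap, winning_rate
```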
Key Observations:
- POMO greedy alone outperforms classical heuristics on both solution quality and running time.
- Augmenting AM/POMO with HC or BI-LNS significantly reduces optimality gaps by 5–10%.
- POMO+MSLNS achieves the smallest optimality gaps and highest winning rates among the compared methods across the tested problem sizes within the time budget.
- Extended POMO+MSLNS (unlimited runtime) further narrows the optimality gap and raises the winning rate on small instances to a level comparable with Gurobi.
- Under strict sub-second constraints, AM+HC and POMO single-start provide the fastest high-quality solutions.
- These performance trends persist across diverse revenue settings, with hybrid methods consistently achieving superior results (Zhang et al., 12 Dec 2025).
4. Rolling-Horizon Market Integration
The rolling-horizon framework accommodates dynamic market operation in online freight exchanges:
- The marketplace state is “frozen” at intervals to produce static snapshots.
- Each snapshot triggers m1-PDSTSP resolution within a sub-second computational budget.
- Bundles are dispatched to carriers immediately post-solution.
- The POMO+MSLNS pipeline is deployed independently on these rolling snapshots, ensuring low-latency, robust incremental optimization.
A plausible implication is that such a design supports near-continuous reoptimization in high-frequency environments without excessive computational overhead.
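A schematic of such a rolling-horizon loop is sketched below; the `marketplace`, `solve_m1_pdstsp`, and `dispatch` interfaces are assumptions for illustration, not names from the paper.

```python
import time

def rolling_horizon(marketplace, solve_m1_pdstsp, dispatch, interval_s=1.0, budget_s=0.5):
    """Freeze the market at fixed intervals, solve each static snapshot within
    a sub-second budget, and dispatch the resulting bundle immediately."""
    while True:
        tick = time.time()
        snapshot = marketplace.snapshot()            # frozen static market state
        route = solve_m1_pdstsp(snapshot, budget_s)  # POMO seeds + MSLNS within the budget
        dispatch(route)                              # bundle sent to carriers right away
        time.sleep(max(0.0, interval_s - (time.time() - tick)))
```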
5. Generalizations and Broader Applicability
The m1-PDSTSP generalizes a wide range of selective TSP and pickup-and-delivery variants, as summarized in the original variant taxonomy (reference therein). Its formulation and solution methodology are extensible:
- The Transformer-based constructive policy and multi-start LNS schema are adaptable to selective routing problems incorporating capacity, precedence, and even time windows.
- The key insight is that learned seeds from a deep neural network constructor concentrate search in high-value basins, permitting smaller LNS neighborhoods.
- This approach demonstrates, for the first time, that deep neural network-generated solutions reliably provide effective starting points for improvement metaheuristics across selective pickup-and-delivery problems.
Potential extensions include explicit modeling of time windows, multi-vehicle routing, dynamic reoptimization, and distributed LNS techniques for scaling to very large market snapshots (Zhang et al., 12 Dec 2025).