
Hybrid STQGCN: Quantum Urban Forecasting

Updated 18 December 2025
  • The paper presents a novel hybrid framework integrating quantum circuits with classical graph and temporal convolutions to enhance prediction accuracy.
  • It employs a dual-branch architecture where the spatial branch fuses classical GCN and quantum circuits for high-dimensional feature extraction, and the temporal branch models sequential data.
  • Experimental results indicate H-STQGCN outperforms baselines in urban taxi destination forecasting, reducing Euclidean distance and RMSE errors on the Porto and Manhattan datasets and remaining competitive on San Francisco.

A Hybrid Spatio-Temporal Quantum Graph Convolutional Network (H-STQGCN) is an algorithmic framework that integrates quantum computing with classical deep learning for spatio-temporal prediction on graph-structured data, exemplified by the task of urban taxi destination forecasting. H-STQGCN consists of two architecturally distinct branches: a spatial branch leveraging both classical graph convolutional networks (GCN) and parameterized quantum circuits (quantum graph convolutional networks, QGCN), and a temporal branch comprising a classical temporal convolutional network (TCN). The approach is designed to extract high-dimensional spatial dependencies that are difficult to capture with purely classical models, facilitating robust and accurate predictions in complex urban networks (Zhang et al., 15 Dec 2025).

1. Hybrid Architecture Overview

H-STQGCN employs a dual-branch design. The spatial branch extracts node-level and global topological information by fusing classical GCN layers with quantum circuits, using a differentiable pooling strategy to merge large graphs into qubit-accessible subgraphs. The temporal branch models sequential and contextual dependencies via a dilated causal TCN, incorporating auxiliary metadata and point-of-interest (POI) features. Both branches converge through a fusion process yielding prediction logits over the urban grid, from which final outputs are computed as weighted averages of grid-center coordinates.

2. Spatial Branch: Classical and Quantum Modules

The spatial branch initiates with a classical GCN designed for local topology encoding. For a city road network represented as $G=(V,E)$, with $N$ grid cells and input features $X^{(0)} \in \mathbb{R}^{N \times D_{\mathrm{in}}}$, the normalized adjacency $\hat A$ is computed after self-loop augmentation. Each layer updates node embeddings via

$$H^{(\ell+1)} = \sigma\bigl(\hat A\,H^{(\ell)}\,W^{(\ell)}\bigr), \quad H^{(0)} = X^{(0)}$$

with ReLU activation and residual stabilization. Downsampling within the residual path facilitates stable propagation across $L_{\mathrm{GCN}}$ layers, yielding $X_{\mathrm{GCN}}$.
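As an illustrative sketch of the layer update above (not the authors' code; graph, feature sizes, and weights are toy values), a single normalized-adjacency propagation step can be written as:

```python
import numpy as np

def normalize_adjacency(A):
    """Compute A_hat = D^{-1/2} (A + I) D^{-1/2} after self-loop augmentation."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One propagation step H <- ReLU(A_hat @ H @ W)."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy 3-node path graph with 2-dimensional input features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H0 = np.random.randn(3, 2)
W0 = np.random.randn(2, 4)
H1 = gcn_layer(normalize_adjacency(A), H0, W0)
print(H1.shape)  # (3, 4)
```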

To restrict quantum circuit dimensionality to the number of available NISQ qubits, differentiable graph pooling learns a node-to-qubit assignment $S \in \mathbb{R}^{N \times N_q}$ by softmaxing a GCN-parameterized scoring matrix. The graph is pooled to $(X_{\mathrm{p}}, A_{\mathrm{p}})$, which serves as input to the QGCN.
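The assignment step can be sketched as follows. The DiffPool-style coarsening rules $X_{\mathrm{p}} = S^\top X$ and $A_{\mathrm{p}} = S^\top A S$, as well as all dimensions, are assumptions for illustration, not details confirmed by the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pool_graph(scores, X, A):
    """Pool N nodes into N_q qubit-accessible clusters via a soft assignment."""
    S = softmax(scores, axis=1)   # each node distributes mass over N_q clusters
    X_p = S.T @ X                 # pooled features: (N_q, D)
    A_p = S.T @ A @ S             # pooled adjacency: (N_q, N_q)
    return X_p, A_p

N, D, Nq = 100, 8, 4              # 100 grid cells compressed to 4 qubits (toy)
scores = np.random.randn(N, Nq)   # stands in for the GCN-parameterized scores
X = np.random.randn(N, D)
A = (np.random.rand(N, N) < 0.05).astype(float)
X_p, A_p = pool_graph(scores, X, A)
print(X_p.shape, A_p.shape)  # (4, 8) (4, 4)
```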

The QGCN encodes classical features onto $N_q$ qubits using data re-uploading with $R_Y(x_i), R_Z(x_i)$ gates. The parameterized quantum circuit alternates between rotation layers
$$U_{\mathrm{rot}}^{(\ell)}(\Theta^{(\ell)}) = \bigotimes_{i=1}^{N_q} R_X(\theta_{i,1}^{(\ell)})\, R_Y(\theta_{i,2}^{(\ell)})\, R_Z(\theta_{i,3}^{(\ell)})$$
and adjacency-guided entanglement layers realized by controlled-$R_Y$ gates weighted according to $A_{\mathrm{p}}$:
$$U_{\mathrm{ent}}^{(\ell)}(\Phi^{(\ell)}, A_{\mathrm{p}}) = \prod_{i<j} \mathrm{CRY}\bigl(\phi_{ij}^{(\ell)} \cdot A_{\mathrm{p}}(i,j)\bigr)$$
After parameterized quantum evolution, the expectation values $\langle Z_i \rangle$ of each qubit yield $\mathbf z$, which is fused with $X_{\mathrm{p}}$ using a residual feedforward mapping.
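A toy statevector simulation, standing in for the PennyLane circuits the paper uses, illustrates the circuit structure: encoding, one rotation layer, one adjacency-weighted CRY entangler, and $\langle Z_i \rangle$ readout. Here $N_q = 2$ and all angles are invented:

```python
import numpy as np

def RX(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def RY(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def RZ(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def apply_1q(state, gate, qubit):
    """Apply a single-qubit gate on the given qubit of a 2-qubit state."""
    ops = [gate if q == qubit else np.eye(2) for q in range(2)]
    return np.kron(ops[0], ops[1]) @ state

def apply_cry(state, phi):
    """CRY with qubit 0 (most significant) as control, qubit 1 as target."""
    U = np.eye(4, dtype=complex)
    U[2:, 2:] = RY(phi)
    return U @ state

def expval_z(state, qubit):
    Z = np.diag([1.0, -1.0])
    ops = [Z if q == qubit else np.eye(2) for q in range(2)]
    return (state.conj() @ np.kron(ops[0], ops[1]) @ state).real

x = np.array([0.3, -0.7])                  # pooled features for N_q = 2 qubits
theta = 0.1 * np.arange(6).reshape(2, 3)   # rotation angles (toy values)
A_p = np.array([[0.0, 0.8], [0.8, 0.0]])   # pooled adjacency weight
phi_01 = 0.5                               # trainable entangler angle

state = np.zeros(4, dtype=complex); state[0] = 1.0          # |00>
for i in range(2):                                           # encoding layer
    state = apply_1q(state, RZ(x[i]) @ RY(x[i]), i)
for i in range(2):                                           # rotation layer
    state = apply_1q(state, RZ(theta[i, 2]) @ RY(theta[i, 1]) @ RX(theta[i, 0]), i)
state = apply_cry(state, phi_01 * A_p[0, 1])                 # entanglement layer
z = np.array([expval_z(state, i) for i in range(2)])         # readout vector z
print(z)
```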

Quantum pooling aggregates node-level features by mean, re-encodes them onto qubits, and processes them with a deep, entangled quantum ansatz involving $R_Z$, $R_Y$, and CNOT gates. Measurement and postprocessing deliver the global spatial feature $V_{\mathrm{global}}$.

3. Temporal Branch and Contextual Fusion

The temporal branch processes multi-source contextual information, including grid embeddings, POI distribution (bag-of-categories, BOC), taxi identifiers, temporal encodings (hour, weekday, day type), and the global spatial representation $V_{\mathrm{global}}$. Each context is embedded and concatenated into a sequence $F_{\mathrm{seq}} \in \mathbb{R}^{P \times L}$ with $L = 4$. Residual, dilated 1D convolutional blocks with exponentially increasing dilation rates $(d_\ell = 2^{\ell-1})$ form the TCN, with each block defined by

$$H^{(\ell)} = \mathrm{ReLU}\bigl(\mathrm{Conv1D}(H^{(\ell-1)}; w_1, d_\ell)\bigr) \longrightarrow \mathrm{Conv1D}(\,\cdot\,; w_2, d_\ell)$$

supplemented by dropout and residual connections. The output $V_{\mathrm{seq}}$ is the last-step hidden vector of the final block.
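One residual dilated causal block might be sketched as follows; the shapes and the left-padding scheme for causality are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def causal_conv1d(x, w, dilation):
    """x: (L, C_in); w: (k, C_in, C_out). Left-pad by (k-1)*dilation so
    output[t] only depends on x[t], x[t-d], ... (causal), length preserved."""
    k = w.shape[0]
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        for j in range(k):  # tap j reaches j*dilation steps into the past
            out[t] += xp[t + pad - j * dilation] @ w[k - 1 - j]
    return out

def tcn_block(x, w1, w2, dilation):
    h = np.maximum(causal_conv1d(x, w1, dilation), 0.0)  # Conv1D -> ReLU
    h = causal_conv1d(h, w2, dilation)                   # second Conv1D
    return h + x                                         # residual connection

L_seq, C, k = 4, 8, 2          # 4-step input sequence, toy channel count
x = np.random.randn(L_seq, C)
w1 = 0.1 * np.random.randn(k, C, C)
w2 = 0.1 * np.random.randn(k, C, C)
v_seq = tcn_block(x, w1, w2, dilation=2)[-1]  # last-step hidden vector
print(v_seq.shape)  # (8,)
```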

Fusion is achieved by projecting $V_{\mathrm{seq}}$ through a linear layer, obtaining logits over all grid cells:
$$\mathbf z = W_f V_{\mathrm{seq}} + b_f, \quad p_i = \frac{e^{z_i}}{\sum_j e^{z_j}}$$
The predicted destination $\hat Y$ is computed as a probability-weighted sum of grid centers:
$$\hat Y = \sum_{i=1}^{N_{\mathrm{grid}}} p_i\, C_i$$
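The fusion head thus reduces to a linear projection, a softmax, and a convex combination of grid-center coordinates; a minimal sketch with invented dimensions:

```python
import numpy as np

def predict_destination(v_seq, W_f, b_f, centers):
    """Project to grid logits, softmax, and average grid centers by probability."""
    z = W_f @ v_seq + b_f                   # logits over N_grid cells
    p = np.exp(z - z.max()); p /= p.sum()   # numerically stable softmax
    return p @ centers                      # (2,) coordinate prediction

N_grid, D = 6, 8                            # toy grid and feature sizes
v_seq = np.random.randn(D)
W_f = 0.1 * np.random.randn(N_grid, D)
b_f = np.zeros(N_grid)
centers = np.random.rand(N_grid, 2)         # grid-center coordinates
y_hat = predict_destination(v_seq, W_f, b_f, centers)
print(y_hat.shape)  # (2,)
```

Because the prediction is a convex combination of the centers, it always lies within their bounding box, which keeps outputs geographically plausible.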

4. Training Protocol and Optimization

The end-to-end network is trained with cross-entropy loss on discrete grid labels:
$$\mathcal{L} = -\sum_{i=1}^{N_{\mathrm{grid}}} y_i \ln p_i$$
The Adam optimizer is used with a learning rate of $1 \times 10^{-5}$ and batch size 64. Gradients flow through both classical and quantum modules, using the parameter-shift rule for quantum circuit differentiation. Early stopping is performed by monitoring validation loss on 15% of the data (stratified by taxi ID), and evaluation is on a 20% held-out test split. Quantum circuits are simulated with the PyTorch and PennyLane frameworks on classical hardware.
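The parameter-shift rule can be checked on a one-qubit toy case: for a gate generated by a Pauli rotation, $\partial_\theta f = \tfrac{1}{2}[f(\theta + \pi/2) - f(\theta - \pi/2)]$. For $R_Y(\theta)|0\rangle$ the expectation is $\langle Z \rangle = \cos\theta$, so the shifted evaluations recover the exact derivative $-\sin\theta$:

```python
import numpy as np

def expval(theta):
    """<Z> for the state RY(theta)|0>, known analytically to be cos(theta)."""
    return np.cos(theta)

def param_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift gradient: two circuit evaluations, no finite-difference bias."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.4
g = param_shift_grad(expval, theta)
print(np.isclose(g, -np.sin(theta)))  # True: matches the analytic derivative
```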

5. Experimental Evaluation and Results

H-STQGCN was evaluated on Porto (1.7M trips, 442 taxis, 115 m grid), San Francisco (464k trips, 536 taxis, 570 m grid), and Manhattan (647k trips, 600 taxis, 218 m grid) datasets. Preprocessing included geographic filtering, four-step input sequences, POI-BOC extraction, and standard data splits. Baselines comprised ARIMA, simple neural networks, MLP-SEQ, LSTM, LSTM(BOC), QLSTM, and ST-GCN. The performance metrics were Euclidean Distance Score (EDS) and Root Mean Square Error (RMSE) in kilometers.

Table: Test Prediction Errors (km)

Algorithm   Porto EDS  Porto RMSE  SF EDS  SF RMSE  Manhattan EDS  Manhattan RMSE
ARIMA       2.3885     2.7815      2.5356  3.0240   2.8684         3.2647
NN          2.3829     2.8120      2.4186  2.9362   2.7940         3.8976
MLP-SEQ     2.2922     2.6945      2.3455  2.8074   3.6315         4.0646
LSTM        2.2700     2.6991      2.4156  2.9933   3.2498         3.8777
LSTM(BOC)   2.1813     2.6178      2.2969  2.6990   2.7178         3.2266
QLSTM       2.1140     2.4910      2.2213  2.6488   2.8629         3.2266
ST-GCN      2.0600     2.3902      2.1414  2.5504   2.9163         3.6777
H-STQGCN    2.0423     2.3134      2.1573  2.5915   2.6282         3.1608

Ablation studies demonstrated the contribution of both the QGCN and BOC features. H-STQGCN achieved the lowest EDS and RMSE on the Porto and Manhattan datasets, while on San Francisco its errors were close to those of ST-GCN, the strongest baseline there. Loss curves indicated rapid convergence within 10 epochs without signs of overfitting. On Manhattan, an EDS reduction of 9.9% relative to ST-GCN was observed.

6. Quantum-Enhanced Mechanism and Analysis

The QGCN component facilitates modeling of higher-order, non-local spatial dependencies by mapping pooled node features into an exponentially large Hilbert space, surpassing the representational capacity of classical GCNs. Differentiable pooling ensures that a limited number of qubits suffice for tractable processing of large graphs, compatible with the constraints of NISQ-era hardware. Parameter-shift gradient estimation enables seamless integration of quantum and classical parameter updates. However, there exists a trade-off between quantum circuit expressivity and hardware noise tolerance, dictated by qubit count and circuit depth. At present, the temporal branch remains classical due to quantum resource constraints; full quantization is not yet viable (Zhang et al., 15 Dec 2025). Scalability to larger graphs and deployment on real quantum hardware remain open research directions.

7. Limitations and Outlook

Current NISQ-era limitations necessitate careful architectural choices: shallow circuits, small qubit counts, and hybrid simulation. While H-STQGCN demonstrates performance gains in simulated settings, real-world deployment depends on advances in quantum hardware, error mitigation strategies, and more efficient quantum ansätze. The framework's potential generalizes to other spatio-temporal graph modeling tasks, subject to further empirical validation. Future research may consider dynamic pooling strategies and full-quantum temporal modeling, as well as hardware-tailored circuit designs.
