Gaussian Process Discrete Hawkes Process (GP-DHP)

Updated 29 September 2025
  • GP-DHP is a nonparametric extension of discrete Hawkes processes that employs independent Gaussian process priors to model both the baseline and self-excitation, capturing complex trends and bursts.
  • The model uses a collapsed latent variable representation with FFT-based convolution to achieve near-linear-time inference and efficient uncertainty quantification.
  • Closed-form RKHS projections enable clear separation of baseline and excitation components, providing interpretable diagnostics such as the branching ratio for stability assessment.

The Gaussian Process Discrete Hawkes Process (GP-DHP) is a nonparametric, data-adaptive extension of the discrete-time Hawkes model that places independent Gaussian process (GP) priors on both the baseline and excitation components governing count-valued event series. By jointly modeling the background (exogenous) intensity and the self-excitation kernel with GPs, GP-DHP achieves flexible and interpretable representation of trends, seasonal variation, bursts, and complex self-excitation patterns, all while enabling efficient near-linear-time inference and principled uncertainty quantification (Brisley et al., 26 Sep 2025).

1. Motivation and Model Structure

Classical discrete-time Hawkes models, typically used for count time series $\{N(t)\}_{t=1}^{T}$ on a regular lattice, represent the event intensity as a sum of a fixed baseline and a parametric excitation term, most commonly geometric or negative-binomial:

$$\lambda(t) = b(t) + \sum_{d=1}^{t-1} N(t-d)\, f(d)$$

where $b(t)$ is a pre-specified baseline (constant, linear, or seasonal) and $f(d)$ is a fixed-form excitation kernel. This rigid parameterization limits the capacity to represent complex dynamics such as evolving baselines, multimodal or delayed excitation, and slow decay or long memory.
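As a concrete illustration, the intensity recursion can be evaluated directly. The following minimal NumPy sketch uses an illustrative geometric kernel; the function names and parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

def hawkes_intensity(counts, baseline, kernel):
    """lambda(t) = b(t) + sum_{d=1}^{t-1} N(t-d) f(d), evaluated naively in O(T^2)."""
    T = len(counts)
    lam = np.asarray(baseline, dtype=float).copy()
    for t in range(T):
        for d in range(1, t + 1):   # 0-based indexing: lags d = 1..t reach back to index 0
            lam[t] += counts[t - d] * kernel(d)
    return lam

counts = np.array([2, 0, 1, 3, 0, 1])
geometric = lambda d, alpha=0.4, rho=0.5: alpha * (1 - rho) * rho ** (d - 1)
lam = hawkes_intensity(counts, baseline=np.full(len(counts), 0.5), kernel=geometric)
```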

GP-DHP removes this bottleneck by imposing independent GP priors on $b(t)$ and $f(d)$, yielding

$$l(t) = b(t) + \sum_{d=1}^{t-1} N(t-d)\, f(d), \qquad N(t) \sim \mathrm{Poisson}(\lambda(t)), \qquad \lambda(t) = \max\{0,\, l(t)\}.$$

The approach is tailored for situations where events are recorded as discrete counts (e.g., public health datasets, terrorism incidents, or surveillance counts) and parametric baselines/kernels are insufficient to capture latent structure.
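To make the generative mechanism concrete, here is a minimal simulation sketch. It assumes $b$ and $f$ are already-drawn function values (e.g., samples from the GP priors of Section 2); all names are illustrative:

```python
import numpy as np

def simulate_gp_dhp(b, f, seed=None):
    """Draw counts sequentially from N(t) ~ Poisson(max(0, l(t))) with
    l(t) = b(t) + sum_{d=1}^{t-1} N(t-d) f(d)."""
    rng = np.random.default_rng(seed)
    T = len(b)
    N = np.zeros(T, dtype=int)
    for t in range(T):
        l_t = b[t] + sum(N[t - d] * f[d - 1] for d in range(1, t + 1))
        N[t] = rng.poisson(max(0.0, l_t))   # rectification, then Poisson emission
    return N

# Illustrative run: constant baseline, geometrically decaying excitation draw.
N = simulate_gp_dhp(b=np.full(100, 1.0), f=0.3 * 0.6 ** np.arange(99))
```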

2. Prior Specification for Baseline and Excitation

Both $b(t)$ and $f(d)$ are assigned GP priors with structured covariance kernels reflecting typical properties of real-world event series:

Baseline GP:

$$K_b(t, t') = \sigma^2_{\mathrm{per}} \exp\left[-\frac{2\sin^2\big(\pi(t - t')/P\big)}{\ell^2_{\mathrm{per}}}\right] + \sigma^2_{\mathrm{lin}}\, t t' + \epsilon_b^2\, \delta_{tt'}$$

  • The periodic kernel enables recovery of seasonalities with period $P$.
  • The linear term handles trends.
  • $\epsilon_b^2 \delta_{tt'}$ is a diagonal nugget for numerical stability and noise accommodation.
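A direct NumPy transcription of $K_b$ might look as follows; the hyperparameter values (e.g., $P = 52$ for yearly seasonality on a weekly grid) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def baseline_kernel(t, sigma_per=1.0, ell_per=1.0, P=52.0,
                    sigma_lin=0.1, eps_b=1e-3):
    """Periodic + linear covariance with a diagonal nugget:
    K_b(t,t') = sig_per^2 exp[-2 sin^2(pi(t-t')/P) / ell_per^2]
              + sig_lin^2 t t' + eps_b^2 delta_{tt'}."""
    diff = t[:, None] - t[None, :]
    periodic = sigma_per**2 * np.exp(-2.0 * np.sin(np.pi * diff / P)**2 / ell_per**2)
    linear = sigma_lin**2 * np.outer(t, t)
    return periodic + linear + eps_b**2 * np.eye(len(t))

t = np.arange(1, 201, dtype=float)   # e.g., 200 weekly observations
Kb = baseline_kernel(t)
```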

Excitation GP:

Excitation decays over lags and is typically concentrated at short lags. To enforce both nonstationarity and heavy attenuation, the prior combines an exponentially decaying amplitude $a(d)$ with a warped input $g(d)$:

$$a(d) = \sigma_f \exp(-\beta d/2), \qquad g(d) = \frac{1 - \exp(-\beta d)}{\beta \ell_f}$$

$$K_f(d, d') = a(d)\, a(d') \exp\left[-\frac{1}{2}\big(g(d) - g(d')\big)^2\right] + \epsilon_f^2\, \delta_{dd'}$$

This combination (amplitude decay and input warping) keeps the kernel highly flexible at short lags (detailed modeling of rapid excitation), while the effect vanishes smoothly as $d \to \infty$.
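The excitation covariance is equally direct to transcribe; again, the hyperparameter values below are illustrative assumptions:

```python
import numpy as np

def excitation_kernel(d, sigma_f=1.0, beta=0.5, ell_f=1.0, eps_f=1e-3):
    """Nonstationary covariance K_f(d,d') = a(d) a(d') exp[-(g(d)-g(d'))^2 / 2]
    + eps_f^2 delta_{dd'}, with decaying amplitude a and warped input g."""
    a = sigma_f * np.exp(-beta * d / 2.0)
    g = (1.0 - np.exp(-beta * d)) / (beta * ell_f)
    K = np.outer(a, a) * np.exp(-0.5 * (g[:, None] - g[None, :])**2)
    return K + eps_f**2 * np.eye(len(d))

d = np.arange(1, 101, dtype=float)   # lags 1..100
Kf = excitation_kernel(d)
```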

3. Inference via Collapsed Gaussian Process Representation

A central innovation is the "collapsed" latent variable representation. Instead of learning $b$ and $f$ separately, GP-DHP marginalizes over the independent priors, inducing a prior over the full latent intensity trajectory $l = [l(1), \ldots, l(T)]^\top$:

$$K = K_b + X K_f X^\top$$

where $X$ is a strict lower-triangular lagged-count design matrix with $X_{t,d} = N(t-d)$ for $d < t$.
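Assembling the collapsed covariance is then a matter of building the lagged-count design matrix. This sketch reuses the kernel functions above and truncates lags at $T - 1$:

```python
import numpy as np

def lagged_design(counts):
    """Strict lower-triangular design matrix: X[t, d-1] = N(t-d) for
    lags d = 1..t (0-based rows), zero elsewhere."""
    T = len(counts)
    X = np.zeros((T, T - 1))
    for t in range(T):
        for d in range(1, t + 1):
            X[t, d - 1] = counts[t - d]
    return X

counts = np.random.default_rng(0).poisson(1.0, size=200)
X = lagged_design(counts)                       # shape (T, T-1)
t = np.arange(1, len(counts) + 1, dtype=float)
d = np.arange(1, len(counts), dtype=float)
Kb, Kf = baseline_kernel(t), excitation_kernel(d)
K = Kb + X @ Kf @ X.T   # collapsed prior covariance over l = [l(1), ..., l(T)]
```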

Maximum A Posteriori (MAP) Estimation: The latent trajectory $l$ is optimized with respect to the penalized log-posterior,

$$\log p(l \mid N) = \sum_t \big[N(t)\log \lambda(t) - \lambda(t)\big] - \frac{1}{2}\, l^\top K^{-1} l$$

subject to $\lambda(t) = \max\{0, l(t)\}$. The nonnegativity constraint is imposed via rectification.
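A dense-matrix sketch of the MAP step follows. It optimizes the penalized objective directly with L-BFGS and is meant only to show the objective's shape, not the paper's near-linear-time scheme; the jitter and log-guard constants and the initialization are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(l, counts, K_inv, eps=1e-10):
    """Negative of sum_t [N(t) log lambda(t) - lambda(t)] - (1/2) l' K^{-1} l,
    with lambda(t) = max(0, l(t)); eps guards log(0) where l is rectified.
    Note the objective is nonsmooth at l = 0, so this is only a sketch."""
    lam = np.maximum(l, 0.0)
    loglik = np.sum(counts * np.log(lam + eps) - lam)
    return -(loglik - 0.5 * l @ K_inv @ l)

K_inv = np.linalg.inv(K + 1e-8 * np.eye(len(K)))   # K from the sketch above
res = minimize(neg_log_posterior, x0=np.maximum(counts, 0.5).astype(float),
               args=(counts, K_inv), method="L-BFGS-B")
l_map = res.x
```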

Efficient $\mathcal{O}(T \log T)$ computations are enabled by:

  • The convolutional structure of $X$, which permits fast matrix-vector multiplication via the FFT, as sketched below.
  • Inducing-grid approximations for the GP covariances, especially for the stationary parts of $K_f$.
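The first point is worth making concrete: multiplying by $X$ is a causal convolution of the count series with the excitation vector, so it never requires the explicit $T \times T$ matrix. A minimal sketch:

```python
import numpy as np

def excitation_matvec(counts, f):
    """Compute (X f)(t) = sum_{d=1}^{t-1} N(t-d) f(d) for all t at once,
    in O(T log T), as a zero-padded causal convolution."""
    T = len(counts)
    kernel = np.concatenate(([0.0], np.asarray(f, dtype=float)[:T - 1]))  # lag 0 is zero
    n = 2 * T                                    # pad to avoid circular wraparound
    spec = np.fft.rfft(np.asarray(counts, dtype=float), n) * np.fft.rfft(kernel, n)
    return np.fft.irfft(spec, n)[:T]

# Agrees with the explicit design matrix: excitation_matvec(counts, f) == X @ f
```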

A Laplace approximation around the MAP solution can be used to quantify posterior uncertainty.
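A dense-matrix sketch of the Laplace step, treating the rectified region as contributing zero curvature (an assumption about boundary handling, not a detail from the paper):

```python
import numpy as np

def laplace_covariance(l_map, counts, K, eps=1e-10):
    """Posterior covariance approx. (K^{-1} + W)^{-1}, where W is the diagonal
    negative Hessian of the Poisson log-likelihood: W_tt = N(t) / lambda(t)^2
    where l > 0, and 0 where the intensity is rectified."""
    lam = np.maximum(l_map, 0.0)
    w = np.where(l_map > 0, counts / (lam + eps)**2, 0.0)
    K_inv = np.linalg.inv(K + 1e-8 * np.eye(len(K)))
    return np.linalg.inv(K_inv + np.diag(w))

Sigma = laplace_covariance(l_map, counts, K)
post_sd = np.sqrt(np.diag(Sigma))   # pointwise uncertainty bands for l
```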

4. Closed-form Projection for Interpretability

After inference yields the latent trajectory $l^*$, practitioners require an interpretable separation into $b(t)$ and $f(d)$. GP-DHP performs a closed-form RKHS projection (a minimum-norm decomposition) from $l^*$ back onto the two function spaces, satisfying $l^* = b + X f$:

$$\hat{b} = K_b K^{-1} l^*, \qquad \hat{f} = K_f X^\top K^{-1} l^*$$

This splitting is justified as the unique solution minimizing the joint RKHS norm, and it provides direct, interpretable baseline and excitation curves compatible with the learned intensity.

Diagnostics such as the branching ratio, $\sum_{d=1}^\infty \hat{f}(d)_+$, are available for stability assessment.
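Both the projection and the branching-ratio diagnostic reduce to a single linear solve against the collapsed covariance. This sketch reuses K, Kb, Kf, X, and l_map from the sketches above; truncating the branching-ratio sum at the maximum modeled lag is an assumption:

```python
import numpy as np

# Minimum-norm RKHS split of the MAP trajectory: l* = b_hat + X f_hat.
alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), l_map)   # K^{-1} l*
b_hat = Kb @ alpha                # baseline component:   b_hat = K_b K^{-1} l*
f_hat = Kf @ X.T @ alpha          # excitation component: f_hat = K_f X^T K^{-1} l*

# Branching ratio: sum of the positive part of the excitation curve,
# truncated here at the maximum modeled lag.
branching_ratio = np.sum(np.maximum(f_hat, 0.0))
is_stable = branching_ratio < 1.0   # subcritical (stable) if strictly below one
```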

5. Empirical Evaluation: Simulation and Applications

In simulation studies, the model was subjected to scenarios with diverse excitation kernels (e.g., geometric, inverted parabolic, bimodal Gaussian mixture, power-law decay) and baselines (constant, linear, or periodic). GP-DHP recovered both types of structure, including nonstandard and multimodal patterns, with high fidelity.

Case studies include:

  • U.S. Terrorism Incidents: Daily event counts over 21 years. GP-DHP outperformed standard parametric baselines (constant, linear, seasonal with negative binomial kernel) in test log-likelihood, captured abrupt bursts and long stretches of inactivity, and yielded interpretable branching ratio estimates supporting process stability.
  • Cryptosporidiosis Outbreak Surveillance: Weekly count data. GP-DHP, while yielding modest predictive gains over classical baselines, provided a useful decomposition into slow-moving environmental/seasonal effects (baseline) and sharp, transient outbreaks (excitation).

6. Scalability and Real-world Suitability

GP-DHP achieves practical scalability:

  • The collapsed representation reduces the inference cost and memory footprint from $\mathcal{O}(T^2)$ to $\mathcal{O}(T \log T)$.
  • FFT-based convolution and structured kernel interpolation enable efficient optimization even for series with $T$ on the order of $10^5$.
  • Storage is linear in $T$.

The closed-form projections, diagnostic tools, and robust empirical evaluations demonstrate that GP-DHP is suitable for large-scale event surveillance, social media diffusion analysis, epidemiology, and security informatics, particularly where domain experts demand interpretability, flexible recovery of nonstandard temporal phenomena, and operational efficiency.


GP-DHP extends the toolkit for discrete self-exciting modeling by providing a principled, scalable, and highly adaptive Gaussian process approach, enabling both sophisticated structure discovery and transparent process decomposition in real-world applications (Brisley et al., 26 Sep 2025).
