Infinite Latent Events Model (ILEM)
- The Infinite Latent Events Model (ILEM) is a nonparametric hierarchical Bayesian framework that infers an infinite set of latent binary events and their causal interdependencies in time-series data.
- It leverages a hierarchy of Dirichlet processes combined with a noisy-OR transition function to dynamically build infinite-dimensional Bayesian networks without predefining the number of events.
- Empirical results demonstrate ILEM's effectiveness in applications such as sound factorization, network topology identification, and video game analysis, showcasing its scalability and precise causal inference.
The Infinite Latent Events Model (ILEM) is a nonparametric hierarchical Bayesian framework developed for structure discovery in discrete time-series data. It allows inference of a countably infinite set of latent binary events, their activations over time, and their causal interdependencies. By leveraging a hierarchy of Dirichlet processes (DPs) and a noisy-OR transition function, ILEM constructs infinite-dimensional dynamic Bayesian networks without requiring prespecification of the number of latent factors or causal links. This formulation facilitates principled structure learning in settings where both the event set and their interactions are unknown, with demonstrated efficacy in domains such as sound factorization, network topology identification, and complex video game environments (Wingate et al., 2012).
1. Observed Data and Latent Variable Representation
ILEM models time-series observations y_1, ..., y_T, with each y_t ∈ R^D (or y_t ∈ {0,1}^D for binary images), as resulting from the activation of an infinite set of latent events. These activations are expressed as binary vectors z_t ∈ {0,1}^∞, denoting which latent events are active at time t. Collectively, the latent event matrix Z provides a temporal map of event activations. In parallel, actual cause counts c_{t,i,j} are recorded, indicating the number of times event i at time t induces event j at time t+1. This explicit representation supports direct modeling of temporally indexed causal structure.
2. Nonparametric Priors and Generative Process
ILEM imposes nonparametric priors on event occurrence and transitions using a hierarchical DP construction. The event activation matrix Z (equivalently, the collection of vectors z_t) can alternatively be viewed under an Indian Buffet Process prior. In stick-breaking notation, the global event palette is defined by weights β ~ GEM(γ). For each event i (including a distinguished "background" event), transition probabilities π_i ~ DP(α, β) specify how parent i generates children. A separate background DP produces activations from an always-on source.
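A draw β ~ GEM(γ) can be simulated by truncated stick-breaking. A minimal sketch, where the truncation level K and the function name are illustrative choices, not from the paper:

```python
import numpy as np

def gem_weights(gamma, K, rng):
    """Truncated stick-breaking draw of the global event palette,
    beta ~ GEM(gamma): v_k ~ Beta(1, gamma),
    beta_k = v_k * prod_{l<k} (1 - v_l).
    Truncation at K sticks is for illustration only."""
    v = rng.beta(1.0, gamma, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
beta = gem_weights(gamma=2.0, K=50, rng=rng)
```

Larger γ spreads mass over more sticks, i.e. a richer prior event palette; the leftover mass 1 − Σ_k β_k accounts for the infinitely many events not yet instantiated.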
The noisy-OR transition function emerges by marginalizing out the per-step cause counts:

P(z_{t+1,j} = 1 | z_t) = 1 - prod_{i : z_{t,i} = 1} (1 - w_{ij})

where w_{ij} = 1 - exp(-λ π_{ij}). Thus, the probability of event j firing at time t+1 is "or-combined" over all parents active at time t.
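The or-combination can be computed directly from the transition weights. A minimal sketch, assuming a finite slice of events and the Poisson-thinning form w_ij = 1 - exp(-λ π_ij) (so parent i fails to cause j with probability exp(-λ π_ij)):

```python
import numpy as np

def noisy_or_prob(z_t, pi, lam):
    """P(z_{t+1,j} = 1 | z_t) = 1 - prod_{i active} (1 - w_ij),
    with w_ij = 1 - exp(-lam * pi_ij): parent i's children of type j
    are Poisson(lam * pi_ij), so i fails to cause j w.p. exp(-lam * pi_ij)."""
    w = 1.0 - np.exp(-lam * pi)           # per-edge activation probabilities
    active = z_t.astype(bool)
    # product over active parents of the per-edge failure probabilities
    return 1.0 - np.prod(1.0 - w[active], axis=0)

pi = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.8, 0.1]])
z = np.array([1, 1])
p = noisy_or_prob(z, pi, lam=2.0)
```

With both parents active, 1 - p_j = exp(-λ (π_0j + π_1j)): the noisy-OR over independent Poisson parents collapses to a single Poisson with summed rates.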
The generative process proceeds by sampling children from active parents via Poisson(λ) draws, children from the background via its own Poisson draws, and assigning activations via child cause counts. Each parent i's child is selected probabilistically: existing children are favored in proportion to their prior cause counts, while innovations allow the dynamic introduction of new events through DP mechanisms.
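The per-step sampling logic can be sketched as follows. This simplified version keeps only the per-parent CRPs and treats every innovation as a brand-new event, omitting the paper's global (HDP-level) reuse step; all names are illustrative:

```python
import numpy as np

def generative_step(active, counts, alpha, lam, rng):
    """One transition of a simplified generative sketch: each active parent i
    draws Poisson(lam) children; each child is an existing child j of i with
    probability c_ij / (alpha + c_i.), or a brand-new event with probability
    alpha / (alpha + c_i.). `counts[i]` maps child id -> cause count."""
    all_ids = [*counts] + [j for c in counts.values() for j in c]
    next_id = 1 + max(all_ids, default=-1)
    fired = set()
    for i in active:
        c = counts.setdefault(i, {})
        for _ in range(rng.poisson(lam)):
            total = sum(c.values())
            if rng.random() < alpha / (alpha + total):
                j, next_id = next_id, next_id + 1   # innovate: new event
            else:
                ids = list(c)
                p = np.array([c[j] for j in ids], dtype=float)
                j = ids[rng.choice(len(ids), p=p / p.sum())]
            c[j] = c.get(j, 0) + 1                  # rich-get-richer tally
            fired.add(j)
    return fired

fired = generative_step(active=[0], counts={0: {1: 2}},
                        alpha=1.0, lam=2.0, rng=np.random.default_rng(0))
```

The rich-get-richer increment on `c[j]` is what concentrates each parent's children on a sparse, stable set of effects as the chain evolves.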
Observation likelihood is typically modeled using a linear-Gaussian mapping:

y_t = W z_t + ε_t,  ε_t ~ N(0, σ² I),

with a matrix-Gaussian prior over W. Marginalization over W yields a closed-form expression for P(Y | Z).
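Under the simplifying assumption of i.i.d. N(0, σ_w²) entries for W (a special case of the matrix-Gaussian prior), the marginal likelihood takes a closed form. A sketch:

```python
import numpy as np

def collapsed_loglik(Y, Z, sigma_w=1.0, sigma=0.1):
    """Marginal log p(Y | Z) for y_t = W z_t + noise, with the entries of W
    integrated out under i.i.d. N(0, sigma_w^2) priors (a simplification of
    the matrix-Gaussian prior). Stacking rows, Y = Z W^T + E, so each of the
    D columns of Y is i.i.d. N(0, K) with K = sigma_w^2 Z Z^T + sigma^2 I."""
    T, D = Y.shape
    K = sigma_w**2 * (Z @ Z.T) + sigma**2 * np.eye(T)
    _, logdet = np.linalg.slogdet(K)
    quad = np.sum(Y * np.linalg.solve(K, Y))   # sum_d y_d^T K^{-1} y_d
    return -0.5 * (D * (T * np.log(2 * np.pi) + logdet) + quad)

Y = np.arange(8.0).reshape(4, 2)
Z = np.array([[1, 0], [0, 1], [1, 1], [0, 0]], dtype=float)
ll = collapsed_loglik(Y, Z)
```

Because W never appears, a Gibbs sweep can score candidate activations z_t directly against the data, which is what makes collapsed sampling over Z practical.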
3. Causal Link Structure
ILEM eschews a fixed adjacency matrix, instead maintaining cause counts c_{t,i,j} at each timepoint. Parent event i tallies total causes c_{i,j} = Σ_t c_{t,i,j}, guiding subsequent child selection via the Chinese Restaurant Process (CRP) predictive rule:
- P(child = j | parent = i) ∝ c_{i,j} for an existing child j,
- P(child is novel | parent = i) ∝ α.
If a novel child is selected, global popularity (cause counts aggregated across all parents) determines whether an existing event is reused, or a genuinely new event is introduced (with probability proportional to γ). This dynamic causal link construction yields a sparse, emergent network: its size and topology remain unconstrained a priori and adapt to the data.
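The per-parent predictive rule can be written out directly. A minimal sketch with illustrative names:

```python
def crp_child_probs(cause_counts, alpha):
    """CRP predictive rule for one parent i: an existing child j is chosen
    with probability c_ij / (alpha + c_i.), and a novel child with
    probability alpha / (alpha + c_i.), where c_i. = sum_j c_ij."""
    total = sum(cause_counts.values())
    existing = {j: c / (alpha + total) for j, c in cause_counts.items()}
    p_new = alpha / (alpha + total)
    return existing, p_new

# parent has caused event 0 three times and event 2 once
probs, p_new = crp_child_probs({0: 3, 2: 1}, alpha=1.0)
```

The novel-child mass α / (α + c_i.) shrinks as a parent accumulates evidence, so well-observed parents settle on a fixed child set while new parents stay exploratory.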
4. Bayesian Inference and Algorithmic Details
Posterior inference in ILEM is conducted via Markov chain Monte Carlo (MCMC), specifically collapsed Gibbs sampling over the activations Z and cause counts C while marginalizing W and the transition weights π. CRP likelihoods and Poisson distributions govern conditional updates. For scalability, C may be binarized to record solely the presence or absence of a link, without multiplicity.
Metropolis-Hastings moves refine mixing:
- Event-rewrite proposals: relabel an active event, updating all associated cause-count entries.
- Parent-swap proposals: reassign an event's parentage from one active parent to another.
- Annealing the temperature during burn-in avoids poor local minima.
Likelihood computations leverage rank-one updates for the Gram matrix Z^T Z and its inverse/determinant under the linear-Gaussian model. Initialization can employ nonnegative matrix factorization (NMF). Only a finite set of events is active at any finite time T due to finite expected Poisson draws, with slice sampling or truncation bounding the number of instantiated events for implementation.
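The rank-one refreshes mentioned above are typically Sherman-Morrison updates paired with the matrix determinant lemma, so that flipping one entry of Z costs O(n²) rather than a fresh O(n³) factorization. An illustrative sketch, not the paper's exact bookkeeping:

```python
import numpy as np

def rank_one_update(Ainv, logdet, u, v):
    """Given A^{-1} and log|A|, return (A + u v^T)^{-1} and log|A + u v^T|
    via Sherman-Morrison and the matrix determinant lemma:
      (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)
      |A + u v^T| = |A| (1 + v^T A^{-1} u)   (assumes the factor is > 0)."""
    Au = Ainv @ u
    denom = 1.0 + v @ Au
    new_inv = Ainv - np.outer(Au, v @ Ainv) / denom
    return new_inv, logdet + np.log(denom)
```

In a collapsed sampler, u and v would encode the change to Z^T Z induced by toggling a single activation, keeping each Gibbs proposal cheap.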
5. Scalability and Infinite-Dimensional Analysis
ILEM's framework supports a countably infinite event space. In practice, only a finite number of events are realized over any finite dataset, with new events introduced via DP innovation at parent draws (with probability proportional to α) and globally from the base measure (proportional to γ). The hierarchical DP architecture and Poisson child draws ensure computational tractability. For exact inference, slice sampling (Walker, 2007) allows adaptive control of the active event set's cardinality.
6. Empirical Demonstrations and Application Domains
ILEM has been empirically validated across several domains:
| Domain | Dataset Description | ILEM Outputs |
|---|---|---|
| Causal Soundscape | 52 time-steps, 8 kHz sound clips; events: frog, cricket, elephant | 10 prototypes, directed causal links |
| Space Invaders | 400 frames, 15×15 binary images (alien, turret, bullet, explosion) | Prototypes match sprites, causal chains inferred |
| Network Topology ("SysAdmin") | 400 time-steps, networked machines (crash patterns, unknown sparse graph) | High link recovery rates (12/12, 19/20, 40/42) |
In the network topology context, ILEM outperformed one-step co-occurrence statistics, inferred hidden nodes such as "rogue" machines, and scaled to larger networks with nearly perfect link recovery on sparse random graphs.
7. Strengths, Limitations, and Prospective Extensions
ILEM simultaneously learns the cardinality and sequence of latent events, the factored causal link structure, and the observation model prototypes. Its nonparametric Bayesian foundation obviates manual specification of event dimensionality and ad hoc regularization. Noisy-OR semantics provide interpretable, excitatory causal relationships.
Identified limitations include potentially slow MCMC mixing for large T and large event sets, reliance on excitatory (OR-like) links, sensitivity to hyperparameters (α, γ, λ), and fixed observation models. Extensions under active consideration include:
- Semi-Markov/duration modeling for event persistence.
- General conditional-probability tables beyond noisy-OR, including inhibitory effects.
- Variational or slice-sampling inference for improved scalability.
- Hierarchical/deep architectures allowing nested sub-events.
- Nonlinear or deep network observation models (e.g., variational autoencoders).
The ILEM unifies nonparametric structure discovery for time-series causal networks, adaptively learning sparse, interpretable event-to-event causality and event prototypes without requiring explicit enumeration of latent dimensions, and achieves state-of-the-art causal inference in multiple domains (Wingate et al., 2012).