Representation Learning for Distributional Perturbation Extrapolation (2504.18522v1)

Published 25 Apr 2025 in stat.ML and cs.LG

Abstract: We consider the problem of modelling the effects of unseen perturbations such as gene knockdowns or drug combinations on low-level measurements such as RNA sequencing data. Specifically, given data collected under some perturbations, we aim to predict the distribution of measurements for new perturbations. To address this challenging extrapolation task, we posit that perturbations act additively in a suitable, unknown embedding space. More precisely, we formulate the generative process underlying the observed data as a latent variable model, in which perturbations amount to mean shifts in latent space and can be combined additively. Unlike previous work, we prove that, given sufficiently diverse training perturbations, the representation and perturbation effects are identifiable up to affine transformation, and use this to characterize the class of unseen perturbations for which we obtain extrapolation guarantees. To estimate the model from data, we propose a new method, the perturbation distribution autoencoder (PDAE), which is trained by maximising the distributional similarity between true and predicted perturbation distributions. The trained model can then be used to predict previously unseen perturbation distributions. Empirical evidence suggests that PDAE compares favourably to existing methods and baselines at predicting the effects of unseen perturbations.

Summary

  • The paper proposes a novel latent variable framework with the Perturbation Distribution Autoencoder (PDAE) that extrapolates distributional responses under unseen perturbations.
  • It leverages additive latent shifts and identifiability theory to guarantee correct extrapolation when training perturbations are sufficiently diverse.
  • Experimental results on synthetic data validate superior in-distribution performance while highlighting challenges in out-of-distribution decoder generalization.

The paper "Representation Learning for Distributional Perturbation Extrapolation" (2504.18522) tackles the problem of predicting the distribution of observations (like RNA sequencing data) under unseen combinations of perturbations (like gene knockdowns or drug treatments). This is a challenging extrapolation task, particularly relevant in fields like single-cell biology where exhaustive experimentation is infeasible. The authors propose a principled approach based on a latent variable model and a novel training method, the Perturbation Distribution Autoencoder (PDAE).

Problem Formulation

The core problem is framed as a distributional regression task: learning a mapping from a perturbation label vector $l \in \mathbb{R}^K$ to the distribution of observations $P_{X|l}$. The available data consists of datasets $D_e = ((x_{e,i})_{i=1}^{N_e}, l_e)$ for $M+1$ known perturbation conditions, where $x_{e,i}$ are i.i.d. samples from $P_{X|l_e}$. The goal is to predict $P_{X|l_{test}}$ for $l_{test} \notin \{l_0, \dots, l_M\}$ without any data from $l_{test}$.
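As a concrete illustration of this setup (not from the paper; all shapes and names are hypothetical), the training data can be held as a list of per-condition sample matrices paired with their label vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d_X, K = 50, 4            # observation dimension and number of elementary perturbations

# Hypothetical layout: one (samples, label) pair per known perturbation condition.
datasets = []             # D_e = ((x_{e,i})_{i=1..N_e}, l_e), for e = 0, ..., M
for n_e in (200, 150, 180):
    X_e = rng.normal(size=(n_e, d_X))                 # placeholder draws from P_{X|l_e}
    l_e = rng.integers(0, 2, size=K).astype(float)    # e.g. binary knockdown indicators
    datasets.append((X_e, l_e))

# Task: predict the full distribution P_{X|l_test} for a label never seen in training.
l_test = np.array([1.0, 1.0, 0.0, 0.0])
```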

Proposed Generative Model

The paper posits a generative process where perturbations cause additive mean shifts in a $d_Z$-dimensional latent space. The process for an observation $X_{e,i}$ under perturbation $l_e$ is:

  1. Sample a basal latent state $Z^{base}_{e,i} \sim P_Z$.
  2. Compute the perturbed latent state $Z^{pert}_{e,i} = Z^{base}_{e,i} + W l_e$, where $W \in \mathbb{R}^{d_Z \times K}$ is a perturbation matrix encoding the effects of elementary perturbations.
  3. Sample noise $\epsilon_{e,i} \sim Q_\epsilon$.
  4. Generate the observation $X_{e,i} = f(Z^{pert}_{e,i}, \epsilon_{e,i})$ via a stochastic mixing function (decoder) $f: \mathbb{R}^{d_Z} \times \mathbb{R}^{d_\epsilon} \to \mathbb{R}^{d_X}$.

This model assumes that the effect of combining perturbations is simply the sum of their individual effects in the latent space, a form of compositional structure.
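The following toy simulation (a minimal sketch; the mixing function, dimensions, and distributions are illustrative placeholders, not the paper's) makes the four steps and the additive composition explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
d_Z, d_X, d_eps, K = 2, 10, 2, 3                 # illustrative dimensions

W = rng.normal(size=(d_Z, K))                    # latent effects of elementary perturbations
A = rng.normal(size=(d_X, d_Z))                  # weights of a toy mixing function f

def f(Z_pert, eps):
    """Placeholder stochastic mixing function f: R^{d_Z} x R^{d_eps} -> R^{d_X}."""
    return np.tanh(Z_pert @ A.T) + 0.1 * np.pad(eps, ((0, 0), (0, d_X - d_eps)))

def sample_condition(l_e, n):
    """Draw n observations under perturbation label l_e following the four steps."""
    Z_base = rng.normal(size=(n, d_Z))           # 1. basal latent state Z_base ~ P_Z
    Z_pert = Z_base + W @ l_e                    # 2. additive mean shift by W l_e
    eps = rng.normal(size=(n, d_eps))            # 3. noise eps ~ Q_eps
    return f(Z_pert, eps)                        # 4. observation X = f(Z_pert, eps)

# Additivity: applying two elementary perturbations together shifts latents by W(l_1 + l_2).
X_combo = sample_condition(np.array([1.0, 1.0, 0.0]), n=100)   # shape (100, d_X)
```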

Theoretical Results: Identifiability and Extrapolation

Under the assumption of a deterministic and invertible decoder ($f: \mathbb{R}^{d_Z} \to \mathbb{R}^{d_X}$), Gaussian basal latents ($P_Z$), and sufficient diversity in the training perturbations (specifically, that the matrix of relative latent shifts $W\,[l_1 - l_0, \dots, l_M - l_0]$ has full row rank $d_Z$), the paper proves:

  1. Affine Identifiability: The latent representation (via the decoder $f$) and the relative perturbation effects (the latent shifts $W(l_e - l_0)$) are identifiable up to an affine transformation. This means that different model parameters $(f, W, P_Z, Q_\epsilon)$ can induce the same observed distributions $P_{X|l_e}$ only if they are related by a specific affine mapping in the latent space.
  2. Extrapolation Guarantees: This identifiability implies that the distribution $P_{X|l_{test}}$ for an unseen perturbation $l_{test}$ is uniquely determined if the relative perturbation vector $(l_{test} - l_0)$ lies within the linear span of the relative training perturbation vectors $\{l_e - l_0\}_{e \in [M]}$. This provides a theoretical basis for predicting distributions for unseen linear combinations of training perturbations.

The practical implication is that if the true data generating process follows this structure, and the training data satisfies the diversity condition, a model capable of recovering this structure should be able to generalize reliably to unseen, but compositionally related, perturbations.
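In practice, whether a candidate test perturbation enjoys this guarantee can be checked with a simple least-squares test on the relative label vectors; the helper below is a hypothetical sketch of that check, not part of the paper's code:

```python
import numpy as np

def in_training_span(l_test, l_train, l_ctrl, tol=1e-8):
    """Check whether (l_test - l_0) lies in span{ l_e - l_0 : e = 1, ..., M }."""
    rel_train = np.stack([l_e - l_ctrl for l_e in l_train], axis=1)   # shape (K, M)
    rel_test = l_test - l_ctrl
    coef, *_ = np.linalg.lstsq(rel_train, rel_test, rcond=None)
    return np.linalg.norm(rel_train @ coef - rel_test) < tol

# Example: two single knockdowns seen in training; their combination is covered,
# while a knockdown of a third, unseen gene is not.
l0 = np.zeros(3)
singles = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(in_training_span(np.array([1.0, 1.0, 0.0]), singles, l0))   # True
print(in_training_span(np.array([0.0, 0.0, 1.0]), singles, l0))   # False
```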

Perturbation Distribution Autoencoder (PDAE) Method

To estimate this model and perform predictions, the authors propose the PDAE. PDAE is an autoencoder-based approach trained to match observed distributions using the energy score.

  • Components:
    • Encoder ($g: \mathbb{R}^{d_X} \to \mathbb{R}^{d_Z}$): Maps observations to estimated perturbed latents.
    • Perturbation Matrix ($\hat{W} \in \mathbb{R}^{d_Z \times K}$): A trainable matrix representing the latent shifts per elementary perturbation.
    • Stochastic Decoder ($f: \mathbb{R}^{d_Z} \times \mathbb{R}^{d_\epsilon} \to \mathbb{R}^{d_X}$): Maps (perturbed) latents and noise to observations.
  • Training: PDAE is trained by minimizing a combined loss function using mini-batches of observed data.
    • Perturbation Loss: A sum of pairwise energy scores between the true empirical distribution of data from domain $h$ ($P_h$) and the simulated distribution for domain $h$ generated from data from domain $e$ ($\hat{P}_{e \to h}$), summed over all training pairs $(e, h)$. The simulated distribution $\hat{P}_{e \to h}$ is generated by encoding samples from domain $e$, applying the perturbation shift $\hat{W}(l_h - l_e)$, and decoding with noise. The energy score $ES_\beta(P, x) = \frac{1}{2}\,\mathbb{E}_{X, X' \sim P}\|X - X'\|^\beta - \mathbb{E}_{X \sim P}\|X - x\|^\beta$ is used as the distributional dissimilarity measure, leveraging its property as a strictly proper scoring rule.
    • Conditional Reconstruction Loss: A sum of domain-specific energy scores between the true empirical distribution of $X_e$ conditioned on its encoding $g(X_e)$, and the distribution induced by decoding $g(X_e)$ with noise. This helps regularize the encoder-decoder pair.
    • The perturbation matrix $\hat{W}$ can be estimated in closed form (least squares) given the encoded mean shifts, or learned jointly. Encoder and decoder parameters are updated via stochastic gradient descent.
  • Prediction: To predict the distribution for $l_{test}$, PDAE takes samples from each training domain $e$, encodes them ($g(x_{e,i})$), shifts the latent representation using the learned matrix $\hat{W}$ and the perturbation labels, $\hat{z}^{pert}_{e \to test, i} = g(x_{e,i}) + \hat{W}(l_{test} - l_e)$, and decodes these perturbed latents with noise, $\hat{x}_{e \to test, i} = f(\hat{z}^{pert}_{e \to test, i}, \epsilon)$. The final predicted distribution for $l_{test}$ is the empirical distribution of the pooled synthetic samples from all training source domains: $\hat{P}_{test} = \frac{1}{M+1} \sum_{e=0}^{M} \hat{P}_{e \to test}$. A code sketch of this pooling step and the energy-score loss follows below.
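The two core computations described above, an empirical energy-score loss between a predicted and an observed sample set and the pooled prediction step, can be sketched as follows (all function names, signatures, and the NumPy implementation are illustrative assumptions, not the authors' code; the loss is written in its negatively oriented form, so it is minimized during training):

```python
import numpy as np

def energy_score_loss(pred, obs, beta=1.0):
    """Empirical energy-score loss between predicted samples (e.g. from P_hat_{e->h})
    and observed samples (from P_h); lower values mean closer distributions."""
    d_po = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=-1) ** beta
    d_pp = np.linalg.norm(pred[:, None, :] - pred[None, :, :], axis=-1) ** beta
    return d_po.mean() - 0.5 * d_pp.mean()

def predict_unseen(encoder, decoder, W_hat, datasets, l_test, noise_dim, rng):
    """Predict the distribution for an unseen label l_test by pooling over all
    training domains: encode, shift latents by W_hat (l_test - l_e), decode with noise."""
    pooled = []
    for X_e, l_e in datasets:                         # datasets: list of (samples, label) pairs
        Z_hat = encoder(X_e)                          # estimated perturbed latents, shape (n, d_Z)
        Z_shift = Z_hat + W_hat @ (l_test - l_e)      # move latents to the test condition
        eps = rng.normal(size=(len(X_e), noise_dim))  # fresh decoder noise
        pooled.append(decoder(Z_shift, eps))          # synthetic samples for l_test
    return np.concatenate(pooled, axis=0)             # empirical estimate of P_hat_test
```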

Implementation Considerations

  • Data Size: Requires sufficient samples per perturbation condition to reliably estimate empirical distributions and energy scores.
  • Model Architecture: Encoder and decoder can be implemented using standard neural network architectures like MLPs, with dimensions appropriate for $d_X$, $d_Z$, and $d_\epsilon$.
  • Computational Cost: Training involves computing energy scores over mini-batches, which requires sampling multiple times from the decoder for each item in the batch to estimate the expectations. The perturbation loss sums over all pairs of training domains, leading to $O(M^2)$ terms per batch. This could become computationally expensive for a very large number of training domains. Standard optimization techniques like Adam can be used.
  • Hyperparameters: The trade-off parameter $\lambda$ for the reconstruction loss, the $\beta$ parameter for the energy score, learning rates, and network architecture details (number of layers, units, $d_Z$) need tuning.
  • Latent Dimensionality ($d_Z$): The theoretical results indicate identifiability requires $d_Z$ to be the true dimension of the perturbation-relevant latent space. In practice, $d_Z$ is a hyperparameter to be chosen.
  • Sufficient Diversity: The theoretical results rely on training perturbations satisfying a rank condition. While not explicitly enforced in training, performance might degrade if this condition is severely violated by the training data.

Experimental Evaluation

The paper provides preliminary results on synthetic 2D data and a robustness test with added noise dimensions.

  • On synthetic data, PDAE achieves near-perfect distributional and mean prediction on "in-distribution" (ID) test cases (where perturbed test latents fall within the support of perturbed training latents). This empirically validates the theory's extrapolation guarantees under ideal conditions.
  • Compared to baselines (Pool All, Pseudobulking, Linear Regression) and the compositional perturbation autoencoder (CPA), PDAE shows superior performance on ID test cases in terms of energy distance, MMD, and mean error.
  • On "out-of-distribution" (OOD) test cases (where perturbed test latents fall outside the training latent support), all methods perform significantly worse, though PDAE is still the least bad. This highlights a key practical challenge: the decoder must extrapolate to unseen latent inputs, which is not guaranteed by the identifiability theory that assumes full support Gaussian latents.
  • The robustness experiment with added noise shows that PDAE, when using the conditional reconstruction loss, can maintain competitive performance under low to moderate noise levels.

Practical Implications and Limitations

The PDAE provides a theoretically grounded approach for predicting distributions of biological responses to unseen perturbations, potentially reducing the need for expensive experiments. By targeting distributional prediction, it offers a richer output than methods limited to predicting means.

The main practical limitation is the decoder's ability to generalize to latent inputs outside the training data's support. While the perturbation model might correctly shift the latent representation, the decoder might map this novel latent location to an incorrect observation distribution if it hasn't seen similar inputs during training. Quantifying the uncertainty in such OOD predictions is an important area for future work. The current theory assumes a deterministic, invertible decoder and Gaussian latents for identifiability, which may not hold in real-world biological systems, although the method empirically performs well without enforcing these strictly.

In summary, the paper presents a novel, theoretically backed method for compositional distributional extrapolation, particularly promising for biological perturbation data. The PDAE implementation leverages energy scores for distribution matching and demonstrates strong performance on synthetic data, especially for test conditions compositionally related to the training data.
