
Probabilistic Language-Audio Pre-Training

Updated 29 November 2025
  • The paper introduces a novel framework that models language and audio as Gaussian distributions to capture many-to-many relations and inherent uncertainty.
  • It employs hierarchical inclusion and mask repulsive losses to enforce semantic containment and prevent embedding collapse in the joint space.
  • The method achieves state-of-the-art performance in audio-text retrieval and hierarchical reasoning on modest datasets without relying on large-scale external corpora.

Probabilistic Language-Audio Pre-Training (ProLAP) is a joint representation learning framework that encodes language and audio as Gaussian distributions in a shared embedding space. This probabilistic framework is designed to capture the intrinsic many-to-many relationships between audio recordings and textual descriptions, as well as the natural uncertainty and hierarchical inclusions found in real-world data. ProLAP introduces two key objectives—hierarchical inclusion loss and mask repulsive loss—to enable fine-grained modeling of semantic hierarchy and uncertainty, and achieves state-of-the-art performance on audio-text retrieval and hierarchical reasoning tasks, without requiring large-scale external data (Manabe et al., 21 Oct 2025).

1. Motivation and Problem Setting

Most existing language–audio embedding models (e.g., CLAP) employ deterministic mappings, assigning each input a single point in the joint embedding space. This enforces a one-to-one correspondence between an input and its embedding, failing to account for the many-to-many relations that occur in practice: an audio clip may have multiple equally valid captions of varying specificity (such as “string instrument”, “guitar”, or “acoustic guitar”), and captions themselves admit many paraphrases. Deterministic embeddings cannot capture this multiplicity or represent the associated epistemic uncertainty.

To address this, ProLAP models each input as a probability distribution (specifically, a Gaussian) in the joint space. This probabilistic representation allows explicit modeling of the spread (uncertainty due to ambiguous or multiple descriptions) and enables encoding of hierarchical semantic relations (e.g., the inclusion of specific in general concepts). This framework is particularly important in settings where the language–audio relationship is inherently uncertain and hierarchical (Manabe et al., 21 Oct 2025).

2. Probabilistic Embedding Design

2.1 Gaussian Embedding Parameterization

Each audio or text input is mapped to a Gaussian random variable

$$Z \sim \mathcal{N}(\mu, \Sigma),$$

where $\mu \in \mathbb{R}^d$ is the mean embedding and $\Sigma = \mathrm{diag}(\sigma^2) \in \mathbb{R}^{d \times d}$ is a diagonal covariance matrix.
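
A minimal sketch of such a probabilistic projection head in PyTorch (module and dimension names are illustrative assumptions, not from the paper): the encoder's pooled feature is mapped to a mean vector and a log-variance vector, with the exponential guaranteeing a positive diagonal covariance.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Projects a pooled encoder feature to (mu, var) of a diagonal Gaussian."""

    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, embed_dim)
        self.log_var = nn.Linear(in_dim, embed_dim)  # predicts log sigma^2

    def forward(self, h: torch.Tensor):
        mu = self.mu(h)              # mean embedding, shape (..., embed_dim)
        var = self.log_var(h).exp()  # exp keeps the diagonal covariance positive
        return mu, var
```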

2.2 Corrected Similarity Measure

Affinity between two Gaussian embeddings $Z_a \sim \mathcal{N}(\mu_a, \Sigma_a)$ and $Z_t \sim \mathcal{N}(\mu_t, \Sigma_t)$ is measured using a corrected dot-product similarity:

$$s(Z_a, Z_t) = \mu_a^\top \mu_t - \tfrac{1}{2}\,\mathrm{tr}(\Sigma_a + \Sigma_t)$$

This measure penalizes high uncertainty and encourages mean alignment in the joint space.
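
With diagonal covariances the trace reduces to a sum over the variance vector, so the similarity is a one-line computation (a sketch; batched tensors of shape `(..., d)` are assumed):

```python
import torch

def corrected_similarity(mu_a: torch.Tensor, var_a: torch.Tensor,
                         mu_t: torch.Tensor, var_t: torch.Tensor) -> torch.Tensor:
    """s(Z_a, Z_t) = mu_a . mu_t - 0.5 * tr(Sigma_a + Sigma_t); for diagonal
    covariances the trace is the sum of the variance vectors."""
    return (mu_a * mu_t).sum(-1) - 0.5 * (var_a + var_t).sum(-1)
```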

2.3 Distance Interpretation

Instead of KL or Wasserstein divergences, ProLAP employs a closed-form corrected sampled distance (CSD) embedded in the similarity function above. This approach efficiently aligns means and controls variances, facilitating gradient-based optimization.

3. Loss Functions and Hierarchical Learning

3.1 Probabilistic Pairwise Contrastive Loss (PPCL)

For audio–text pairs, ProLAP uses a probabilistic variant of contrastive loss:

$$\mathcal{L}_\mathrm{PPCL}(Z_a, Z_t) = -\log \frac{1}{1 + \exp\big(y_{at}\,[-\alpha\, s(Z_a, Z_t) + \beta]\big)}$$

where $y_{at} = +1$ for positive pairs and $-1$ otherwise, with $\alpha, \beta$ as learnable parameters.
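
Since $-\log\frac{1}{1+e^{z}} = \mathrm{softplus}(z)$, the loss has a numerically stable one-line form (a sketch; constructing the $y_{at}$ sign matrix over a batch is assumed handled by the caller):

```python
import torch
import torch.nn.functional as F

def ppcl_loss(sim: torch.Tensor, y: torch.Tensor,
              alpha: torch.Tensor, beta: torch.Tensor) -> torch.Tensor:
    """PPCL written via softplus: -log sigmoid(-(y * (-alpha*s + beta))).
    y is +1 for positive pairs and -1 for negatives."""
    return F.softplus(y * (-alpha * sim + beta)).mean()
```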

3.2 Inclusion Loss

To enforce semantic containment (e.g., “acoustic guitar” inside “guitar”), ProLAP introduces an inclusion statistic:

$$\mathcal{H}(Z_1 \subset Z_2) = \log \int p_1^2(x)\, p_2(x)\, dx - \log \int p_1(x)\, p_2^2(x)\, dx$$

The loss is computed as a logit-linked negative log-likelihood:

$$\mathcal{L}_\mathrm{inc}(Z_1 \subset Z_2) = -\log \frac{1}{1 + \exp\big(-c\, \mathcal{H}(Z_1 \subset Z_2)\big)}$$

with $c > 0$. Cross-modal inclusion encourages audio distributions to be less uncertain than their captions; intra-modal inclusion applies between raw and masked variants.
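
For diagonal Gaussians both integrals have closed forms via the standard identities $\mathcal{N}(x;m,S)^2 = \mathcal{N}(m;m,2S)\,\mathcal{N}(x;m,S/2)$ and $\int \mathcal{N}(x;a,A)\,\mathcal{N}(x;b,B)\,dx = \mathcal{N}(a;b,A+B)$. The sketch below is derived from the definition above and is not the paper's own code; the shared $-\tfrac{d}{2}\log 2\pi$ constants cancel in the difference and are dropped.

```python
import torch
import torch.nn.functional as F

def inclusion_statistic(mu1, var1, mu2, var2):
    """Closed-form H(Z1 ⊂ Z2) for diagonal Gaussians."""
    diff2 = (mu1 - mu2) ** 2

    def log_gauss(d2, var):  # log N(a; b, diag(var)) up to the shared constant
        return -0.5 * (torch.log(var) + d2 / var).sum(-1)

    # log ∫ p1^2 p2 dx = log N(mu1; mu1, 2*S1) + log N(mu1; mu2, S1/2 + S2)
    log_p112 = -0.5 * torch.log(2 * var1).sum(-1) + log_gauss(diff2, var1 / 2 + var2)
    # log ∫ p1 p2^2 dx, by symmetry
    log_p122 = -0.5 * torch.log(2 * var2).sum(-1) + log_gauss(diff2, var2 / 2 + var1)
    return log_p112 - log_p122

def inclusion_loss(mu1, var1, mu2, var2, c: float = 1.0):
    """L_inc = -log sigmoid(c * H) = softplus(-c * H)."""
    return F.softplus(-c * inclusion_statistic(mu1, var1, mu2, var2)).mean()
```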

3.3 Hierarchical Inclusion Loss

A chain of nested random masks $M_0 \supset M_1 \supset \cdots \supset M_L$ is constructed to expose multi-level inclusion structure:

$$\mathcal{L}^h_\mathrm{inc}(Z_p) = \sum_{i=0}^{L-1} \mathcal{L}_\mathrm{inc}\big(Z_{p^{M_{i+1}}} \subset Z_{p^{M_i}}\big)$$

This incentivizes a consistent containment hierarchy in latent space.
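
One plausible way to build such a chain (an illustrative sketch; the paper's masking ratios and schedule may differ) is to repeatedly subsample the indices kept at the previous level:

```python
import torch

def nested_masks(num_tokens: int, levels: int, keep_ratio: float = 0.5):
    """Builds a chain M_0 ⊃ M_1 ⊃ ... ⊃ M_L of boolean keep-masks by
    subsampling the indices kept at the previous level."""
    masks = [torch.ones(num_tokens, dtype=torch.bool)]  # M_0 keeps everything
    kept = torch.arange(num_tokens)
    for _ in range(levels):
        n_keep = max(1, int(len(kept) * keep_ratio))
        kept = kept[torch.randperm(len(kept))[:n_keep]]  # subset of previous level
        mask = torch.zeros(num_tokens, dtype=torch.bool)
        mask[kept] = True
        masks.append(mask)
    return masks  # masks[i + 1] is True only where masks[i] is True
```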

3.4 Mask Repulsive Loss

To prevent collapse of masked embeddings, a repulsive loss pushes masked versions of inputs apart:

$$\mathcal{L}_\mathrm{MR}(Z_p, Z_q) = -\sum_{i=1}^{L-1} \log \frac{1}{1 + \exp\big(y_{pq}\,[-\alpha\, s(Z_{p^{M_i}}, Z_{q^{M_i}}) + \beta]\big)}$$

for $y_{pq} = 0$ (same sample) or $-1$ (otherwise). Gradients with respect to variances are stopped to avoid trivialization.
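
A sketch of one level's term; detaching the variances is one direct reading of "gradients with respect to variances are stopped" (an assumption about the implementation detail):

```python
import torch
import torch.nn.functional as F

def mask_repulsive_term(mu_p, var_p, mu_q, var_q, y, alpha, beta):
    """One level's repulsive term between masked embeddings. Variances are
    detached so the loss cannot be satisfied by inflating uncertainty."""
    sim = (mu_p * mu_q).sum(-1) - 0.5 * (var_p.detach() + var_q.detach()).sum(-1)
    return F.softplus(y * (-alpha * sim + beta)).mean()
```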

3.5 Variational Information Bottleneck Regularization

A small VIB regularizer, $\mathcal{L}_\mathrm{VIB}(Z)$, prevents variance collapse and supports consistent uncertainty modeling.
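
The article does not spell out the regularizer; a standard VIB term is the KL divergence to a unit Gaussian prior, which for diagonal Gaussians is (an assumed form, not confirmed by the source):

```python
import torch

def vib_loss(mu: torch.Tensor, var: torch.Tensor) -> torch.Tensor:
    """KL( N(mu, diag(var)) || N(0, I) ), a standard VIB regularizer."""
    return 0.5 * (mu ** 2 + var - torch.log(var) - 1.0).sum(-1).mean()
```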

3.6 Full Training Objective

The complete objective for a batch is:

$$\begin{aligned}
\mathcal{L} ={} & \sum_{Z_a \in \mathcal{A}} \sum_{Z_t \in \mathcal{T}} \mathcal{L}_\mathrm{PPCL}(Z_a, Z_t) \\
& + \sum_{Z_a \in \mathcal{A}} \mathcal{L}_\mathrm{intra}(Z_a) + \sum_{Z_t \in \mathcal{T}} \mathcal{L}_\mathrm{intra}(Z_t) \\
& + \lambda_3 \sum_{(Z_a, Z_t) \in (\mathcal{A}, \mathcal{T})} \mathcal{L}_\mathrm{inc}(Z_a \subset Z_t)
\end{aligned}$$

with intra-modal loss:

$$\mathcal{L}_\mathrm{intra}(Z_p) = \lambda_1 \mathcal{L}^h_\mathrm{inc}(Z_p) + \lambda_2 \sum_{Z'_p} \mathcal{L}_\mathrm{MR}(Z_p, Z'_p) + \gamma\, \mathcal{L}_\mathrm{VIB}(Z_p)$$

ProLAP trains effectively with small values for these loss-weight hyperparameters and $L = 3$ mask levels.
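
Wiring the intra-modal pieces together for a single sample, reusing the sketched helpers above (illustrative only; batching, the cross-modal terms, the full sum over $Z'_p$, and where exactly the VIB term attaches are assumptions):

```python
import torch

def intra_modal_loss(chain, other_chain, lam1, lam2, gamma, alpha, beta):
    """chain: list of (mu, var) for mask levels M_0..M_L of one sample;
    other_chain: same for a different sample (so y_pq = -1). Reuses the
    sketched inclusion_loss, mask_repulsive_term, and vib_loss helpers."""
    # Hierarchical inclusion: Z^{M_{i+1}} ⊂ Z^{M_i} for i = 0..L-1
    l_hier = sum(
        inclusion_loss(m1, v1, m0, v0)
        for (m0, v0), (m1, v1) in zip(chain[:-1], chain[1:])
    )
    # Mask repulsion over the intermediate levels i = 1..L-1
    l_mr = sum(
        mask_repulsive_term(mp, vp, mq, vq, y=-1.0, alpha=alpha, beta=beta)
        for (mp, vp), (mq, vq) in zip(chain[1:-1], other_chain[1:-1])
    )
    mu0, var0 = chain[0]  # VIB on the unmasked embedding (assumed)
    return lam1 * l_hier + lam2 * l_mr + gamma * vib_loss(mu0, var0)
```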

4. Training Protocol and Dataset Considerations

ProLAP fine-tunes pretrained CLAP weights for 50 epochs with batch size 256, utilizing the Adam optimizer and cosine learning rate decay (peak $1 \times 10^{-5}$) with a one-epoch warm-up. Masking is applied to 75% of features in 12.5% of each batch for intra-modal learning.
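
A hedged sketch of an equivalent optimizer and schedule in PyTorch (the helper name and epoch-granularity stepping are assumptions; the paper may step per iteration):

```python
import torch

def build_optim(model: torch.nn.Module, epochs: int = 50, warmup_epochs: int = 1):
    """Adam with peak LR 1e-5, a one-epoch linear warm-up, then cosine decay."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    scheduler = torch.optim.lr_scheduler.SequentialLR(
        optimizer,
        schedulers=[
            torch.optim.lr_scheduler.LinearLR(
                optimizer, start_factor=0.01, total_iters=warmup_epochs
            ),
            torch.optim.lr_scheduler.CosineAnnealingLR(
                optimizer, T_max=epochs - warmup_epochs
            ),
        ],
        milestones=[warmup_epochs],
    )
    return optimizer, scheduler
```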

Datasets:

  • AudioCaps: 51,308 clips (1 caption each)
  • ClothoV2: 5,930 clips (5 captions each)

No large-scale external corpus is used; hierarchical structure emerges from these relatively small sets.

Feature Encoders:

  • Audio: HTS-AT (Swin-Transformer-based), with a learnable [MASK] token head.
  • Text: GPT-2, extracting the “[CLS]” token embedding for the mean and a special “[UNC]” token for variance.

This design enables ProLAP to learn robust hierarchical uncertainty in the language–audio domain even at small data scales, a marked contrast with prior probabilistic models in vision requiring orders of magnitude more data.

5. Empirical Results and Evaluation

5.1 Audio–Text Retrieval

Tasks measured include text-to-audio and audio-to-text retrieval using Recall@1/5/10 and mAP@10. Baselines considered are deterministic CLAP with InfoNCE, CLAP+Sigmoid (SigLIP), and a ProLIP-style probabilistic variant. ProLAP consistently outperforms these alternatives, especially under cross-dataset (out-of-domain) evaluation.

5.2 Uncertainty Estimation

Text length vs. predicted uncertainty: ProLAP, with hierarchical inclusion and mask repulsive objectives, produces variance that is strongly and negatively correlated with caption length—longer (more specific) captions yield lower uncertainty. Baseline models, in contrast, show negligible variance trends with specificity.

Audio embedding visualization: Under ProLAP, masked and unmasked embeddings remain well-separated in the latent space, and their inclusion ordering reflects semantic containment, unlike the baseline where masking effects collapse.

5.3 Audio Traversal Task

A new “audio traversal” task is introduced: for each AudioCaps clip, four levels of increasingly abstract captions are generated by an LLM. A “root” ([ROOT]) embedding is defined, and embeddings are linearly interpolated from the audio to the root in 50 steps, with text retrieval performed at each step.
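
A minimal sketch of the traversal loop (cosine scoring over mean embeddings is an assumption; the paper may instead use its corrected similarity for retrieval):

```python
import torch

def audio_traversal(z_audio, z_root, caption_mus, steps=50):
    """Walks from an audio mean embedding to the [ROOT] embedding in `steps`
    linear interpolation points, retrieving the nearest caption at each point."""
    hits = []
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z_audio + t * z_root
        sims = torch.nn.functional.cosine_similarity(
            z.unsqueeze(0), caption_mus, dim=-1
        )
        hits.append(int(sims.argmax()))
    return hits  # indices of the retrieved captions along the traversal
```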

Metrics:

  • Precision: Fraction of retrieved captions matching any hierarchical level.
  • Recall@1: Likelihood of recovering the original caption at varying abstraction.
  • R@1 at most abstract level: Capturing top-level generalization.

ProLAP (hierarchical inclusion + repulsion) significantly improves precision (~27.3% vs. 12.8% for CLAP/SigLIP) and recall performance.

5.4 Ablation and Analysis

  • Hierarchical inclusion alone: Yields substantial gains in traversal precision (13.5% → 23.3%) and inclusion accuracy (level-1 includes level-4: 63% → 83%).
  • Mask repulsive alone: Modest or negative effect.
  • Both losses: Achieve the highest hierarchical and retrieval performance (precision ~27.3%, inclusion-test ~89.5%).

6. Analysis, Limitations, and Open Questions

Probabilistic embeddings with hierarchical inclusion loss robustly capture the many-to-many mapping and semantic containment between audio and text. The mask repulsive objective prevents collapse under masking, making multi-scale inclusion feasible. ProLAP demonstrates notable data efficiency, achieving meaningful hierarchical uncertainty structure with tens of thousands of training pairs—unlike probabilistic vision models trained on billions of samples.

Open questions and limitations:

  • Only diagonal-covariance Gaussian distributions are explored; richer mixture models may further improve uncertainty representation.
  • The cross-modal inclusion loss weight $\lambda_3$ must remain very small to avoid harming retrieval, indicating sensitivity and the need for balanced regularization.
  • Scalability to massive audio–text corpora and generalization to tri-modal setups (audio, text, video) remain open research directions.

7. Implications, Extensions, and Future Work

ProLAP systematically extends the CLAP deterministic encoder architecture by embedding input distributions, allowing explicit modeling of many-to-many semantic relations and fine-grained hierarchical containment. Two architectural losses—hierarchical inclusion and mask repulsive—are central to learning both semantic uncertainty and hierarchy.

Empirical outcomes indicate consistent improvements in retrieval and hierarchical tasks over deterministic baselines, and demonstrate that the learned uncertainties are meaningful in practical evaluation (e.g., audio traversal, inclusion tests). Future work may explore non-Gaussian or multimodal embedding families, alternative divergence measures (e.g., Wasserstein), and scaling to richer benchmarks such as WavCaps and Auto-ACD, as well as the extension to unified audio–video–language pre-training (Manabe et al., 21 Oct 2025).

References

  • Manabe et al., “Probabilistic Language-Audio Pre-Training,” 21 Oct 2025.
