
In-Context Operator Networks (ICON)

Updated 9 September 2025
  • In-context Operator Networks (ICON) are transformer-based methods that map demonstration pairs from differential equations to hidden solution operators.
  • ICON leverages pre-training on simulated datasets to infer operators implicitly without weight updates during inference, enabling efficient forward and inverse predictions.
  • GenICON extends ICON by generating full posterior predictive distributions, offering a Bayesian framework for uncertainty quantification in scientific forecasting.

In-Context Operator Networks (ICON) are a class of operator learning methods built upon transformer-based foundation models, designed to map in-context demonstration pairs drawn from differential equations to representations of the hidden solution operator. ICON leverages pre-training on diverse datasets and in-context inference via data prompts, thus amortizing operator learning across families of ordinary and partial differential equations (ODEs, PDEs) and providing a Bayesian and generative framework for uncertainty quantification in scientific prediction tasks.

1. ICON Foundations: Architecture and Functional Principle

ICON is instantiated via transformer architectures (encoder–decoder or decoder-only), with model inputs consisting of $J-1$ demonstration pairs $(y^j, z^j)$ of conditions (initial/boundary data) and solutions, all generated by the same (but unobserved) operator. The network is trained to implement

$$\mathcal{T}_\theta : Y \times (Y \times Z)^{J-1} \to Z,$$

where $Y$ is the (possibly infinite-dimensional) space of conditions and $Z$ is the solution space. The model outputs a prediction $z^J$ for a new condition $y^J$, inferring the shared context operator "on the fly". Operator learning is performed via pre-training on varied simulated datasets of condition–solution pairs from many differential equations, intentionally omitting explicit knowledge of the model parameters $\alpha$.

At inference time, ICON predicts the solution for a new condition using only a finite context of demonstration pairs. No weight updates occur during inference; instead, the transformer performs operator inference implicitly through a forward pass conditioned on the prompt.
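
To make the mapping above concrete, the following is a minimal sketch of an ICON-style forward pass, assuming conditions and solutions are discretized on a fixed grid and embedded as alternating prompt tokens; the class name, grid size, and hidden widths are illustrative assumptions, not the architecture of the original papers.

```python
import torch
import torch.nn as nn

class ICONSketch(nn.Module):
    """Sketch of an ICON-style model: a transformer that maps a prompt of
    (condition, solution) demonstration pairs plus a query condition to a
    predicted solution. Grid size and widths are illustrative choices."""

    def __init__(self, n_grid=64, d_model=128, n_heads=4, n_layers=4):
        super().__init__()
        # Each condition y^j and solution z^j is discretized on a fixed grid
        # and embedded as a single token; a role embedding distinguishes
        # condition tokens from solution tokens.
        self.embed = nn.Linear(n_grid, d_model)
        self.role = nn.Embedding(2, d_model)  # 0 = condition, 1 = solution
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.readout = nn.Linear(d_model, n_grid)

    def forward(self, demo_y, demo_z, query_y):
        # demo_y, demo_z: (batch, J-1, n_grid); query_y: (batch, n_grid)
        cond_id = torch.zeros(1, dtype=torch.long)
        sol_id = torch.ones(1, dtype=torch.long)
        cond = self.embed(demo_y) + self.role(cond_id)
        sol = self.embed(demo_z) + self.role(sol_id)
        # Interleave tokens as [y^1, z^1, ..., y^{J-1}, z^{J-1}, y^J].
        demos = torch.stack((cond, sol), dim=2).flatten(1, 2)
        query = self.embed(query_y).unsqueeze(1) + self.role(cond_id)
        h = self.backbone(torch.cat([demos, query], dim=1))
        # The hidden state at the query token is decoded into the prediction z^J.
        return self.readout(h[:, -1])
```

Pre-training would then minimize the squared prediction error of this forward pass over many sampled operators, as formalized in the next section.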

2. Operator Learning as Implicit Bayesian Inference

The probabilistic semantics of ICON are formalized in the framework of random differential equations (RDEs). In this setting, parameters $\alpha$, conditions $y$, and solutions $z$ are Hilbert or Banach space-valued random variables with a joint measure

$$\mathbb{P}_{\alpha, y, z} = \mathbb{P}_\alpha \otimes \mathbb{P}_y \otimes \mathbb{P}_{z \mid y, \alpha}.$$

ICON trains on $(y, z)$ pairs while the true operator parameter $\alpha$ remains latent, so the context implicitly encodes $\alpha$. The network objective minimizes the expected squared error over the dataset,

$$\min_\theta \frac{1}{M}\sum_{m=1}^M \left\| z^J_m - \mathcal{T}_\theta\bigl(y^J_m ; \{(y^j_m, z^j_m)\}_{j=1}^{J-1}\bigr) \right\|_Z^2,$$

which, by the projection theorem on Hilbert spaces, means ICON approximates the conditional expectation

$$\mathcal{T}^*(y^J ; \{(y^j, z^j)\}) = \mathbb{E}\bigl[z^J \mid y^J, \{(y^j, z^j)\}\bigr].$$

Consequently, ICON implicitly computes the mean of the posterior predictive distribution conditioned on the prompt:

$$\mathbb{P}_{z^J \mid y^J, \text{context}} = \int \mathbb{P}_{z^J \mid y^J, \alpha}\, \mathbb{P}_{\alpha \mid \text{context}}\, d\alpha.$$

This architecture is amortized and likelihood-free, never explicitly representing the operator posterior; predictions arise directly from joint examples.
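
For illustration, here is a minimal sketch of one pre-training step on the objective above, assuming a model with the interface of the `ICONSketch` example in Section 1 and batches in which each element is drawn from a different latent operator; the function name and tensor shapes are assumptions.

```python
import torch

def train_step(model, optimizer, demo_y, demo_z, query_y, query_z):
    """One gradient step on the empirical risk
    (1/M) sum_m || z^J_m - T_theta(y^J_m; {(y^j_m, z^j_m)}) ||_Z^2,
    where each batch element m is drawn from a different latent operator."""
    optimizer.zero_grad()
    pred_z = model(demo_y, demo_z, query_y)  # approximates E[z^J | y^J, context]
    loss = torch.mean(torch.sum((pred_z - query_z) ** 2, dim=-1))
    loss.backward()
    optimizer.step()
    return loss.item()
```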

3. Generative ICON (GenICON) and Uncertainty Quantification

ICON is extended to generative settings via GenICON, enabling sampling from the full posterior predictive distribution $\mathbb{P}_{z^J \mid y^J, \text{context}}$ (not solely its mean). GenICON introduces a conditional generative model

$$\mathcal{G}: H \times Y \times (Y \times Z)^{J-1} \to Z,$$

where $H$ is the noise source, and for any fixed $(y^J, \text{context})$, $\mathcal{G}(\cdot, y^J, \text{context})_{\#}\, \mathbb{P}_\eta = \mathbb{P}_{z^J \mid y^J, \text{context}}$. The existence of such a measurable mapping is established by construction using the formalism for random differential equations. In GenICON, the ensemble of samples naturally quantifies solution operator uncertainty, which is crucial for reliable scientific forecasting and inverse problems.

Furthermore, the conditional expectation over GenICON's generative outputs recovers the original ICON prediction:

$$\mathbb{E}_\eta\bigl[\mathcal{G}(\eta, y^J, \text{context})\bigr] = \mathcal{T}^*(y^J; \text{context}).$$
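
A hedged sketch of how a GenICON-style generator could be used for uncertainty quantification follows: draw noise, sample from the posterior predictive, and summarize the ensemble; `gen_model`, its call signature, the noise dimension, and the sample count are hypothetical. Averaging a large ensemble approximates the expectation identity above.

```python
import torch

def posterior_predictive_summary(gen_model, demo_y, demo_z, query_y,
                                 n_samples=256, noise_dim=32):
    """Draw samples z^J ~ P(z^J | y^J, context) from a GenICON-style generator
    G(eta, y^J, context) and summarize the ensemble: the sample mean recovers
    the deterministic ICON prediction, while the spread quantifies uncertainty
    about the latent operator."""
    batch = query_y.shape[0]
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            eta = torch.randn(batch, noise_dim)  # noise eta ~ P_eta on H
            samples.append(gen_model(eta, demo_y, demo_z, query_y))
    samples = torch.stack(samples)               # (n_samples, batch, n_grid)
    return samples.mean(dim=0), samples.std(dim=0)
```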

4. Practical Applications: Forward/Inverse Problems and Model Generality

ICON and GenICON are deployed across a spectrum of differential equation tasks:

  • Ordinary Differential Equations (ODEs): e.g., $u'(t, \omega) = a(t, u(t, \omega), \omega)$, $u(0, \omega) = u_0(\omega)$.
  • Boundary Value Problems (BVPs): e.g., $-0.1\, a(\omega)\, u''(x, \omega) + k(x, \omega)\, u(x, \omega) = c(x, \omega)$.
  • Partial Differential Equations (PDEs): Conservation laws, reaction–diffusion equations, and more.

The methodology is robust to both forward prediction (from initial/boundary data) and inverse problem settings (estimating model parameters or recovering hidden states). The probabilistic structure allows ICON to handle ill-posed inverse tasks and non-identifiability (where distinct parameter choices yield identical observations) via uncertainty-aware modeling.

Empirical studies report accurate predictions even for small contexts (demonstration sets) and with operator families not seen during training.
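
As an illustration of how such demonstration sets can be simulated, the following minimal sketch generates condition–solution pairs for a simple linear ODE with parameters hidden from the model; the specific right-hand side, grid, and sampling distributions are assumptions for illustration, not the benchmark setup reported in the paper.

```python
import numpy as np

def make_demo_pairs(n_pairs=5, n_grid=64, t_max=1.0, rng=None):
    """Simulate demonstration pairs for one latent operator: the linear ODE
    u'(t) = -k * u(t) + c, u(0) = u0, with (k, c) sampled once per operator
    and hidden from the model. Conditions are initial values u0 (broadcast
    onto the time grid); solutions are forward-Euler trajectories u(t)."""
    rng = rng if rng is not None else np.random.default_rng()
    k, c = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)  # latent parameters alpha
    t = np.linspace(0.0, t_max, n_grid)
    dt = t[1] - t[0]
    conditions, solutions = [], []
    for _ in range(n_pairs):
        u0 = rng.normal()                           # condition y^j
        u = np.empty(n_grid)
        u[0] = u0
        for i in range(n_grid - 1):
            u[i + 1] = u[i] + dt * (-k * u[i] + c)  # forward Euler step
        conditions.append(np.full(n_grid, u0))
        solutions.append(u)                         # solution z^j
    return np.stack(conditions), np.stack(solutions)
```

A full pre-training corpus would repeat this construction over many sampled parameter draws, i.e., many latent operators.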

5. Comparative Analysis with Classical Operator Learning

ICON diverges from classical supervised operator learning methods such as DeepONet or Fourier Neural Operators, which require explicit parameter inputs and fixed training pairs to approximate deterministic mappings. Instead, ICON’s in-context learning leverages the context to infer the latent operator or parameter implicitly, adapting model predictions dynamically without retraining. The probabilistic formulation reveals that while classical methods estimate conditional expectations given explicit parameters, ICON infers the posterior predictive mean conditioned solely on the observed demonstration pairs.

In the generative framework, GenICON offers a rigorous Bayesian treatment by modeling and quantifying the posterior predictive distribution, a property unavailable in classical deterministic operator learners. This feature is essential for scientific problems involving noisy or incomplete data.

6. Key Mathematical Formalism

The mathematical backbone of ICON involves the following constructs:

  • ICON Mapping:

$$\mathcal{T}_\theta(y^J ; \{(y^j, z^j)\}) \approx z^J$$

  • Conditional Expectation (Bayesian Predictive Mean):

$$\mathcal{T}^*(y^J ; \{(y^j, z^j)\}) = \mathbb{E}\bigl[z^J \mid y^J, \text{context}\bigr]$$

  • Posterior Predictive Distribution:

$$\mathbb{P}_{z^J \mid y^J, \text{context}} = \int \mathbb{P}_{z^J \mid y^J, \alpha}\, \mathbb{P}_{\alpha \mid \text{context}}\, d\alpha$$

  • GenICON Generative Sampling:

$$\mathcal{G}: H \times Y \times (Y \times Z)^{J-1} \to Z, \qquad \mathbb{E}_\eta\bigl[\mathcal{G}(\eta, y^J, \text{context})\bigr] = \mathcal{T}^*(y^J; \text{context})$$

7. Methodological Implications and Prospects

ICON provides a principled basis for operator learning in scientific machine learning, unifying empirical transformer architectures with Bayesian statistical prediction. Its ability to operate in a likelihood-free and amortized manner confers adaptability in settings with heterogeneous data and latent model parameters. GenICON’s generative capability extends this versatility by producing principled uncertainty estimates, facilitating robust scientific modeling and risk quantification.

A plausible implication is that ICON could be used to “fine-tune” scientific foundation models for new equations, operators, or physical regimes without retraining, merely by providing a handful of informative demonstrations. Further, the generative setting offers promising directions for probabilistic embeddings and posterior sampling in operator learning.

In summary, ICON exemplifies a shift in operator learning: from deterministic mapping to context-driven, probabilistic, and generative inference, thus establishing a rigorous framework for foundation model development in differential equation tasks (Zhang et al., 5 Sep 2025).
