
CMAda: Curriculum Adaptation in Fog Segmentation

Updated 15 April 2026
  • Curriculum Model Adaptation (CMAda) is a domain adaptation methodology that progressively trains segmentation models from synthetic light fog images to dense real fog, bridging the annotation gap.
  • It leverages supervised learning on synthetic data combined with pseudo-labeling of real images, guided by physics-based fog simulation to enhance model robustness.
  • Empirical results on Foggy Zurich demonstrate significant improvements in mIoU, validating CMAda’s effectiveness against severe visual and statistical domain shifts.

Curriculum Model Adaptation (CMAda) is a domain adaptation methodology designed for semantic segmentation in adverse conditions—primarily dense fog—where labeled real data are scarce, and synthetic data can be controllably generated. By structuring training as a curriculum that progresses from easier (light synthetic fog) to more challenging (dense real fog) domains with both synthetic and real images, CMAda enables robust model transfer between domains exhibiting strong visual and statistical shifts. The framework leverages a combination of supervised learning on labeled synthetic datasets, pseudo-labeling of real images, and physics-guided data synthesis to bridge the domain gap without relying on real fog annotations (Sakaridis et al., 2018, 1901.01415).

1. Problem Formulation and Motivation

CMAda addresses semantic segmentation under severe domain shift, as in the transition from clear to foggy weather. The principal challenge is the lack of annotated real foggy images, motivating the use of synthetic fog simulation and unlabeled real data. The framework defines a set of domains and datasets:

  • $x \in \mathcal{X}$: clear-weather images,
  • $x' \in \mathcal{X}'$: light-synthetic-fog images, synthesized from clear scenes with attenuation coefficient $\beta_1$,
  • $x'' \in \mathcal{X}''$: dense-synthetic-fog images, synthesized with $\beta_2 > \beta_1$,
  • $\bar{x}$: real foggy images with unknown true density,
  • $y \in \mathcal{Y}$: ground-truth semantic segmentation labels.

The data is structured as:

  • $\mathcal{D}'_l = \{(x'_i, y_i)\}_{i=1}^{l}$: labeled light-synthetic-fog images,
  • $\mathcal{D}''_l = \{(x''_i, y_i)\}_{i=1}^{l}$: labeled dense-synthetic-fog images,
  • $\mathcal{D}'_u = \{\bar{x}'_j\}_{j=1}^{u}$: unlabeled real images with light or moderate fog.
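The domain and dataset layout above can be made concrete as plain data containers; the `FogDomain` class, its field names, and the $\beta$ values below are illustrative assumptions for this sketch, not part of the CMAda codebase.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative container for one fog domain; names, fields, and the beta
# values below are assumptions for this sketch, not the paper's code.
@dataclass
class FogDomain:
    name: str
    beta: Optional[float]                  # attenuation coefficient; None if unknown (real fog)
    images: List[str] = field(default_factory=list)
    labels: List[Optional[str]] = field(default_factory=list)  # None = unlabeled

light_syn  = FogDomain("light-synthetic-fog", beta=0.005,
                       images=["x1.png"], labels=["y1.png"])
dense_syn  = FogDomain("dense-synthetic-fog", beta=0.010,
                       images=["x1.png"], labels=["y1.png"])
real_light = FogDomain("real-light-fog", beta=None,
                       images=["r1.png"], labels=[None])
```

The key structural point is that real fog carries neither a known density nor labels, which is exactly what the curriculum and the density estimator below compensate for.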

The objective is to learn a segmentation function $\phi: \mathcal{X} \to \mathcal{Y}$ that generalizes effectively to dense real fog, using both synthetic and real data without dense-real-fog annotation.

2. Curriculum Adaptation Schedule and Learning Objectives

CMAda implements a multi-step curriculum informed by the progressive difficulty of tasks. Training starts with the source model on clear-weather data, then adapts through intermediate domains according to fog density:

  • Step 1: Train on synthetic light-fog data,

$$\hat{\phi}' = \arg\min_{\phi'} \frac{1}{l} \sum_{i=1}^{l} \mathcal{L}\big(\phi'(x'_i),\, y_i\big),$$

where $\mathcal{L}$ is the pixel-wise cross-entropy loss.

  • Step 2: Use $\hat{\phi}'$ to pseudo-label a subset of real light-foggy images (selected via a fog density estimator).
  • Step 3: Fine-tune the model jointly on dense synthetic fog (with ground-truth labels) and the pseudo-labeled real images:

$$\hat{\phi}'' = \arg\min_{\phi''} \left[\frac{1}{l} \sum_{i=1}^{l} \mathcal{L}\big(\phi''(x''_i),\, y_i\big) + \frac{\lambda}{u} \sum_{j=1}^{u} \mathcal{L}\big(\phi''(\bar{x}'_j),\, \hat{\phi}'(\bar{x}'_j)\big)\right],$$

with $\lambda$ a hyperparameter balancing the synthetic and real loss terms, set empirically in practice.

This curriculum mitigates the domain gap by bootstrapping from easier (synthetic) to increasingly realistic conditions, culminating in robust performance under dense real fog (Sakaridis et al., 2018, 1901.01415).
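The three-step schedule can be sketched end to end as follows. The per-pixel nearest-class-mean classifier and the `train`/`predict` helpers are toy stand-ins for the segmentation network used in the paper, chosen only to keep the curriculum logic visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a segmentation model: per-pixel nearest-class-mean
# classifier over intensities. Purely illustrative, not the paper's model.
def train(images, labels, n_classes=2):
    means = np.zeros(n_classes)
    for c in range(n_classes):
        vals = [im[lb == c] for im, lb in zip(images, labels)]
        means[c] = np.concatenate(vals).mean()
    return means

def predict(model, image):
    # assign each pixel to the class with the closest mean intensity
    return np.abs(image[..., None] - model).argmin(axis=-1)

# Step 1: supervised training on labeled light-synthetic-fog images.
light_imgs = [rng.random((8, 8)) * 0.5, rng.random((8, 8)) * 0.5 + 0.5]
light_lbls = [(im > im.mean()).astype(int) for im in light_imgs]
phi_light = train(light_imgs, light_lbls)

# Step 2: pseudo-label unlabeled real light-fog images with the step-1 model.
real_imgs = [rng.random((8, 8))]
pseudo_lbls = [predict(phi_light, im) for im in real_imgs]

# Step 3: joint fine-tuning on dense-synthetic ground truth plus real
# pseudo-labels (here simply retraining on the union; the paper weights
# the two loss terms with lambda instead).
dense_imgs, dense_lbls = light_imgs, light_lbls   # stand-in for dense renders
phi_final = train(dense_imgs + real_imgs, dense_lbls + pseudo_lbls)
```

Note how the real images enter training only through predictions of the earlier-stage model, which is the self-training mechanism that makes the curriculum work without real-fog annotations.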

3. Synthetic Fog Generation and Data Processing

CMAda's fog simulation is based on Koschmieder's physical model:

$$I(x) = R(x)\, e^{-\beta d(x)} + L\big(1 - e^{-\beta d(x)}\big),$$

where $I(x)$ is the observed intensity, $R(x)$ the clear-scene radiance, $L$ the atmospheric light, $d(x)$ the scene depth, and $\beta$ the attenuation coefficient controlling fog density.
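Koschmieder's model is straightforward to apply per pixel given a depth map. The sketch below assumes radiance normalized to $[0, 1]$; the default $\beta$ and the depth values are illustrative, not prescribed by the paper.

```python
import numpy as np

def synthesize_fog(R, d, L=1.0, beta=0.01):
    """Apply Koschmieder's model: I = R*t + L*(1 - t), with t = exp(-beta*d).

    R: clear-scene radiance in [0, 1]; d: per-pixel depth in metres;
    L: atmospheric light; beta: attenuation coefficient (illustrative default).
    """
    t = np.exp(-beta * d)          # transmittance decays with depth
    return R * t + L * (1.0 - t)

R = np.full((4, 4), 0.2)                       # dark clear scene
d_near = np.full((4, 4), 10.0)                 # close surfaces
d_far = np.full((4, 4), 500.0)                 # distant surfaces
I_near = synthesize_fog(R, d_near)             # barely veiled
I_far = synthesize_fog(R, d_far)               # washed out toward L
```

Distant pixels converge to the atmospheric light $L$, which is exactly why dense fog destroys far-field contrast and why depth quality matters for realistic simulation.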

Depth completion and transmittance refinement employ a two-reference cross-bilateral filter, leveraging both semantic labels $h$ and color $J$ in the CIELAB space:

$$\hat{t}(p) = \frac{\sum_{q \in \mathcal{N}(p)} G_{\sigma_s}(\|q - p\|)\,\big[\mu\,\delta\big(h(q), h(p)\big) + G_{\sigma_c}(\|J(q) - J(p)\|)\big]\, t(q)}{\sum_{q \in \mathcal{N}(p)} G_{\sigma_s}(\|q - p\|)\,\big[\mu\,\delta\big(h(q), h(p)\big) + G_{\sigma_c}(\|J(q) - J(p)\|)\big]},$$

with $\delta$ the Kronecker delta, $G_{\sigma_s}$ a spatial Gaussian kernel, $G_{\sigma_c}$ a Gaussian kernel on color differences, and $\mu$ weighting the semantic reference against the color one. This filter sharpens transition regions and preserves semantic boundaries, producing synthetic fog that more closely resembles real-world conditions.
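A direct, unoptimized sketch of this dual-reference filter on a single-channel "color" image is below; the parameter values and window radius are illustrative, not the paper's.

```python
import numpy as np

def dual_reference_cbf(t, labels, color, sigma_s=2.0, sigma_c=0.1,
                       mu=5.0, radius=3):
    """Two-reference cross-bilateral filter sketch (illustrative parameters).

    Each output pixel is a weighted mean of transmittance t over a window:
    spatial Gaussian weights times a range term that adds mu when semantic
    labels match (Kronecker delta) plus a Gaussian on color difference.
    """
    H, W = t.shape
    out = np.zeros_like(t)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            ii, jj = np.mgrid[i0:i1, j0:j1]
            spatial = np.exp(-((ii - i) ** 2 + (jj - j) ** 2)
                             / (2 * sigma_s ** 2))
            same_label = (labels[i0:i1, j0:j1] == labels[i, j]).astype(float)
            color_w = np.exp(-((color[i0:i1, j0:j1] - color[i, j]) ** 2)
                             / (2 * sigma_c ** 2))
            w = spatial * (mu * same_label + color_w)
            out[i, j] = (w * t[i0:i1, j0:j1]).sum() / w.sum()
    return out

# Two regions with a sharp transmittance step aligned to a label boundary:
t = np.where(np.arange(6) < 3, 0.2, 0.8) * np.ones((6, 6))
labels = (np.arange(6) >= 3).astype(int) * np.ones((6, 6), dtype=int)
color = t.copy()
out = dual_reference_cbf(t, labels, color)
```

Because pixels across the label boundary contribute almost no weight, the step in transmittance survives the smoothing, which is the behavior described above for semantic boundaries.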

Pseudo-label generation for real images and curriculum stage selection are both conditional on fog density estimation using a CNN regressor (AlexNet-based) trained on synthetic images with known $\beta$, minimizing the squared error

$$\min_{\theta} \sum_{i} \big(\hat{\beta}_\theta(x_i) - \beta_i\big)^2,$$

over the regressor parameters $\theta$.

The estimator's density ranking agrees with human perception in the large majority of paired comparisons, ensuring reliable selection of domain-relevant real images for pseudo-labeling (Sakaridis et al., 2018).
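The squared-error objective can be illustrated with a minimal stand-in: regressing $\beta$ from a single hand-crafted feature (mean image intensity under Koschmieder's model) by linear least squares. The real estimator is a CNN; the feature, depth value, and noise level here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: images with known beta. Mean intensity rises
# with beta under Koschmieder's model (dark scene R=0.2, L=1, depth 150 m).
betas = rng.uniform(0.002, 0.02, size=200)
trans = np.exp(-betas * 150.0)
features = 0.2 * trans + (1.0 - trans)          # mean observed intensity
features += rng.normal(0, 0.005, size=200)      # simulated sensor noise

# Least squares fit minimizing sum_i (A w - beta_i)^2, the same squared-error
# objective the CNN regressor optimizes over its parameters.
A = np.stack([features, np.ones_like(features)], axis=1)
w, *_ = np.linalg.lstsq(A, betas, rcond=None)
pred = A @ w
```

Even this one-feature regressor ranks fog density consistently, which is the property CMAda actually relies on when selecting real images for a curriculum stage.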

4. Experimental Validation and Quantitative Impact

Empirical results demonstrate that CMAda substantially improves segmentation in dense fog compared to baselines trained solely on clear or synthetic data. For the Foggy Zurich test set (real, densely fogged), metrics are:

| Method | mIoU (all classes) |
| --- | --- |
| Baseline clear-weather model | 32.0% |
| Fine-tuning on dense synthetic fog | 33.2% |
| CMAda step 1 (light synthetic only) | 33.9% |
| CMAda full (synthetic + weakly supervised real) | 37.9% |

The strongest performance gains arise when the model is trained with both synthetic dense fog and real light-fog images with pseudo-labels. Consistent improvements are observed across both the Foggy Zurich and Foggy Driving-dense datasets, and ablations show the importance of curriculum pacing and mixed-supervision objectives (Sakaridis et al., 2018).

5. Extensions, Generalizations, and Comparative Frameworks

CMAda generalizes to other adverse-conditions adaptation tasks (e.g., day-to-night) by replacing fog density with an appropriate difficulty measure (e.g., illumination, time of day), as in the guided curriculum adaptation for semantic nighttime segmentation (Sakaridis et al., 2019). The core principle—a sequence of intermediate domains with pseudo-label-based self-training—remains constant, though pseudo-label refinement and pacing functions may vary.

Recent research in continuous domain adaptation applies and extends the curriculum principle by building domain transfer paths via Wasserstein distances, enforcing multi-path consistency constraints, and utilizing optimal transport for domain mapping. These contemporary frameworks provide theoretical guarantees for curriculum ordering and demonstrate further performance gains on both classification and regression tasks, generalizing CMAda's staged-adaptation approach to broader transfer settings (Liu et al., 2024).
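For one-dimensional statistics, the Wasserstein distances used to order intermediate domains reduce to a simple identity: the 1-Wasserstein distance between two equal-size empirical samples is the mean absolute difference of their sorted values. The Gaussian feature statistics below are illustrative, not from any cited experiment.

```python
import numpy as np

def w1_empirical(a, b):
    """1-Wasserstein distance between equal-size empirical 1-D samples:
    mean absolute difference of the sorted samples (standard identity)."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, 1000)    # e.g. clear-weather feature statistics
mid    = rng.normal(1.0, 1.0, 1000)    # intermediate (light-fog-like) domain
target = rng.normal(2.5, 1.0, 1000)    # target (dense-fog-like) domain

# The intermediate domain sits between source and target along W1,
# justifying its position in a staged transfer path.
d_sm = w1_empirical(source, mid)
d_st = w1_empirical(source, target)
```

Ordering candidate domains by such distances recovers a curriculum schedule without hand-picked difficulty labels, which is the sense in which these frameworks generalize CMAda's fog-density axis.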

6. Significance and Limitations

CMAda offers a principled solution to domain adaptation where annotation in the target domain is prohibitive, especially in adverse visual conditions. Its combination of supervised learning on synthetic data and self-training on real data guards the adaptation process against escalating pseudo-label noise. The methodology surpasses purely synthetic transfer and direct fine-tuning approaches for segmentation in real dense fog.

A plausible implication is that further gains may require improved synthetic–real domain bridging, domain-aware model architectures, or partial annotation in the target domain. The optimal curriculum schedule and loss balance hyperparameters are empirically tuned and may benefit from automated selection. Although demonstrated for fog and night, the approach may extend to other structured domain shift problems, contingent on the existence of a meaningful domain-difficulty axis and reliable difficulty estimation.

7. Key Datasets and Resources

CMAda is supported by the Foggy Zurich dataset: 3,808 real foggy images (unlabeled) and a subset with fine pixel-level annotations under dense fog, specifically designed for semantic scene parsing in adverse conditions. Additional datasets and codebases for fog simulation, density estimation, and curriculum model adaptation are public, facilitating reproducibility and extension in both academic and applied research contexts (Sakaridis et al., 2018, 1901.01415).
