
Fair Supervised Contrastive Loss (FSCL)

Updated 25 February 2026
  • FSCL is a fairness-aware loss formulation that adjusts sampling, metric structure, and regularization to prevent encoding sensitive attributes.
  • It integrates margin-based constraints, group-wise normalization, and distribution matching to balance discrimination accuracy with fairness objectives.
  • Empirical results in vision and graph domains show that FSCL reduces bias metrics while maintaining high classification performance across diverse datasets.

Fair Supervised Contrastive Loss (FSCL) is a family of loss formulations and training paradigms designed to address fairness concerns in supervised contrastive learning. FSCL explicitly modifies the sampling, metric structure, and/or regularization in supervised contrastive loss to prevent models from encoding or amplifying sensitive-attribute information and data biases in the learned representations. These losses have been instantiated across computer vision and graph learning domains, with several distinct but related formalizations. FSCL both controls for statistical fairness criteria (e.g., statistical parity, equal opportunity, equalized odds) and maintains discriminative accuracy across groups and classes.

1. Theoretical Underpinnings and Rationale

Standard supervised contrastive learning (SupCon, InfoNCE) pulls together representations of samples that share a target class, while pushing apart those from different classes. In the presence of data biases—spurious correlations between sensitive attributes (e.g., gender, age, color) and the target label—SupCon can minimize its objective by encoding sensitive-attribute information, inadvertently causing unfairness. If the dataset is demographically imbalanced, majority groups attain more compact and better-separated clusters than minority groups, further exacerbating group-wise disparities (Park et al., 2022).

FSCL addresses these pathologies by explicitly constructing similarity comparisons and/or regularization such that encoding sensitive-attribute information does not reduce the loss, and by calibrating contributions across demographic groups to avoid unfair over-weighting. Some variants introduce additional margin-based or distribution-matching constraints to robustify against spurious bias signals (Barbano et al., 2022).

2. Formal Definitions and Loss Formulations

Multiple formalizations of FSCL exist across tasks. Three representative formulations are:

| Paper / Domain | Core FSCL formulation | Key mechanism |
|---|---|---|
| Image (vision) (Park et al., 2022) | Partition positives/negatives by (target, sensitive) label; restrict negatives to same-sensitive samples | Penalizes encoding of sensitive attributes; group-wise normalization for fairness |
| Margin-based (Barbano et al., 2022) | $\epsilon$-SupInfoNCE with explicit margin; FairKL distance-distribution regularizer | Enforces a positive/negative margin; matches bias-aligned/bias-conflicting distributions |
| Graph neural networks (Kejani et al., 2024) | Contrastive loss on the "content" subspace, pulling same-label nodes together irrespective of sensitive group | Drives all nodes with the same label to cluster, erasing the sensitive attribute from representations |

In vision tasks (Park et al., 2022), for an anchor embedding $z_i$ with target label $y_i$ and sensitive attribute $s_i$:

$$L_{\mathrm{FSCL}} = -\sum_{i=1}^{2N} \frac{1}{|Z_p(i)|} \sum_{p \in Z_p(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{n \in Z_{tg}(i)} \exp(z_i \cdot z_n / \tau)}$$

where the positives $Z_p(i)$ include all samples sharing the anchor's target label (any sensitive attribute), and the negatives $Z_{tg}(i)$ are samples with a different target label but the same sensitive attribute.
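The positive/negative partitioning above can be sketched in NumPy. This is an illustrative reimplementation, not the authors' code: it follows the displayed equation literally (the denominator sums only over same-sensitive, different-target negatives), and the function names and batch layout are assumptions.

```python
import numpy as np

def fscl_loss(z, y, s, tau=0.1):
    """Sketch of the vision FSCL loss for one batch.

    z: (N, d) embeddings (L2-normalized inside); y, s: (N,) integer
    target and sensitive labels. Positives share the anchor's target
    label; denominator negatives have a different target but the SAME
    sensitive attribute, so encoding s cannot lower the loss.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n_samples = len(y)
    total, count = 0.0, 0
    for i in range(n_samples):
        pos = [p for p in range(n_samples) if p != i and y[p] == y[i]]
        neg = [m for m in range(n_samples) if y[m] != y[i] and s[m] == s[i]]
        if not pos or not neg:
            continue  # anchor has no valid comparison set in this batch
        denom = np.exp(sim[i, neg]).sum()
        for p in pos:
            total += -np.log(np.exp(sim[i, p]) / denom)
            count += 1
    return total / max(count, 1)
```

As a sanity check, a batch whose same-class embeddings are tightly clustered (and cross-class embeddings orthogonal) should score much lower than a random batch.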

For margin-based FSCL (Barbano et al., 2022), for an anchor $x$, positives $x_i^+$, and negatives $x_j^-$:

$$\mathcal{L}_{\epsilon\text{-SupInfoNCE}} = -\sum_{i=1}^{P} \log \frac{\exp(s_i^+)}{\exp(s_i^+ - \epsilon) + \sum_{j=1}^{N} \exp(s_j^-)}$$

where $s_i^+ = f(x)^\top f(x_i^+)$ and $s_j^- = f(x)^\top f(x_j^-)$.
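The margin term can be written compactly once the anchor–positive and anchor–negative similarities are precomputed; the following is a minimal sketch of the displayed equation, with array shapes assumed for illustration.

```python
import numpy as np

def eps_supinfonce(s_pos, s_neg, eps=0.25):
    """Sketch of epsilon-SupInfoNCE for a single anchor.

    s_pos: (P,) similarities to positives; s_neg: (M,) similarities to
    negatives; eps shifts the positive term in the denominator, imposing
    an explicit margin between positives and negatives.
    """
    neg_sum = np.exp(s_neg).sum()
    return float(-np.sum(np.log(np.exp(s_pos) / (np.exp(s_pos - eps) + neg_sum))))
```

With `eps = 0` the loss is strictly positive (the ratio inside the log is below 1); increasing `eps` shrinks the denominator term, which is how the margin reshapes the optimization target.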

A FairKL regularizer matches the empirical distributions (via KL divergence) of bias-aligned and bias-conflicting pairs, further neutralizing group disparities in distance.
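One way to realize the FairKL idea is the closed-form KL divergence between Gaussian approximations of the two pairwise-similarity distributions. This is a sketch of the concept under that Gaussian assumption, not necessarily the paper's exact regularizer.

```python
import numpy as np

def fairkl(sim_aligned, sim_conflicting, eps=1e-8):
    """Sketch of a FairKL-style penalty.

    sim_aligned / sim_conflicting: 1-D arrays of pairwise similarities
    for bias-aligned and bias-conflicting pairs. Fits a univariate
    Gaussian to each and returns KL(aligned || conflicting); the penalty
    vanishes when the two distributions match, i.e. when the bias
    attribute carries no distance information.
    """
    mu_a, var_a = sim_aligned.mean(), sim_aligned.var() + eps
    mu_c, var_c = sim_conflicting.mean(), sim_conflicting.var() + eps
    return 0.5 * (np.log(var_c / var_a) + (var_a + (mu_a - mu_c) ** 2) / var_c - 1.0)
```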

In graph settings (Kejani et al., 2024), the FSCL formulation only uses the classification-related projection ("content" subspace) and groups by current (pseudo-)labels only, thus actively discarding sensitive-attribute signals from this space.

3. Training Procedure and Hyperparameterization

The FSCL training workflow generally comprises:

  1. Batch construction: For each anchor, construct positives (by target and/or sensitive labels) and precisely filtered negatives (e.g., same-sensitive, different-target).
  2. Contrastive computation: Evaluate the FSCL loss as detailed above, possibly alongside margin or distribution matching terms.
  3. FairKL regularization (optional): Compute KL-divergence between bias-aligned/bias-conflicting distances for both positive and negative pairs; add as an explicit penalty (Barbano et al., 2022).
  4. Group-wise normalization (FSCL+): Normalize contributions to the loss by group cardinalities over $(\text{class}, \text{sensitive})$ tuples, balancing intra-group compactness across demographics (Park et al., 2022).
  5. Joint objective: Sum FSCL with standard task losses (e.g., cross-entropy for classification) and other regularizers (e.g., invariance, environmental loss in GNNs (Kejani et al., 2024)).
  6. Optimization: Hyperparameters such as the margin $\epsilon$ (vision: $[0.1, 0.5]$), the weights for contrastive/fairness terms ($\alpha$, $\lambda$), and the temperature $\tau$ (vision: $0.1$; GNN: $0.07$) are tuned by grid search or validated on held-out sets.
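Steps 5 and 6 amount to a weighted sum of a task loss and the fairness terms. The sketch below shows that combination with a stable softmax cross-entropy; the weight names `alpha` and `lam` mirror the hyperparameters above, and everything else is illustrative.

```python
import numpy as np

def cross_entropy(logits, y):
    """Numerically stable mean softmax cross-entropy over a batch."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(y)), y].mean())

def joint_objective(ce, contrastive, fair_reg, alpha=1.0, lam=0.1):
    """Step 5: task loss + weighted contrastive term + fairness penalty.
    alpha and lam are the weights tuned by grid search in step 6."""
    return ce + alpha * contrastive + lam * fair_reg
```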

Variants also support incomplete supervision, using pseudo-labels or partial label strategies for target/sensitive assignments (Park et al., 2022, Kejani et al., 2024).

4. Empirical Evaluation and Benchmarks

Empirical validation covers image classification, facial attribute learning, and tabular/graph data for node classification. Key datasets and evaluation criteria include:

  • Vision (Park et al., 2022, Barbano et al., 2022): CelebA, UTK Face, CIFAR-10/100, ImageNet-100, Biased-MNIST, bFFHQ.
  • Fairness metrics: Equalized odds (EO), statistical parity ($\Delta_{SP}$), and equal opportunity ($\Delta_{EO}$), in addition to standard accuracy/AUC.
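These group-fairness gaps have standard definitions and are straightforward to compute; the sketch below assumes binary predictions and a binary sensitive attribute.

```python
import numpy as np

def statistical_parity_gap(y_pred, s):
    """Delta_SP: |P(yhat=1 | s=0) - P(yhat=1 | s=1)|."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())

def equal_opportunity_gap(y_pred, y_true, s):
    """Delta_EO: gap in true-positive rate between sensitive groups."""
    tpr0 = y_pred[(s == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(s == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)
```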

Salient results:

  • On CelebA, standard SupCon achieves roughly 80% accuracy but an equalized-odds gap of about 30%; FSCL reduces EO to about 11% with only a 1.4% drop in accuracy, and FSCL+ cuts EO to about 6% (Park et al., 2022).
  • On Biased-MNIST (color bias), FSCL achieves roughly 90.5% accuracy versus 11–60% for cross-entropy and LfF baselines (Barbano et al., 2022).
  • On tabular GNN benchmarks (German Credit, Bail, Credit Defaulter), FSCL in SCCAF yields both highest AUC/F1 and lowest statistical parity/equal opportunity disparities versus CAF and other fair GNN methods (Kejani et al., 2024).

In all settings, FSCL improves the accuracy/fairness trade-off over prior state-of-the-art methods such as GRL, LNL, FD-VAE, FairGNN, and EDITS.

5. Mechanisms for Fairness and Debiasing

FSCL mechanisms are united in their goal to prevent sensitive-attribute information from reducing the loss or contributing to downstream classification. The principal mechanisms are:

  • Negative sample restriction: Constrains negatives so that they do not incentivize encoding sensitive-attribute signals.
  • Margin enforcement: Requires a minimal positive-negative separation robust to spurious cues (Barbano et al., 2022).
  • Distribution matching (FairKL): Normalizes and aligns distance distributions between bias-aligned and bias-conflicting samples, rendering them indistinguishable in representation space (Barbano et al., 2022).
  • Group-wise normalization: Ensures each $(\text{class}, \text{sensitive})$ group is represented equally, controlling for demographic imbalance (Park et al., 2022).
  • Content–environment disentanglement: In GNNs, latent space is split to localize sensitive-attribute signals separately from task-relevant representations, enforced via environmental losses (Kejani et al., 2024).
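Group-wise normalization can be sketched as reweighting each sample inversely to the size of its $(\text{class}, \text{sensitive})$ cell; this illustrates the balancing idea rather than the exact FSCL+ normalization.

```python
import numpy as np

def groupwise_weights(y, s):
    """Per-sample weights inversely proportional to (class, sensitive)
    group size, rescaled to mean 1, so minority cells contribute as
    much to the loss as majority cells."""
    counts = {}
    for yi, si in zip(y, s):
        counts[(yi, si)] = counts.get((yi, si), 0) + 1
    w = np.array([1.0 / counts[(yi, si)] for yi, si in zip(y, s)])
    return w * len(w) / w.sum()
```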

6. Practical Implementation and Limitations

Published FSCL implementations use standard neural encoders (e.g., ResNet-18 for vision; GCN or GraphSAGE for GNNs), and off-the-shelf optimizers (SGD, Adam). Loss and batch computation can be modularized to support new groupings and group-wise normalization. FSCL is robust to partial or noisy supervision, with explicit strategies for pseudo-labeling (Park et al., 2022, Kejani et al., 2024).

Limitations include the need for sensitive-attribute annotations or accurate pseudo-labels, sensitivity to margin and weighting choices, and computational costs for large-batch or group-normalized operations. The method assumes explicit or inferable group structure.

7. Relation to Broader Research and Future Directions

FSCL generalizes classical contrastive fairness approaches by directly operationalizing the removal of sensitive-attribute information from learned representations while maintaining class discrimination. It positions itself relative to adversarial fairness learning (e.g., GRL), statistical-matching/penalty methods (e.g., FD-VAE), and recent counterfactual or invariance-driven graph learning approaches (Kejani et al., 2024).

Future research directions include calibration for multi-class/multi-attribute sensitive variables, extension to self-supervised (unlabeled) or few-shot settings, and exploration of more theoretically grounded group fairness metrics. A plausible implication is that FSCL-style approaches could serve as a foundation for fairness certification in large-scale embedding models, as well as adapting to evolving notions of group/individual fairness over diverse modalities.
