
Unlearn to Explain Sonar Framework (UESF)

Updated 9 December 2025
  • The paper demonstrates that integrating targeted contrastive unlearning with an adapted LIME pipeline yields a 35% reduction in seafloor bias while maintaining 99% classification accuracy.
  • UESF employs a dual-model approach, comparing baseline and unlearned classifiers to generate fine-grained, pixel-level difference maps that indicate effective bias suppression.
  • The framework utilizes superpixel decomposition and a modified triplet loss to achieve transparent, interpretable attribution maps that focus on object features rather than background artifacts.

The Unlearn to Explain Sonar Framework (UESF) is a post hoc explainability framework designed to assess, quantify, and visualize the extent to which background (seafloor) bias is removed from sonar image classifiers following a targeted machine unlearning process. UESF sits atop a conventional sonar image classifier and its retrained, “unlearned” counterpart, enabling fine-grained, pixel-level analyses of which environmental features the model no longer relies on after mitigation of seafloor-induced confounding. By coupling targeted contrastive unlearning (TCU) with an adapted Local Interpretable Model-Agnostic Explanations (LIME) pipeline, UESF directly measures and visualizes the reduction in background reliance while promoting focused and interpretable attribution maps, thereby addressing generalization and transparency challenges in sonar image classification (S et al., 1 Dec 2025).

1. Conceptual Formulation and Objectives

UESF is designed to fulfill two key objectives within the bias-unlearning paradigm for sonar object detection:

  • Quantification of background-forgetting: UESF computes a per-pixel “explanation difference” by juxtaposing attribution maps from the baseline classifier and the unlearned model, directly measuring the suppression of seafloor cues in the decision process.
  • Enhancement of attribution faithfulness and localization: Through strategic adaptation of the LIME framework, UESF localizes saliency onto object regions rather than background artifacts, yielding more semantically meaningful and reliable attributions.

Central to UESF's workflow is its tight integration with the TCU module. The latter re-trains the model backbone using a modification of the triplet loss, explicitly treating seafloor images as negatives. This pushes the object embedding space away from background-induced bias, generating an “unlearned” classifier. UESF then applies matched LIME-based explainers to both the baseline and unlearned models, systematically isolating what has been forgotten.

2. Architectural Pipeline

UESF operates through a defined pipeline that processes each sonar image $x \in \mathbb{R}^{224 \times 224}$ as follows:

| Step | Operation | Output |
|------|-----------|--------|
| 1 | Input sonar image $x$ | Image |
| 2 | Inference via baseline model $f$ | $f(x)$, feature maps |
| 3 | Inference via unlearned model $f_u$ | $f_u(x)$ |
| 4 | LIME-based saliency map extraction | $A_f(x)$, $A_{f_u}(x)$ |
| 5 | Background-feature selection and thresholding | $M_f$, $M_{f_u}$ |
| 6 | Difference mask calculation | $E_{\text{final}}$ |
| 7 | Visualization and quantitative reporting | Heatmaps, metrics |

Key architectural modules include:

  • Feature Selector (Superpixel Generator): Decomposes $x$ into $S$ superpixels, serving as atomic units for LIME perturbations.
  • Surrogate Explainer (LIME): Fits a sparse linear model $g$ for each classifier by generating binary presence indicators over superpixels and regressing model outputs locally.
  • Attribution Aggregator: Projects surrogate coefficients back onto the pixel space and thresholds attributions at a level $\tau$ to generate binary background masks.
  • Difference Calculator: Computes $E_{\text{final}}(x) = \mathrm{clip}\left(M_f(x) - M_{f_u}(x), 0, 1\right)$, highlighting pixels important to the baseline model but not the unlearned one (a minimal sketch of these two stages follows below).
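
As a minimal sketch, the aggregation and difference stages might look as follows, assuming NumPy attribution maps $A_f(x)$ and $A_{f_u}(x)$ and a hypothetical threshold level `tau`; the helper names and the min-max normalization are illustrative, not the paper's API:

```python
import numpy as np

def attribution_to_mask(A_m: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Threshold a pixel-level attribution map A_m(x) into a binary mask M_m.

    Attributions are min-max normalized first (an assumed convention).
    """
    a = (A_m - A_m.min()) / (A_m.max() - A_m.min() + 1e-8)
    return (a >= tau).astype(np.float32)

def explanation_difference(A_f: np.ndarray, A_fu: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """E_final = clip(M_f - M_fu, 0, 1): pixels the baseline model relied on
    that the unlearned model no longer does."""
    M_f = attribution_to_mask(A_f, tau)
    M_fu = attribution_to_mask(A_fu, tau)
    return np.clip(M_f - M_fu, 0.0, 1.0)
```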

3. Methodological Details

3.1 LIME Adaptation for Comparative Attribution

UESF applies LIME to both $f$ and $f_u$ for a direct, image-localized explanation of unlearning effects. The adapted process involves:

  • Decomposing each input into a shared superpixel map for both models.
  • Generating $N = 1{,}000$ perturbed samples $\{z_i\}$ by randomly masking superpixels.
  • Weighting perturbed samples using an exponential kernel $\pi_x(z) = \exp(-\mathrm{Dist}(x, z)^2 / \sigma^2)$ with $\sigma$ set to $0.25$.
  • Minimizing LIME’s objective:

$$L_{\mathrm{LIME}}(f, g, x) = \sum_{i=1}^{N} \pi_x(z_i)\,\bigl(f(z_i) - g(z_i)\bigr)^2 + \Omega(g)$$

where $\Omega(g)$ is an $L_1$ penalty.

  • Ensuring superpixel correspondence and consistent perturbations across the two models; a condensed sketch of this fitting procedure follows.
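
A condensed sketch of the surrogate fit for a single model, assuming a scalar-valued `predict` callable (e.g., the class probability under $f$ or $f_u$) and a precomputed integer superpixel map; the indicator-space distance and the Lasso penalty weight are illustrative stand-ins for $\mathrm{Dist}$ and $\Omega(g)$:

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_lime_surrogate(x, segments, predict, n_samples=1000, sigma=0.25, l1_alpha=0.01):
    """Fit the sparse linear surrogate g over superpixel presence indicators."""
    rng = np.random.default_rng(0)
    n_sp = int(segments.max()) + 1
    Z = rng.integers(0, 2, size=(n_samples, n_sp))       # binary indicators z_i
    preds, weights = [], []
    for z in Z:
        x_pert = x.copy()
        x_pert[z[segments] == 0] = 0.0                   # mask out absent superpixels
        preds.append(predict(x_pert))
        dist = 1.0 - z.mean()                            # fraction removed (assumed Dist)
        weights.append(np.exp(-dist ** 2 / sigma ** 2))  # locality kernel pi_x(z)
    g = Lasso(alpha=l1_alpha)                            # L1 penalty in place of Omega(g)
    g.fit(Z, np.asarray(preds), sample_weight=np.asarray(weights))
    return g.coef_                                       # one weight per superpixel
```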

3.2 Targeted Contrastive Unlearning Loss

TCU leverages a triplet loss:

$$\mathcal{L}_{\mathrm{triplet}} = \sum_{i=1}^{N} \max\left\{0,\; \|f(x_i^a) - f(x_i^p)\|_2^2 - \|f(x_i^a) - f(x_i^n)\|_2^2 + \alpha\right\}$$

where $(x_i^a, x_i^p)$ denote anchor and positive samples from the same object class and $x_i^n$ is always a seafloor (background) sample, enforcing separation from the background in the embedding space.
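
In PyTorch terms, this objective could be sketched as below, where `f` is the embedding backbone and the margin value `alpha=0.2` is an assumption (the text leaves $\alpha$ unspecified):

```python
import torch
import torch.nn.functional as F

def tcu_triplet_loss(f, anchors, positives, seafloor_negatives, alpha=0.2):
    """Triplet loss in which the negative is always a seafloor sample,
    pushing object embeddings away from background-induced structure."""
    e_a, e_p, e_n = f(anchors), f(positives), f(seafloor_negatives)
    d_ap = (e_a - e_p).pow(2).sum(dim=1)  # squared distance anchor <-> positive
    d_an = (e_a - e_n).pow(2).sum(dim=1)  # squared distance anchor <-> seafloor negative
    return F.relu(d_ap - d_an + alpha).sum()
```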

3.3 End-to-End Algorithm

For each input, the process can be summarized as:

  1. Decompose input $x$ into superpixels.
  2. For $m \in \{f, f_u\}$: generate perturbed samples, compute predictions, assign locality weights, fit the surrogate $g_m$, and recover its coefficients.
  3. Project coefficients onto the pixel grid: $A_m(x)$.
  4. Threshold to obtain $M_f$ and $M_{f_u}$.
  5. Compute the difference mask $E_{\text{final}} = \mathrm{clip}(M_f - M_{f_u}, 0, 1)$, as orchestrated in the sketch below.
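
Tying the steps together, a hedged driver using scikit-image's SLIC for the superpixel decomposition and reusing the illustrative helpers sketched above (all names are assumptions, not the paper's implementation):

```python
import numpy as np
from skimage.segmentation import slic

def uesf_explain(x, f_predict, fu_predict, n_segments=50, tau=0.5):
    """Run the UESF pipeline for one grayscale sonar image x (224 x 224)."""
    # Step 1: a single shared superpixel map keeps both explanations comparable.
    segments = slic(x, n_segments=n_segments, channel_axis=None, start_label=0)
    masks = {}
    for name, predict in (("f", f_predict), ("f_u", fu_predict)):
        coefs = fit_lime_surrogate(x, segments, predict)  # step 2: perturb, weight, fit g_m
        A_m = coefs[segments]                             # step 3: project coefficients to pixels
        masks[name] = attribution_to_mask(A_m, tau)       # step 4: threshold to M_f / M_fu
    # Step 5: highlight pixels important to f but not to f_u.
    return np.clip(masks["f"] - masks["f_u"], 0.0, 1.0)
```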

4. Evaluation Metrics and Empirical Validation

  • Classification metrics: Accuracy, precision, recall, and F1-score confirm no accuracy loss post-unlearning (both baseline and TCU-unlearned EfficientNet-B0 achieve $0.99$ accuracy).
  • t-SNE embeddings: Visual clusters reveal clear disentanglement of seafloor features from object groups after unlearning, indicating successful bias reduction.
  • Explanation-difference score: $E_{\text{final}}$, quantified as the proportion of seafloor-labeled pixels in the difference mask, serves as a metric of bias mitigation: a $35\%$ reduction in background attribution is observed on average.
  • Visualization: Side-by-side heatmaps and difference maps (e.g., Figure 1) illustrate the shift from seafloor-focused to object-centric attribution, with forgotten background features (e.g., seabed textures, shadows) highlighted.

A plausible implication is that $E_{\text{final}}$ captures model de-biasing effects with fine granularity, although additional metrics (deletion/insertion curves, sparsity indices) could further validate attribution faithfulness.
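
As one hedged instantiation of that score, the proportion of seafloor-labeled pixels within the difference mask can be computed as follows, assuming the seafloor mask comes from ground-truth annotations (function name is hypothetical):

```python
import numpy as np

def seafloor_proportion(E_final: np.ndarray, seafloor_mask: np.ndarray) -> float:
    """Fraction of 'forgotten' pixels in E_final that carry a seafloor label."""
    forgotten = E_final > 0
    if not forgotten.any():
        return 0.0
    return float((forgotten & seafloor_mask.astype(bool)).sum() / forgotten.sum())
```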

5. Qualitative Analyses and Interpretability

Visual comparisons demonstrate that, post-unlearning, LIME heatmaps transition from emphasizing seafloor features (ripples, textures, shadow artifacts) to highlighting object characteristics. The difference map pinpoints superpixels predominantly associated with background bias that have ceased to influence the unlearned model's outputs. This affords direct insight into the granularity of model forgetting and substantiates the claim that unlearning is occurring in intended semantic regions.

6. Advantages, Limitations, and Prospects

Advantages:

  • Enables direct, pixel-level interpretability of what has been unlearned, moving beyond black-box re-training assessments.
  • Model-agnostic design: any pair of pre- and post-unlearning classifiers can be analyzed identically within the LIME-based pipeline.
  • Bias mitigation is achieved without loss of detection performance.

Limitations and Recommendations:

  • Reliance on LIME’s perturbation sampling may limit attribution stability; integrating alternative explainability methods (e.g., SHAP, Integrated Gradients) could offer finer or more robust attributions.
  • Key evaluation metrics, such as deletion/insertion curves and attribution sparsity, are not included in this initial study; their adoption is encouraged for comprehensive faithfulness assessment.
  • The number of superpixels $S$, the kernel width $\sigma$, and the threshold $\tau$ require re-tuning when porting to new sonar datasets or transferring to other modalities (e.g., medical ultrasound).

This suggests that while UESF currently delivers transparent model unlearning in sonar image analysis, future studies should extend and systematize its attribution and validation toolkit for broader deployment.

UESF addresses a critical challenge in sonar image classification—over-reliance on seafloor context compromising model generalization. By integrating targeted machine unlearning with interpretable explainability, UESF exemplifies a paradigm for robust, bias-aware classifier development in high-stakes imagery domains. Its modular, model-agnostic structure positions it as a reference approach for evaluating and visualizing the effect of deliberate feature forgetting, with potential applicability to other image modalities facing analogous confounding.

For further methodological and implementation details, see (S et al., 1 Dec 2025).

References (1)
