
Dense Cosine Similarity Maps Overview

Updated 10 February 2026
  • Dense Cosine Similarity Maps are a dense, detector-free representation that computes pixel-wise cosine similarities using ℓ₂-normalized descriptors.
  • The method employs a fully convolutional ResNet-based architecture to extract per-pixel descriptors, enabling precise correspondence estimation between images.
  • Contrastive training with synthetic augmentations ensures robust matching under geometric and photometric distortions, outperforming traditional keypoint-based methods.

Dense Cosine Similarity Maps (DCSMs) provide a fully dense, detector-free representation of pixelwise correspondence between images by leveraging local descriptors and the cosine similarity measure. DCSMs support robust dense image matching under diverse geometric and photometric distortions, obviating the need for explicit keypoint detection. They are constructed by extracting ℓ₂-normalized descriptors at every pixel using a convolutional neural network and computing the cosine similarity for every possible pair of spatial locations across source and target images, enabling fine-grained pixel-level correspondence estimation in challenging visual conditions (Kwiatkowski et al., 2024).

1. Descriptor Network Architecture

The central component for generating DCSMs is a fully convolutional deep network with the following structure:

  • Input: RGB image $x \in \mathbb{R}^{3 \times H \times W}$.
  • Backbone: A compact ResNet-style architecture with:
    • Initial $3 \times 3$ convolution (stride 1).
    • Ten residual blocks (each comprising two $3 \times 3$ convolutions, batch normalization, and ReLU, all with stride 1).
    • Final $1 \times 1$ convolution to yield $d$ channels ($d = 128$).
  • Output: Dense feature map $f_\theta(x) \in \mathbb{R}^{d \times H \times W}$, preserving the full spatial resolution.
  • Receptive field: Approximately $43 \times 43$ pixels, as dictated by the convolutional stack.
  • Per-pixel descriptors: For a spatial location $(x, y)$ in image $i$, the descriptor is $f_i(x, y) = f_\theta(x_i)[:, y, x] \in \mathbb{R}^d$.

This architecture ensures that each pixel in the input image is associated with a descriptor that summarizes information from its local context but maintains spatial correspondence with the input (Kwiatkowski et al., 2024).
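Assuming every convolution in the stack is $3 \times 3$ with stride 1 and the residual shortcuts contribute no receptive-field growth (both assumptions consistent with the description above), the quoted $43 \times 43$ receptive field follows from simple arithmetic:

```python
# Receptive-field arithmetic for the descriptor network described above:
# one initial 3x3 convolution plus ten residual blocks of two 3x3
# convolutions each, all stride 1. Each 3x3 stride-1 convolution grows
# the receptive field by 2 pixels.
def receptive_field(num_convs: int, kernel: int = 3, stride: int = 1) -> int:
    rf = 1
    jump = 1  # spacing of adjacent output positions, measured in input pixels
    for _ in range(num_convs):
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

n_convs = 1 + 10 * 2  # initial conv + two convs per residual block
print(receptive_field(n_convs))  # 43
```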

2. Construction and Definition of DCSMs

Given two images $x_1$ and $x_2$ and their extracted dense feature maps $f_1, f_2 \in \mathbb{R}^{d \times H \times W}$, the DCSM $S$ assigns to each pair of pixels $((x, y), (x', y'))$ the cosine similarity between their descriptors:

$$S((x, y), (x', y')) = \frac{\langle f_1(x, y), f_2(x', y') \rangle}{\| f_1(x, y) \|_2 \, \| f_2(x', y') \|_2}$$

All descriptors are $\ell_2$-normalized prior to the dot product, so $S \in [-1, 1]$. The resulting similarity tensor encodes a dense correspondence likelihood for every location pair between the two images. The DCSM supports fully detector-free matching, relying solely on per-pixel features (Kwiatkowski et al., 2024).
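As a minimal sketch (using NumPy and random features in place of a trained network), the all-pairs construction above reduces to a single tensor contraction over $\ell_2$-normalized feature maps:

```python
import numpy as np

def dcsm(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """All-pairs cosine similarity between two (d, H, W) feature maps.

    Returns S of shape (H, W, H, W) with
    S[y, x, y2, x2] = cos(f1[:, y, x], f2[:, y2, x2]).
    """
    # l2-normalize descriptors along the channel axis
    f1 = f1 / (np.linalg.norm(f1, axis=0, keepdims=True) + 1e-8)
    f2 = f2 / (np.linalg.norm(f2, axis=0, keepdims=True) + 1e-8)
    # contract the channel dimension over all pairs of spatial locations
    return np.einsum('dyx,dvu->yxvu', f1, f2)

rng = np.random.default_rng(0)
f = rng.standard_normal((128, 8, 8))  # toy 8x8 map with d = 128
S = dcsm(f, f)
assert S.shape == (8, 8, 8, 8)
assert np.all(S <= 1 + 1e-6) and np.all(S >= -1 - 1e-6)
# every pixel is maximally similar to itself
assert np.allclose(S[3, 4, 3, 4], 1.0)
```

The `einsum` contraction is exactly the GPU-friendly all-pairs operation the method relies on; at full resolution the $H W \times H W$ tensor is large, so implementations typically evaluate it blockwise.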

3. Contrastive Training of Dense Descriptors

Dense descriptors are optimized by contrastive learning under strong geometric perturbations:

  • Positive pair sampling: For each training batch, a uniform grid of $N$ points $\{p_i\}_{i=1}^N$ is sampled in image $A$ and projected to image $B$ under the ground-truth homography $\mathcal{H}$, followed by small random spatial jitter.
  • Descriptor sampling: Features at corresponding (possibly non-integer) grid locations are extracted via differentiable bilinear sampling (Spatial Transformer mechanism).
  • Similarity matrix: For the descriptor sets $\{a_i\}$ from image $A$ and $\{b_j\}$ from image $B$, the similarity matrix is $S_{ij} = \langle a_i, b_j \rangle$.
  • Bi-directional InfoNCE loss (CLIP-style): Compute softmax distributions over the matrix rows and columns:

$$p_A(i, j) = \frac{\exp(S_{ij})}{\sum_k \exp(S_{ik})}, \quad p_B(i, j) = \frac{\exp(S_{ij})}{\sum_k \exp(S_{kj})}$$

Define primary and dual cross-entropy losses:

$$L_A = -\frac{1}{N} \sum_{i=1}^N \log p_A(i, i), \quad L_B = -\frac{1}{N} \sum_{i=1}^N \log p_B(i, i)$$

Total loss:

$$L = \frac{L_A + L_B}{2}$$

Minimizing $L$ jointly promotes high cosine similarity for true correspondences and low similarity for all other pairs (Kwiatkowski et al., 2024).
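A compact NumPy sketch of the bi-directional loss above, assuming the $N$ descriptors from each image are stacked into $N \times d$ arrays (no temperature scaling, matching the formulas as stated):

```python
import numpy as np

def bidirectional_infonce(a: np.ndarray, b: np.ndarray) -> float:
    """CLIP-style symmetric InfoNCE over N paired descriptors (N x d each)."""
    # l2-normalize so the similarity matrix holds cosine similarities
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    S = a @ b.T  # S[i, j] = <a_i, b_j>

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)  # numerical stability
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    n = S.shape[0]
    diag = np.arange(n)
    L_A = -log_softmax(S, axis=1)[diag, diag].mean()  # rows: A -> B
    L_B = -log_softmax(S, axis=0)[diag, diag].mean()  # cols: B -> A
    return (L_A + L_B) / 2
```

As a sanity check, correctly paired descriptors yield a lower loss than mispaired ones, since all off-diagonal entries act as negatives:

```python
rng = np.random.default_rng(1)
a = rng.standard_normal((32, 16))
assert bidirectional_infonce(a, a) < bidirectional_infonce(a, a[::-1])
```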

4. Synthetic Augmentations and Regularization

Training utilizes a synthetic data pipeline (SIDAR) to maximize descriptor invariance:

  • Augmentation types: Perspective warps, occlusions, shadows, specular reflections, and complex illumination changes are extensively applied to image pairs.
  • Grid jitter: Random offset is added to grid positions, mitigating overfitting to fixed pixel locations and injecting spatial regularization.
  • Negative sampling: Explicit mining is unnecessary; all non-corresponding grid locations in each batch serve as negatives and are included via the InfoNCE setup.

This approach enforces robustness to a wide range of appearance and geometric transformations during matching, setting the method apart from earlier approaches that rely on real-world pairs or limited augmentation strategies (Kwiatkowski et al., 2024).
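The grid sampling and jitter described above can be sketched as follows; the image size and jitter scale here are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def sample_correspondences(H, n_grid=16, jitter=2.0, size=(256, 256), seed=0):
    """Project a uniform grid from image A into image B under homography H,
    then add small random jitter. Parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    h, w = size
    ys, xs = np.meshgrid(np.linspace(0, h - 1, n_grid),
                         np.linspace(0, w - 1, n_grid), indexing='ij')
    pts_a = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (N, 2), N = n_grid**2
    # homogeneous projection x' ~ H x, then de-homogenize
    ones = np.ones((pts_a.shape[0], 1))
    proj = np.hstack([pts_a, ones]) @ H.T
    pts_b = proj[:, :2] / proj[:, 2:3]
    # small spatial jitter regularizes against overfitting to fixed locations
    pts_b += rng.normal(scale=jitter, size=pts_b.shape)
    return pts_a, pts_b

pa, pb = sample_correspondences(np.eye(3))
```

The jittered, generally non-integer points `pts_b` are what the differentiable bilinear sampling step then reads descriptors from.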

5. Quantitative Evaluation and Comparative Results

Performance is evaluated on 4,000 image pairs with ground-truth homographies spanning undeformed and strongly distorted scenarios. The following protocols and metrics are employed:

  • Procedure:
    • Estimate dense correspondences → RANSAC → recovered homography $\hat{H}$.
    • Quantify error using Mean Corner Error (MCE):

    $$\mathrm{MCE}(H, \hat{H}) = \frac{1}{4} \sum_{k=1}^4 \| H x_k - \hat{H} x_k \|_2$$

    where $x_k$ are the image corners.
    • Measure pointwise reprojection error and count inliers at thresholds $t \in \{0.1, 1, 10\}$ px.

  • Outcomes:

    • At 4 px grid sampling, ConDL achieves sub-pixel accuracy on a larger fraction of pairs than SuperGlue or LoFTR under strong distortions.
    • Increasing sampling density to every 2 px yields more potential matches, but introduces more outliers and hence requires additional RANSAC iterations.
    • Even under heavy distortion, $\ell_2$-normalized descriptors trained with SIDAR augmentations outperform classical descriptors such as SIFT, though SIFT remains competitive (Kwiatkowski et al., 2024).
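A minimal sketch of the MCE metric used in this evaluation; the corner distances are averaged over the four corners here, consistent with the "mean" in the metric's name (summing is an equivalent convention up to a factor of 4):

```python
import numpy as np

def mean_corner_error(H, H_hat, width, height):
    """Mean distance between image corners warped by the ground-truth and
    the estimated homography."""
    corners = np.array([[0, 0], [width - 1, 0],
                        [width - 1, height - 1], [0, height - 1]], float)

    def warp(M, pts):
        # apply homography in homogeneous coordinates, then de-homogenize
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ M.T
        return p[:, :2] / p[:, 2:3]

    return np.linalg.norm(warp(H, corners) - warp(H_hat, corners), axis=1).mean()

assert mean_corner_error(np.eye(3), np.eye(3), 640, 480) == 0.0
```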

6. Implementation Details and Practical Considerations

Key practical factors underlying the deployment and training of DCSMs in ConDL are summarized below:

| Component | Setting/value | Notes |
|---|---|---|
| Descriptor network | ResNet-10 (128 channels, batch norm, ReLU) | Fully convolutional |
| Sampling grid | $16 \times 16$ ($N = 256$) per image during training | Adjustable at inference for spatial coverage/control |
| Optimizer | Adam, $lr = 10^{-3}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$ | |
| Training schedule | 500 epochs ($\approx$ 60 h on NVIDIA RTX A6000, 48 GB) | |
| Batch size | 16 image pairs | |
| Negative pairs | All non-diagonal grid pairs within each batch | No explicit negative mining |

The all-pairs similarity is highly parallelizable and suited for modern GPU computation. The spatial invariance properties stem in part from aggressive synthetic augmentations and the differentiable spatial sampling process (Kwiatkowski et al., 2024).

7. Relationship to Broader Dense Matching and Descriptor Frameworks

DCSMs, as implemented in ConDL, represent a shift from detector-based and sparse-matching regimes to fully dense, learning-based correspondence estimation in the presence of extreme geometric and appearance variability. Unlike previous matching frameworks that depend on keypoint detectors (e.g., SIFT, SuperGlue), DCSMs enable per-pixel correspondence without explicit detection or pre-filtering, drawing on lessons from contrastive learning and synthetic augmentation pipelines. The adoption of a bi-directional InfoNCE loss parallels CLIP and related contrastive frameworks. The result is dense feature maps with strong invariance properties, enabling robust matching even under photometric and geometric disruptions typically challenging for traditional approaches (Kwiatkowski et al., 2024).
