
Color Diversity Index in Image Analysis

Updated 17 December 2025
  • Color Diversity Index is a quantitative metric that rigorously measures the richness, distinctness, and spatial distribution of colors in digital images using mathematical formulations.
  • It supports applications in generative modeling, automatic colorization, and image retrieval by assessing dominant primary colors, chromatic ratios, and spatial dispersion.
  • Empirical evaluations using RGB-dominant, Chromatic Number Ratio, and D-CDEN variants demonstrate enhanced performance in preserving color diversity across various datasets.

The Color Diversity Index is a class of quantitative metrics designed to rigorously measure the richness, distinctness, and spatial distribution of color in digital images. These indices provide interpretable, mathematically-grounded assessments of image color characteristics, supporting evaluation in generation tasks (e.g., generative models or automatic colorization) as well as in image retrieval and classification pipelines. Recent work has formalized these concepts as explicit indices, most notably within the contexts of image diversity evaluation in generative modeling and color feature extraction in computer vision.

1. Definition and Mathematical Formulation

Contemporary instantiations of the Color Diversity Index are built on precise mathematical foundations. In “Diverse Diffusion: Enhancing Image Diversity in Text-to-Image Generation,” the Color Diversity Index quantifies, on a per-batch basis, the number of distinct dominant colors (Red, Green, Blue) present in a set of generated images (Zameshina et al., 2023). Let $b$ denote a batch of images, each encoded in RGB space. For image $i$:

  • $R_i$, $G_i$, and $B_i$ are the mean intensities of the red, green, and blue channels, respectively.

Dominant color is defined with a scalar threshold $K \geq 1$:

$$D_K(i) = \begin{cases} \text{Red}, & \text{if } R_i > K \cdot \max(G_i, B_i) \\ \text{Green}, & \text{if } G_i > K \cdot \max(R_i, B_i) \\ \text{Blue}, & \text{if } B_i > K \cdot \max(R_i, G_i) \\ \text{None}, & \text{otherwise} \end{cases}$$

For batch $b$, let $N_K(b)$ be the number of primary colors appearing as the dominant color in at least one image:

$$N_K(b) = 3 - \prod_{i \in b} \left[1 - I(D_K(i) = \text{Red})\right] - \prod_{i \in b} \left[1 - I(D_K(i) = \text{Green})\right] - \prod_{i \in b} \left[1 - I(D_K(i) = \text{Blue})\right]$$

where $I(\cdot)$ is the indicator function. Over a set of batches $B$:

  • $\mathrm{Avg}_K(B) = \frac{1}{|B|} \sum_{b \in B} N_K(b)$ is the mean number of distinct dominant colors per batch.
  • $C3_K(B) = \frac{1}{|B|} \sum_{b \in B} I(N_K(b) = 3)$ is the fraction of batches containing all three colors.
  • $C2_K(B) = \frac{1}{|B|} \sum_{b \in B} I(N_K(b) \geq 2)$ is the fraction of batches containing at least two colors.

Alternative metrics include the Chromatic Number Ratio (CNR), introduced in “CCC: Color Classified Colorization” (Gain et al., 3 Mar 2024), and Dynamic Color Distribution Entropy of Neighborhoods (D-CDEN) (Alamdar et al., 2012). CNR focuses on the normalized cardinality of unique color classes present in generated images versus reference images, computed in the quantized $a^*b^*$ plane of LAB color space:

$$\mathrm{CNR}(P, G) = \frac{|\mathcal{U}(C_P)|}{|\mathcal{U}(C_G)|}$$

where $C_P$ and $C_G$ denote the class-maps (over 215 active color classes in LAB) of the prediction and ground truth, respectively, and $\mathcal{U}(M)$ is the set of unique class labels in $M$.

2. Computation Pipelines

The calculation of the Color Diversity Index varies with the specific metric definition. In the RGB-dominant variant (Zameshina et al., 2023), the steps are:

  1. For each image: compute channel-wise means $(R_i, G_i, B_i)$ and determine the dominant color $D_K(i)$ via channel comparison and thresholding.
  2. For the batch: calculate $N_K(b)$ using the product-based indicator formulas to determine which of the primary colors occur dominantly.
  3. For collections of batches: aggregate batchwise scores with $\mathrm{Avg}_K(B)$, $C3_K(B)$, and $C2_K(B)$.
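The steps above can be sketched in NumPy. This is a minimal illustration, assuming images are H×W×3 float arrays; the function names and array conventions are illustrative, not taken from the paper:

```python
import numpy as np

def dominant_color(img, K=1.1):
    """Classify an H x W x 3 RGB array by its dominant primary channel.

    Returns 0/1/2 for Red/Green/Blue, or None if no channel mean exceeds
    K times the larger of the other two channel means."""
    means = img.reshape(-1, 3).mean(axis=0)  # (R_i, G_i, B_i)
    for c in range(3):
        others = np.delete(means, c)
        if means[c] > K * others.max():
            return c
    return None

def batch_metrics(batches, K=1.1):
    """Compute Avg_K, C3_K, and C2_K over a list of image batches."""
    n_values = []
    for batch in batches:
        # N_K(b): number of distinct primaries dominant in at least one image
        present = {dominant_color(img, K) for img in batch} - {None}
        n_values.append(len(present))
    n = np.array(n_values)
    return n.mean(), (n == 3).mean(), (n >= 2).mean()
```

A batch of a pure-red, pure-green, and pure-blue image yields $N_K(b) = 3$; a near-gray image yields no dominant color for any $K \geq 1$.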

For CNR (Gain et al., 3 Mar 2024):

  1. Convert both predicted and reference images from RGB to LAB, extracting the $a^*, b^*$ channels.
  2. Quantize $a^*, b^*$ into bins (by default a 20×20 grid, collapsed to 215 active classes by k-means and empirical frequency thresholding).
  3. For each image, count the number of unique color classes appearing at least once.
  4. Compute the ratio of unique classes in prediction to unique classes in reference (per Eq. above).
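A simplified sketch of this pipeline, assuming images are already in LAB as H×W×3 float arrays with $a^*, b^* \in [-128, 127]$. Uniform binning stands in for the paper's k-means collapse to 215 active classes, so the class counts here are an approximation:

```python
import numpy as np

def ab_class_map(lab_img, n_bins=20):
    """Quantize the a*, b* channels of an H x W x 3 LAB array into an
    n_bins x n_bins grid and return a per-pixel class-label map.
    (Simplification: the paper further collapses the grid to 215 active
    classes via k-means and frequency thresholding.)"""
    a, b = lab_img[..., 1], lab_img[..., 2]
    a_bin = np.clip(((a + 128) / 256 * n_bins).astype(int), 0, n_bins - 1)
    b_bin = np.clip(((b + 128) / 256 * n_bins).astype(int), 0, n_bins - 1)
    return a_bin * n_bins + b_bin

def cnr(pred_lab, gt_lab, n_bins=20):
    """Chromatic Number Ratio: unique color classes in the prediction
    divided by unique color classes in the reference."""
    u_pred = np.unique(ab_class_map(pred_lab, n_bins)).size
    u_gt = np.unique(ab_class_map(gt_lab, n_bins)).size
    return u_pred / u_gt
```

A prediction covering fewer chroma bins than its reference yields CNR < 1; values above 1 indicate hues introduced beyond those in the ground truth.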

D-CDEN (Alamdar et al., 2012) employs:

  1. Quantization of image colors into $C$ discrete bins (e.g., in HSV space).
  2. Detection of all spatially contiguous “neighborhoods” of pixels sharing the same quantized color.
  3. For each color bin, construction of the normalized spatial neighborhood distribution histogram and computation of the Shannon entropy across region sizes.
  4. The final descriptor per image is $[(h_i, E_i)]_{i=1}^C$, with $E_i$ reflecting spatial color fragmentation.
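A compact sketch of steps 2–4, assuming quantization (step 1) has already produced a 2-D array of color-bin labels. BFS flood fill stands in for the raster-scan labeling used in practice; names are illustrative:

```python
import numpy as np
from collections import deque

def dcden(quantized, n_colors):
    """Sketch of D-CDEN over a 2-D array of quantized color labels.

    For each color bin c: h_c is the normalized pixel share, and E_c is
    the Shannon entropy of the 4-connected region-size distribution."""
    H, W = quantized.shape
    visited = np.zeros((H, W), dtype=bool)
    region_sizes = {c: [] for c in range(n_colors)}
    for y in range(H):
        for x in range(W):
            if visited[y, x]:
                continue
            color = quantized[y, x]
            # BFS flood fill over the 4-neighborhood with the same color
            size, queue = 0, deque([(y, x)])
            visited[y, x] = True
            while queue:
                cy, cx = queue.popleft()
                size += 1
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < H and 0 <= nx < W and not visited[ny, nx] \
                            and quantized[ny, nx] == color:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            region_sizes[color].append(size)
    descriptor = []
    for c in range(n_colors):
        h_c = sum(region_sizes[c]) / (H * W)        # abundance of color c
        sizes = np.array(region_sizes[c], dtype=float)
        if sizes.size:
            p = sizes / sizes.sum()                  # region-size distribution
            e_c = float(-(p * np.log2(p)).sum())     # spatial fragmentation
        else:
            e_c = 0.0
        descriptor.append((h_c, e_c))
    return descriptor
```

A color concentrated in one blob yields zero entropy; the same pixel count scattered across many small regions yields high entropy, which is exactly the clustered-versus-fragmented distinction the descriptor encodes.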

3. Comparison of Metrics and Interpretation

Color Diversity Indices can be classified by their statistical sensitivities:

  • Richness-based: CNR is purely a species count; it ignores abundances and only considers which color classes are present (Gain et al., 3 Mar 2024). This direct measurement is effective for evaluating whether minor or rare hues are preserved.
  • Dominant primary coverage: The RGB-based index (Zameshina et al., 2023) is tailored for simple but interpretable measurement of whether batches cover the full span of principal color axes, sensitive to dominant but not minor hues.
  • Spatial-dispersion-aware: D-CDEN (Alamdar et al., 2012) encodes not only the abundance of each color but also its spatial distribution, capturing whether colors are clustered or fragmented.

Metrics such as Shannon entropy, Simpson’s index, Gini-Simpson, and Rényi entropy explicitly consider the distributional evenness of color classes, providing orthogonal information to mere richness (Gain et al., 3 Mar 2024).
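To make the richness-versus-evenness contrast concrete, here is a minimal sketch of these evenness-aware scores over a vector of per-class color counts (function and key names are illustrative):

```python
import numpy as np

def evenness_metrics(class_counts):
    """Evenness-aware diversity scores from per-class color counts:
    Shannon entropy, Simpson's index, and the Gini-Simpson index."""
    p = np.asarray(class_counts, dtype=float)
    p = p[p > 0] / p.sum()                 # normalize over occupied classes
    shannon = float(-(p * np.log2(p)).sum())
    simpson = float((p ** 2).sum())        # P(two random pixels share a class)
    return {"shannon": shannon, "simpson": simpson,
            "gini_simpson": 1.0 - simpson}
```

A richness count such as CNR treats the histograms [25, 25, 25, 25] and [97, 1, 1, 1] identically (four classes each), whereas Shannon entropy and Gini-Simpson separate them sharply, which is the orthogonality noted above.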

Metric                      | Input Basis              | Sensitivity
Color Diversity Index (RGB) | Mean RGB per image       | Dominant class
Chromatic Number Ratio      | Quantized LAB palette    | Richness
D-CDEN                      | Quantized bins, spatial  | Color and locality

4. Empirical Results and Applications

Applications have spanned generative model evaluation, colorization, image retrieval, and classification.

For text-to-image generation, “Diverse Diffusion” employing the “pooling_cap” variant produced a multiplicative improvement in the fraction of batches containing two or three distinct dominant colors, often exceeding a 2.5× gain under stricter dominance thresholds ($K = 1.1$) and smaller batch sizes (Zameshina et al., 2023). In all tested regimes, the method did not sacrifice outputs already displaying high color coverage.

In automatic colorization, CCC’s CNR yielded maximal richness scores across datasets (ADE 1.90 vs 1.25, CelebA 1.33 vs 1.07, COCO 3.53 vs 2.89, Oxford Flowers 0.96 vs 0.88, ImageNet 1.13 vs 0.94), indicating both superior hue recovery and plausible introduction of minor hues beyond those in the ground truth (Gain et al., 3 Mar 2024).

For image retrieval and classification, D-CDEN augmented traditional histogram descriptors, increasing Precision@Recall by 5–10 percentage points in scene retrieval experiments on SIMPLIcity and Caltech101 (Alamdar et al., 2012).

5. Integration and Practical Implementation

The Color Diversity Index is typically used post-hoc as an evaluation metric. In “Diverse Diffusion,” it was not included directly in the latent sampling objective but rather used to validate latent-space selection procedures that maximize diversity by mutual distance (Zameshina et al., 2023). In CCC, CNR was computed alongside standard metrics (MSE, FID, SSIM, LPIPS, etc.) to directly gauge palette richness preservation (Gain et al., 3 Mar 2024).

Spatially-aware approaches like D-CDEN are implemented by raster-scan neighborhood labeling (with union-find) and standard entropy computation. Practical pipeline optimizations include downsampling for noise reduction, GPU-based vectorized color binning, and empirical data-driven label reduction.
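The raster-scan-with-union-find step mentioned above can be sketched as a classic two-pass connected-component labeling over a quantized color map. This is a generic textbook implementation, not code from the cited paper:

```python
import numpy as np

def label_regions(quantized):
    """Two-pass raster-scan connected-component labeling with union-find,
    grouping 4-connected pixels that share the same quantized color.
    Returns an array of compact region labels (0..n_regions-1)."""
    H, W = quantized.shape
    parent = []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    labels = np.empty((H, W), dtype=int)
    # Pass 1: assign provisional labels, merging with top/left neighbors
    for y in range(H):
        for x in range(W):
            same_top = y > 0 and quantized[y - 1, x] == quantized[y, x]
            same_left = x > 0 and quantized[y, x - 1] == quantized[y, x]
            if same_top:
                labels[y, x] = labels[y - 1, x]
                if same_left:
                    union(labels[y - 1, x], labels[y, x - 1])
            elif same_left:
                labels[y, x] = labels[y, x - 1]
            else:
                labels[y, x] = len(parent)
                parent.append(len(parent))
    # Pass 2: resolve provisional labels to their roots and compact them
    roots = {r: i for i, r in
             enumerate(sorted({find(l) for l in range(len(parent))}))}
    for y in range(H):
        for x in range(W):
            labels[y, x] = roots[find(labels[y, x])]
    return labels
```

The region sizes per color label then feed directly into the entropy computation of D-CDEN.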

6. Limitations and Considerations

Interpretation of Color Diversity Indices must account for the chosen quantization, aggregation, and spatial strategy. RGB-dominant indices capture only strong dichromatism, missing subtle or localized hues. Richness scores such as CNR are robust to class imbalance but do not reflect coverage uniformity; indices based on entropy or spatial fragmentation add complementary granularity but are more sensitive to quantization artifacts or noise (Alamdar et al., 2012). Data-driven class collapsing and batch-level normalization are advised to ensure interpretability across datasets with varying inherent color distributions (Gain et al., 3 Mar 2024).

7. Relation to Broader Image Diversity Research

Color diversity metrics are part of a broader family of diversity- and coverage-oriented metrics for image evaluation, including perceptual similarity (e.g., LPIPS), demographic representation, and structural features (Zameshina et al., 2023). Their principal value is in quantifying diversity in a manner that is easy to implement, robust to pixelwise distortions, and interpretable across model classes and datasets. Their use is increasingly central in benchmarking generative and discriminative models where color variety is a critical factor.
