HatePrototypes: Interpretable Hate Speech Detection

Updated 16 November 2025
  • HatePrototypes are class-level vector representations built by averaging LM hidden states to detect hate speech across domains.
  • They enable efficient cross-domain transfer and parameter-free early exiting using a prototype-gap margin criterion to maintain robust macro-F1 scores.
  • This approach provides an interpretable, data-efficient solution for both explicit and implicit hate speech detection with practical deployment benefits.

HatePrototypes are a family of class-level vector representations designed to provide interpretable, efficient, and transferable mechanisms for detecting both explicit and implicit hate speech in natural language content. This approach leverages prototype centroids computed directly from hidden states of fine-tuned LMs, enabling cross-domain transferability and parameter-free early exiting, thereby simplifying deployment and adaptation across varying moderation tasks (Proskurina et al., 9 Nov 2025).

1. Formal Definition and Prototype Construction

Given a labeled hate-speech corpus $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$ with $y_i \in \{0, 1\}$ for the non-hate and hate classes, HatePrototypes are constructed as follows. Let $h^{(\ell)}(x) \in \mathbb{R}^{d}$ denote the hidden representation of input $x$ at layer $\ell$, where $d$ is the LM hidden size.

For each class $c$ and layer $\ell$, the prototype centroid is

$$\mu^{(\ell)}_c = \frac{1}{|\mathcal{D}_c|} \sum_{(x, y) \in \mathcal{D}_c} h^{(\ell)}(x)$$

where $\mathcal{D}_c$ is the subset of inputs in class $c$.

At inference, both the sample representation and the prototypes are $\ell_2$-normalized:

$$\tilde{h}^{(\ell)}(x) = \frac{h^{(\ell)}(x)}{\|h^{(\ell)}(x)\|_2}, \qquad \tilde{\mu}^{(\ell)}_c = \frac{\mu^{(\ell)}_c}{\|\mu^{(\ell)}_c\|_2}$$

Classification is performed by dot-product similarity:

$$s^{(\ell)}_c(x) = \langle \tilde{h}^{(\ell)}(x), \tilde{\mu}^{(\ell)}_c \rangle$$

Unlike contrastive learning or clustering-based approaches, HatePrototypes do not introduce learned parameters or a prototype-alignment loss; prototypes are computed by averaging fine-tuned LM activations.
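Because no parameters are learned, the whole pipeline reduces to a few lines of array arithmetic. A minimal NumPy sketch, assuming per-sample hidden states have already been extracted from a fine-tuned LM (the Gaussian `hidden` array below is a random stand-in for real activations, and the function names are illustrative):

```python
import numpy as np

def build_prototypes(hidden, labels, n_classes=2):
    """Average hidden states per class to form the prototype centroids."""
    return np.stack([hidden[labels == c].mean(axis=0) for c in range(n_classes)])

def classify(hidden, prototypes):
    """L2-normalize samples and prototypes, then pick the most similar class."""
    h = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    mu = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    scores = h @ mu.T  # cosine similarity of each sample to each class prototype
    return scores.argmax(axis=-1), scores

# Toy example: two well-separated Gaussian "classes" in a 768-d feature space.
rng = np.random.default_rng(0)
d = 768
hidden = np.concatenate([rng.normal(+1, 1, (100, d)), rng.normal(-1, 1, (100, d))])
labels = np.array([0] * 100 + [1] * 100)

protos = build_prototypes(hidden, labels)
preds, _ = classify(hidden, protos)
print((preds == labels).mean())  # near-perfect separation on this toy data
```

The only stored state is the `(n_classes, d)` prototype matrix per layer, which is why swapping prototypes between datasets (Section 3) requires no retraining.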

2. Data Efficiency: Minimal Prototype Sets

Empirical results demonstrate that prototypes remain robust even when constructed from very few examples. With only $k \approx 50$ labeled samples per class, macro-F1 saturates, deviating by less than 2 pp from the value obtained with 500 samples per class. Prototype selection entails random sampling, centroid calculation, and variance estimation via repeated draws.

No additional regularization is necessary beyond normalization; the averaging suppresses noise, and degradation only becomes notable below $k = 20$ samples per class.
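The selection procedure described above (random sampling, centroid calculation, variance estimation over repeated draws) can be sketched as follows. The synthetic Gaussian features and plain accuracy stand in for real LM activations and macro-F1; the function name is illustrative:

```python
import numpy as np

def prototype_accuracy(train_h, train_y, test_h, test_y, k, rng):
    """Build prototypes from k random samples per class; return test accuracy."""
    protos = []
    for c in (0, 1):
        idx = rng.choice(np.flatnonzero(train_y == c), size=k, replace=False)
        protos.append(train_h[idx].mean(axis=0))
    mu = np.stack(protos)
    mu /= np.linalg.norm(mu, axis=-1, keepdims=True)
    h = test_h / np.linalg.norm(test_h, axis=-1, keepdims=True)
    return ((h @ mu.T).argmax(-1) == test_y).mean()

rng = np.random.default_rng(1)
d = 64
train_h = np.concatenate([rng.normal(+0.5, 1, (500, d)), rng.normal(-0.5, 1, (500, d))])
train_y = np.array([0] * 500 + [1] * 500)
test_h = np.concatenate([rng.normal(+0.5, 1, (200, d)), rng.normal(-0.5, 1, (200, d))])
test_y = np.array([0] * 200 + [1] * 200)

# Variance over repeated draws at k = 50, versus the full 500 samples per class.
accs_50 = [prototype_accuracy(train_h, train_y, test_h, test_y, 50, rng) for _ in range(10)]
acc_500 = prototype_accuracy(train_h, train_y, test_h, test_y, 500, rng)
print(f"k=50: {np.mean(accs_50):.3f} +/- {np.std(accs_50):.3f}   k=500: {acc_500:.3f}")
```

On this synthetic data the $k = 50$ centroids already match the full-data centroids, mirroring the saturation reported in the paper.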

3. Transferability Across Explicit and Implicit Hate Benchmarks

HatePrototypes support transfer across benchmarks targeting explicit (surface-level abusive terms, targeted slurs) and implicit hate (demeaning comparisons, exclusionary suggestions, disguised violence). Two key transfer scenarios:

  • Cross-domain: Using prototype centroids from one dataset with a model fine-tuned on another, e.g., classifying SBIC test data using SBIC prototypes and an OPT model fine-tuned on HateXplain.
  • Prototype-based transfer: Using prototypes computed from dataset $Y$ to classify dataset $X$ samples with a classifier trained on $X$.

Transfer efficiency is measured by the relative macro-F1 ratio

$$\frac{\mathrm{F1}(X \mid \mathrm{proto}(Y))}{\mathrm{F1}(X \mid \mathrm{proto}(X))}$$

Transfer is largely robust: BERT-based HatePrototypes retain >90% of in-domain F1 across implicit/explicit benchmarks such as IHC, SBIC, OLID, and HateXplain. OPT-based transfer is less robust, dropping to $\sim 64\%$ on challenging pairs (notably IHC$\leftarrow$SBIC).
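The ratio is a plain quotient of two macro-F1 scores. As a worked example with illustrative numbers (not values from the paper):

```python
# Relative macro-F1 ratio: performance with transferred prototypes over
# performance with in-domain prototypes. The scores below are illustrative.
f1_in_domain = 0.78     # F1(X | proto(X))
f1_transferred = 0.72   # F1(X | proto(Y))
ratio = f1_transferred / f1_in_domain
print(f"{ratio:.1%} of in-domain F1 retained")  # -> 92.3% of in-domain F1 retained
```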

A plausible implication is that prototype averaging over LM feature space captures generic semantic features relevant to hate speech, permitting interchangeable use across datasets with differing hate type distributions.

4. Parameter-Free Early Exiting

Efficiency is enhanced via a prototype-gap margin criterion enabling early exit before the full LM forward pass. For each layer $\ell$, define the margin

$$m^{(\ell)}(x) = s^{(\ell)}_{(1)}(x) - s^{(\ell)}_{(2)}(x)$$

where $s_{(1)}$ and $s_{(2)}$ are the highest and second-highest prototype similarities. The exit rule is to stop at the lowest layer $\hat{\ell}$ where

$$m^{(\hat{\ell})}(x) \geq \delta$$

for a fixed threshold $\delta$; otherwise, the full model is used. This approach requires no additional learnable parameters.
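The exit rule can be sketched as follows, assuming per-layer hidden states and per-layer prototype matrices are available (the 2-d toy values below are stand-ins, and the function name is illustrative):

```python
import numpy as np

def early_exit_predict(layer_hiddens, layer_protos, delta):
    """Stop at the first layer whose prototype-gap margin reaches delta.

    layer_hiddens: list of (d,) sample representations, one per layer.
    layer_protos:  list of (n_classes, d) prototype matrices, one per layer.
    Returns (predicted class, exit layer index).
    """
    for ell, (h, mu) in enumerate(zip(layer_hiddens, layer_protos)):
        h = h / np.linalg.norm(h)
        mu = mu / np.linalg.norm(mu, axis=-1, keepdims=True)
        scores = mu @ h
        top2 = np.sort(scores)[-2:]      # runner-up and best similarity
        if top2[1] - top2[0] >= delta:   # margin m^(l)(x) >= delta: exit here
            return int(scores.argmax()), ell
    # Fall through: no layer was confident enough, use the full model's output.
    return int(scores.argmax()), len(layer_hiddens) - 1

# Toy example: identity prototypes; the margin grows with depth, so the
# sample exits before the final layer.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
hiddens = [np.array([0.7, 0.7]), np.array([1.0, 0.5]), np.array([1.0, 0.1])]
print(early_exit_predict(hiddens, [protos] * 3, delta=0.3))  # -> (0, 1)
```

Because the rule only sorts the per-class similarities already computed for classification, it adds no parameters and negligible overhead per layer.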

Experiments reveal an average reduction of $\sim 20\%$ in forward-pass layers with negligible F1 impact. Compared to entropy-based (DeeBERT) and patience-based (PABEE) gating, prototype-gap exiting matches or surpasses performance, particularly on explicit hate tasks.

Thresholds for $\delta$ are task-dependent: for explicit HateXplain, $\delta \approx 0.05$; for implicit SBIC, $\delta \approx 0.125$.

5. Quantitative Performance and Experimental Setup

Models evaluated include BERT-base, OPT-125M, LLaMA-Guard-1B, and BLOOMZ-Guard-3B on IHC (implicit), SBIC (implicit), OLID (explicit), and HateXplain (explicit). Standard LM fine-tuning is performed for three epochs at a learning rate of $1\times10^{-5}$ and batch size 64. Prototype construction uses training splits, with up to 500 samples per class.

Key results:

  • Cross-domain F1 increases by up to +28 pp (e.g., BERT, HateXplain$\rightarrow$SBIC: +28.02 F1).
  • In-domain and cross-domain pairs: prototype-based classification matches or slightly exceeds fine-tuned head performance.
  • Guard models improve on explicit hate detection when swapping to HatePrototypes, e.g., LLaMA-Guard accuracy on OLID increases from 46.9% to 71.3%.
  • Early exiting achieves the same F1 as the full model with up to 1.5× speed-ups.

The approach runs on a single NVIDIA A100 (80 GB) and, given its parameter-free inference, imposes negligible deployment constraints.

6. Qualitative and Error Analysis

Error analysis on IHC categories identifies incitement (disguised calls for violence or solidarity) and irony (question-answer riddles with discriminatory encoding) as the most challenging for cross-domain transfer. For example, accuracy drops to 40–58% for incitement when prototypes do not encode the geometry of implicit-hate concepts.

Layer-wise analysis demonstrates that implicit hate samples require deeper semantic processing, exiting at layers 9–12, while explicit hate can often be detected before layer 8 with stable margins.

Low prototype-similarity cases often indicate out-of-distribution, adversarial, or under-represented examples. This suggests potential use of prototype similarity as an uncertainty signal for active learning or ambiguous annotation surfacing.
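Using prototype similarity as an uncertainty signal amounts to thresholding each sample's best similarity score. A minimal sketch, assuming a routing threshold `tau` chosen on validation data (the function name and toy inputs are illustrative):

```python
import numpy as np

def flag_uncertain(hidden, prototypes, tau=0.5):
    """Return indices of samples whose best prototype similarity is below tau.

    Low-similarity cases are candidates for out-of-distribution, adversarial,
    or under-represented inputs worth routing to human annotation.
    """
    h = hidden / np.linalg.norm(hidden, axis=-1, keepdims=True)
    mu = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    best = (h @ mu.T).max(axis=-1)  # similarity to the nearest class prototype
    return np.flatnonzero(best < tau)

# Toy example: the middle sample sits between the two identity prototypes,
# so its best similarity is low and it gets flagged for review.
hidden = np.array([[1.0, 0.0], [0.1, 0.1], [0.0, 1.0]])
protos = np.eye(2)
print(flag_uncertain(hidden, protos, tau=0.8))  # -> [1]
```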

7. Limitations and Directions for Future Research

Prototype-gap early exiting may degrade out-of-domain performance without careful $\delta$ tuning. A per-layer threshold schedule could mitigate this issue. Prototype-based transfer lags the performance of fully fine-tuned heads on harder domain pairs; learning a small alignment head atop static prototypes is a plausible avenue.

Implicit hate datasets suffer from low inter-annotator agreement, impacting prototype quality. More granular annotation would directly benefit the approach. HatePrototypes may be suited for active-learning pipelines, highlighting ambiguous or atypical examples for further annotation.

Overall, HatePrototypes offer a parameter-free, data-efficient, and interpretable solution for detecting and transferring both explicit and implicit hate speech, enabling practical deployment and insight into LM decision boundaries without repeated re-training or contrastive learning (Proskurina et al., 9 Nov 2025).
