Sufficiency of metrics-only approaches to address representation biases

Determine whether alternative metrics for comparing neural representations between systems can, on their own, address the challenges posed by representation biases in learned feature representations that distort cross-system comparisons.

Background

The paper documents systematic biases in learned neural representations: simpler or earlier-learned features tend to dominate variance, leading analyses such as PCA, regression, and RSA to overemphasize these features despite comparable computational roles for more complex features. These biases cause representational comparisons to misalign with functional similarity, e.g., multitask models appearing more similar to easy-task-only models than to hard-task-only models that compute the same functions.
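The variance-driven bias described above can be illustrated with a minimal numpy sketch (an illustration constructed here, not code from the paper): two features play comparable functional roles, but the "easy" feature has much larger variance, so an RSA comparison rates a multitask representation as far more similar to an easy-task-only representation than to a hard-task-only one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of stimuli

# Two latent features with comparable functional roles but unequal variance:
easy = rng.normal(size=n) * 3.0   # simple/early-learned feature, large variance
hard = rng.normal(size=n) * 0.3   # complex feature, small variance

# Hypothetical representations (stimuli x units):
multitask = np.stack([easy, hard], axis=1)                    # computes both
easy_only = np.stack([easy, rng.normal(size=n) * 0.01], axis=1)
hard_only = np.stack([hard, rng.normal(size=n) * 0.01], axis=1)

def rsa(a, b):
    """Correlate the pairwise-dissimilarity structure (RDMs) of two reps."""
    def rdm(x):
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        return d[np.triu_indices(len(x), k=1)]
    return np.corrcoef(rdm(a), rdm(b))[0, 1]

# The high-variance feature dominates the multitask RDM, so the comparison
# tracks variance, not shared computation:
print(rsa(multitask, easy_only))  # high
print(rsa(multitask, hard_only))  # near zero, despite the shared 'hard' feature
```

The `hard` feature is present in both `multitask` and `hard_only`, yet its small variance leaves almost no trace in the distance structure, which is exactly the misalignment between representational and functional similarity the paper describes.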

Given these issues, the authors consider whether changing the metrics used for representational comparison could solve the problem. They note theoretical and empirical convergence among common metrics and highlight that even alternatives like cosine similarity or soft matching distance remain sensitive to variance-driven effects. They state explicitly that it is not clear metrics alone provide a ready solution, motivating the open question of whether any purely metric-based approach can resolve these biases.
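To see why simply swapping metrics may not help, here is a sketch (again an illustration constructed here, not the paper's analysis) using cosine similarity between vectorized Gram matrices, a variant of the similarity-based alternatives mentioned above: the same variance asymmetry produces the same misleading ranking.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
easy = rng.normal(size=n) * 3.0   # high-variance feature
hard = rng.normal(size=n) * 0.3   # low-variance feature

multitask = np.stack([easy, hard], axis=1)
easy_only = easy[:, None]
hard_only = hard[:, None]

def gram_cosine(a, b):
    """Cosine similarity between the vectorized Gram matrices of two reps."""
    ga, gb = (a @ a.T).ravel(), (b @ b.T).ravel()
    return ga @ gb / (np.linalg.norm(ga) * np.linalg.norm(gb))

# Gram-matrix entries scale with feature variance, so the easy feature again
# dominates the comparison:
print(gram_cosine(multitask, easy_only))  # close to 1
print(gram_cosine(multitask, hard_only))  # close to 0
```

Because inner products scale with variance, any metric built on raw Gram or covariance structure inherits the bias, which is consistent with the authors' caution that metrics alone may not offer a ready solution.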

References

It is natural to ask whether alternative metrics for comparing systems could address some of these challenges. While considering a range of metrics is generally good practice, it is not clear that there is a ready solution from metrics alone.

Representation biases: will we achieve complete understanding by analyzing representations? (2507.22216 - Lampinen et al., 29 Jul 2025) in Discussion, subsection "What are the potential solutions?"