Exact baseline retrieval accuracy of the K-winner Modern Hopfield Network

Derive the exact baseline retrieval accuracy of the K-winner Modern Hopfield Network when it is probed with untrained pseudo-memory patterns in the random sparse binary setting, i.e., provide a full analytical characterization of this baseline performance.

Background

The paper compares a distributed K-winner Modern Hopfield Network (MHN) to the original 1-winner MHN under continual learning with random sparse binary patterns. For the original MHN, the authors provide a theoretical analysis showing exponential decay of retention and characterize the pseudo-memory baseline analytically.

In contrast, for the K-winner MHN the authors note that the exact baseline retrieval accuracy against untrained pseudo-patterns has not been derived analytically. They discuss intuitive factors that may contribute to its higher baseline (e.g., partial connectivity and aggregation over multiple winning hidden units), but a complete formal treatment is missing. Such an analysis would clarify how the K-winner architecture's parameters shape baseline performance relative to the original MHN.
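The paper does not give code for the K-winner MHN, and the exact architecture details are not reproduced here; as a rough illustration of the quantity in question, the sketch below estimates the pseudo-memory baseline by Monte Carlo under one plausible instantiation. All parameters (N, M, K, p, the Hebbian-style storage rule, the mean-over-winners aggregation, and the sparsity-matched binarization) are assumptions for this sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (assumptions, not taken from the paper)
N = 200       # pattern dimension
M = 500       # hidden units
K = 10        # number of winning hidden units
p = 0.1       # sparsity: fraction of active bits per pattern
n_mem = 50    # number of stored memories
n_probe = 200 # number of untrained pseudo-memory probes

def sparse_binary(n, dim, sparsity, rng):
    """Random sparse binary patterns: each bit is 1 with prob. `sparsity`."""
    return (rng.random((n, dim)) < sparsity).astype(float)

# Storage (one generic K-winner scheme): each memory pulls the weights of
# its K most responsive hidden units toward the pattern.
W = rng.normal(0.0, 0.1, (M, N))  # random initial feedforward weights
for x in sparse_binary(n_mem, N, p, rng):
    winners = np.argpartition(W @ x, -K)[-K:]
    W[winners] += 0.5 * (x - W[winners])

# Baseline: probe with fresh pseudo-memories that were never stored and
# measure bitwise agreement between the probe and its reconstruction.
acc = []
for x in sparse_binary(n_probe, N, p, rng):
    winners = np.argpartition(W @ x, -K)[-K:]
    recon = W[winners].mean(axis=0)          # aggregate over the K winners
    thresh = np.sort(recon)[-int(p * N)]     # binarize at the probe's sparsity
    out = (recon >= thresh).astype(float)
    acc.append((out == x).mean())

print(f"empirical baseline bitwise accuracy ~= {np.mean(acc):.3f}")
```

Note that for sparse patterns the chance level of bitwise agreement between two independent probes is already high (p^2 + (1-p)^2, e.g. 0.82 at p = 0.1), which is one reason the baseline itself needs careful analytical treatment.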

References

We do not have a full analysis of the exact baseline retrieval accuracy of the K-winner MHN. However, we can offer an intuitive characterization of some factors contributing to its higher baseline accuracy (as compared to the original MHN).

Neural Computation Without Slots: Steps Towards Biologically Plausible Memory and Attention in Natural and Artificial Intelligence (2511.04593 - Bhandarkar et al., 6 Nov 2025), Supplementary Information (SI), Section "The K-winner MHN", Subsection "Baseline Retrieval Accuracy and Metrics for Memory Performance with Random Patterns", Subsubsection "Baseline Accuracy"