
Adapting xLSTM’s internal query mechanism for Mem-RPE

Investigate whether modifying the xLSTM architecture to directly connect its internal query mechanism to the goal image improves performance on the Mem‑RPE task compared to using xLSTM as a plug‑and‑play sequence model.


Background

In comparisons on the Mem‑RPE task, the xLSTM baseline performs worse than a GRU; however, xLSTM was used only as a plug‑and‑play sequence model with a generic decoder.

The authors hypothesize that better coupling between xLSTM’s internal querying and the goal image could enhance performance.
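To make the hypothesis concrete, below is a minimal sketch of one way such a coupling could look, assuming a PyTorch-style single-step mLSTM cell. The class and variable names (GoalConditionedMLSTMCell, goal, etc.) are hypothetical and not from the paper, and the original mLSTM's exponential gating is simplified to sigmoid gates for brevity; the only change relative to a standard mLSTM step is that the query projection sees the goal embedding in addition to the current input.

```python
import torch
import torch.nn as nn

class GoalConditionedMLSTMCell(nn.Module):
    """Sketch (not the paper's method): an mLSTM-style cell whose query
    is conditioned on a goal embedding.

    In a standard mLSTM step, the query q_t is a projection of the input
    x_t alone. Here q_t is a projection of [x_t ; g], so retrieval from
    the matrix memory C_t is steered toward goal-relevant content.
    """

    def __init__(self, d_in: int, d_goal: int, d_head: int):
        super().__init__()
        self.d_head = d_head
        # Standard mLSTM key/value projections from the input.
        self.W_k = nn.Linear(d_in, d_head)
        self.W_v = nn.Linear(d_in, d_head)
        # The modification: the query sees [x_t ; g] instead of x_t alone.
        self.W_q = nn.Linear(d_in + d_goal, d_head)
        # Scalar input/forget gates (sigmoid here for a stable sketch;
        # the original mLSTM uses exponential gating).
        self.w_i = nn.Linear(d_in, 1)
        self.w_f = nn.Linear(d_in, 1)

    def forward(self, x, g, C, n):
        """One step. x: (B, d_in), g: (B, d_goal),
        C: (B, d_head, d_head) matrix memory, n: (B, d_head) normalizer."""
        i = torch.sigmoid(self.w_i(x))            # input gate,  (B, 1)
        f = torch.sigmoid(self.w_f(x))            # forget gate, (B, 1)
        k = self.W_k(x) / self.d_head ** 0.5      # key,   (B, d_head)
        v = self.W_v(x)                           # value, (B, d_head)
        q = self.W_q(torch.cat([x, g], dim=-1))   # goal-conditioned query

        # Memory update: C_t = f * C_{t-1} + i * v k^T
        C = f.unsqueeze(-1) * C + i.unsqueeze(-1) * torch.einsum("bi,bj->bij", v, k)
        n = f * n + i * k

        # Retrieval: h_t = C_t q / max(|n^T q|, 1)
        num = torch.einsum("bij,bj->bi", C, q)
        den = (n * q).sum(-1, keepdim=True).abs().clamp(min=1.0)
        return num / den, C, n


# Usage: roll the cell over an observation sequence with a fixed goal embedding.
B, T, d_in, d_goal, d_head = 2, 5, 64, 32, 16
cell = GoalConditionedMLSTMCell(d_in, d_goal, d_head)
obs = torch.randn(B, T, d_in)     # per-step observation features
goal = torch.randn(B, d_goal)     # goal-image embedding (hypothetical encoder)
C = torch.zeros(B, d_head, d_head)
n = torch.zeros(B, d_head)
for t in range(T):
    h, C, n = cell(obs[:, t], goal, C, n)
```

One design choice worth noting in this sketch: only the query is goal-conditioned, while keys and values remain driven by observations alone, so the memory content stays goal-agnostic and only retrieval becomes goal-directed. Whether this particular coupling is the right one is exactly what the open question asks.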

References

The more recent xLSTM performed less well than a GRU, but was used as a plug-n-play sequence model. We conjecture that performance could be optimized further by opening the black box and making the internal query mechanism connect to the goal image more directly.

Kinaema: A Recurrent Sequence Model for Memory and Pose in Motion (arXiv:2510.20261, Sariyildiz et al., 23 Oct 2025), Section 5, Mem-RPE performance.