Public evidence on retention effects of item-level survey-based ranking

Establish whether publicly available empirical evidence demonstrates that incorporating predicted item-level survey responses into content ranking algorithms increases long-term user retention metrics (e.g., views, sessions), and clarify how this effect depends on survey wording and on the specific retention definition over different time scales.

Background

In Section 5.4, the authors discuss internal observations from multiple platforms suggesting that predictions of item-level survey responses can correlate with long-term retention, while noting variability based on survey wording and retention definitions. However, they explicitly state a lack of clear public evidence verifying these effects.

This creates a concrete gap between internal platform findings and publicly documented, reproducible evidence, motivating the need to establish and publish empirical results quantifying the retention impact of using item-level survey predictions in ranking.

References

Multiple platforms reported that at least one of their item-level survey prediction models correlated with long-term retention. However, they noted that not all surveys have done so and that this depends both on the survey wording and on the definition of retention (e.g., views vs. sessions, and over different time scales). We do not know of any clear public evidence on this point.

What We Know About Using Non-Engagement Signals in Content Ranking (2402.06831 - Cunningham et al., 9 Feb 2024) in Section 5.4 (Item-Level Surveys)