
Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content

Published 9 May 2024 in cs.CY, cs.LG, stat.ME, cs.HC, and cs.IR | arXiv:2405.05596v1

Abstract: Most modern recommendation algorithms are data-driven: they generate personalized recommendations by observing users' past behaviors. A common assumption in recommendation is that how a user interacts with a piece of content (e.g., whether they choose to "like" it) is a reflection of the content, but not of the algorithm that generated it. Although this assumption is convenient, it fails to capture user strategization: that users may attempt to shape their future recommendations by adapting their behavior to the recommendation algorithm. In this work, we test for user strategization by conducting a lab experiment and survey. To capture strategization, we adopt a model in which strategic users select their engagement behavior based not only on the content, but also on how their behavior affects downstream recommendations. Using a custom music player that we built, we study how users respond to different information about their recommendation algorithm as well as to different incentives about how their actions affect downstream outcomes. We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes." For example, participants who are told that the algorithm mainly pays attention to "likes" and "dislikes" use those functions 1.9x more than participants told that the algorithm mainly pays attention to dwell time. A close analysis of participant behavior (e.g., in response to our incentive conditions) rules out experimenter demand as the main driver of these trends. Further, in our post-experiment survey, nearly half of participants self-report strategizing "in the wild," with some stating that they ignore content they actually like to avoid over-recommendation of that content in the future. Together, our findings suggest that user strategization is common and that platforms cannot ignore the effect of their algorithms on user behavior.


Summary

  • The paper demonstrates that user interactions are strategically altered based on how recommendation algorithms are described, with significant differences in likes and skips confirmed experimentally.
  • The study used a custom-built music player to test how varied algorithm transparency influences behavior, revealing that anticipated future recommendations drive increased engagement and faster decision-making.
  • These findings imply that platforms must account for strategic user behavior when designing recommendation systems, as data collected under one algorithm may not be comparable after changes.


Introduction

Have you ever noticed that your behavior changes when you think someone is watching or taking notes? It turns out that the way we interact with recommendation systems—like those on Spotify, TikTok, or Netflix—might also change when we believe our behavior can influence what we see next. The paper "Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content" explores this exact phenomenon. The authors discuss how users strategically adapt their behaviors to manipulate what content is recommended to them, and they experimentally test this idea using a custom-built music player.
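The paper's core modeling idea is that a strategic user chooses an action based not only on how much they enjoy the content right now, but also on how that action will shift their future recommendations. A minimal sketch of that idea (the function names, payoff numbers, and discount factor below are illustrative assumptions, not the paper's notation):

```python
# Toy model of a strategic user: a naive user reacts only to the content,
# while a strategic user also weighs how each action shifts future
# recommendations. All values here are made up for illustration.

def naive_choice(actions, enjoyment):
    """Pick the action with the highest immediate payoff."""
    return max(actions, key=lambda a: enjoyment[a])

def strategic_choice(actions, enjoyment, future_value, discount=0.8):
    """Pick the action that maximizes the immediate payoff plus the
    discounted value of the recommendations the action induces."""
    return max(actions, key=lambda a: enjoyment[a] + discount * future_value[a])

# Example: the user mildly enjoys a song, but "liking" it would flood
# their future feed with similar tracks they don't want more of.
actions = ["like", "skip", "dwell"]
enjoyment = {"like": 0.6, "skip": 0.0, "dwell": 0.5}
future_value = {"like": -1.0, "skip": 0.3, "dwell": 0.1}

print(naive_choice(actions, enjoyment))                    # -> like
print(strategic_choice(actions, enjoyment, future_value))  # -> dwell
```

Under these toy payoffs, the strategic user listens without liking — echoing the survey respondents who reported ignoring content they actually enjoy to avoid over-recommendation.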

Key Findings

Differences in User Behavior

One of the main findings of the paper is that users alter their behavior based on their understanding of how the recommendation algorithm works. The researchers assigned participants to different groups where each group received different information regarding how their interactions would be used by the algorithm. Remarkably, even minor changes in the description significantly altered user behavior:

  • Likes Information Condition: Participants told that the algorithm mainly considers "likes" and "dislikes" submitted, on average, 2.9 more of these actions than participants who received no specific information.
  • Dwell Information Condition: Participants told that the algorithm focuses on "dwell time" (how long they spent on each song) submitted, on average, 3.0 fewer likes and dislikes than the control group.

Such evidence contradicts the common assumption that a user's interaction with recommended content reflects only the content itself, not the algorithm that served it.

Influence of Future Incentives

The study also found that users showed different interaction patterns when they believed their current behaviors would affect future outcomes, akin to planning for better future recommendations:

  • Participants expecting personalized recommendations after their interactions ("Treatment" condition) had significantly higher engagement metrics, including 3.6 more "likes" and "dislikes" and 3 more fast skips.
  • These participants also exhibited faster decision-making, as indicated by shorter average dwell times.

Practical Implications

For platforms, these insights could be vital. A major implication is that data collected under one algorithm might not be directly transferable or comparable to another. This can lead to potential misjudgments if platforms change their recommendation algorithms without accounting for strategic user behavior.
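This comparability problem can be illustrated with a toy simulation (entirely my own construction, not the paper's analysis): two cohorts with identical taste, but different beliefs about what the algorithm rewards, leave very different "like" signals in the logs. The 1.9x multiplier loosely mirrors the gap the paper reports between its likes- and dwell-information conditions.

```python
# Hedged sketch: same underlying taste, different beliefs about the
# algorithm, different logged like rates. Reading the logs at face
# value would misestimate how much each cohort actually enjoys.

import random

random.seed(0)

def simulate_likes(n_users, base_like_rate, belief_multiplier):
    """Fraction of users who 'like' a song when they scale their liking
    behavior by how much they believe likes matter to the algorithm."""
    rate = min(1.0, base_like_rate * belief_multiplier)
    return sum(random.random() < rate for _ in range(n_users)) / n_users

taste = 0.20  # true fraction of songs either cohort actually enjoys

likes_cohort = simulate_likes(10_000, taste, belief_multiplier=1.9)
dwell_cohort = simulate_likes(10_000, taste, belief_multiplier=1.0)

# Same taste, different logged signal: the first cohort appears to
# enjoy almost twice as much of the catalog as the second.
print(f"likes-condition like rate: {likes_cohort:.2f}")
print(f"dwell-condition like rate: {dwell_cohort:.2f}")
```

The point of the sketch: if a platform switched from a dwell-driven to a like-driven algorithm (and users noticed), the jump in like rates would reflect strategization, not a change in preferences.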

Moreover, users who understand how algorithms work may be nudging their recommendations in directions that serve them better (or, sometimes, worse). For instance, a Spotify user might skip through songs quickly to train the system more efficiently, sacrificing immediate enjoyment for better long-term recommendations.

Theoretical Implications and Future Directions

This research adds to the understanding of human-algorithm interactions by showing that users do not passively consume recommended content. Instead, there is a dynamic game at play, in which users actively try to influence the system. This takes us beyond traditional notions of measuring user satisfaction through engagement alone.

Future research could explore how these findings translate to other settings, such as news recommendations or social media feeds. Platforms might also explore more transparent ways of communicating how their algorithms work to improve user experience while keeping manipulability in check.

Reflection on User Survey Insights

Beyond the experiment, the research team surveyed participants to understand how often they consciously try to influence their recommendations on real-world platforms. About 47% of respondents confirmed that they do, indicating broad awareness among users of their ability to shape their algorithmic interactions.

Participants often mentioned behaviors like browsing in "Incognito mode" to hide interests and creating multiple accounts for different content types. Some reported avoiding actions that might pigeonhole them into a narrow range of recommendations, such as having the same kind of music tracks or videos resurface day after day.

Conclusion

This paper sheds light on a relatively unexplored aspect of user-algorithm interaction: strategization. The evidence shows that users adapt their behavior not just to their personal preferences but also to their understanding of how algorithms work. For data scientists and platform designers, these insights are invaluable for building more user-friendly and robust recommendation systems. As algorithms continue to shape our online experiences, understanding the nuanced ways users interact with them will only grow in importance.
