
AutoCas: Autoregressive Cascade Predictor in Social Networks via Large Language Models (2502.18040v1)

Published 25 Feb 2025 in cs.SI and cs.AI

Abstract: Popularity prediction in information cascades plays a crucial role in social computing, with broad applications in viral marketing, misinformation control, and content recommendation. However, information propagation mechanisms, user behavior, and temporal activity patterns exhibit significant diversity, necessitating a foundational model capable of adapting to such variations. At the same time, the amount of available cascade data remains relatively limited compared to the vast datasets used for training LLMs. Recent studies have demonstrated the feasibility of leveraging LLMs for time-series prediction by exploiting commonalities across different time-series domains. Building on this insight, we introduce the Autoregressive Information Cascade Predictor (AutoCas), an LLM-enhanced model designed specifically for cascade popularity prediction. Unlike natural language sequences, cascade data is characterized by complex local topologies, diffusion contexts, and evolving dynamics, requiring specialized adaptations for effective LLM integration. To address these challenges, we first tokenize cascade data to align it with sequence modeling principles. Next, we reformulate cascade diffusion as an autoregressive modeling task to fully harness the architectural strengths of LLMs. Beyond conventional approaches, we further introduce prompt learning to enhance the synergy between LLMs and cascade prediction. Extensive experiments demonstrate that AutoCas significantly outperforms baseline models in cascade popularity prediction while exhibiting scaling behavior inherited from LLMs. Code is available at this repository: https://anonymous.4open.science/r/AutoCas-85C6

Summary

  • The paper introduces AutoCas, a novel framework that leverages autoregressive LLMs to predict information cascades in social networks with enhanced accuracy.
  • The paper employs a unique cascade tokenization method, combining local and global embeddings with prompt learning to model dynamic diffusion patterns.
  • The paper demonstrates significant performance improvements over traditional methods, reducing prediction errors such as MSLE and MAPE across various social platforms.

AutoCas: Autoregressive Cascade Predictor in Social Networks via LLMs

Introduction

The paper introduces AutoCas, a framework employing LLMs to predict information cascade popularity in social networks. AutoCas capitalizes on the autoregressive modeling capabilities of LLMs to handle the sequential and dynamic nature of information diffusion across vast and complex networks. This model is particularly relevant in the domain of social computing, where predicting the spread and popularity of information has significant applications in marketing, misinformation control, and content recommendation.

Cascade Popularity Prediction

AutoCas addresses the challenge of modeling information cascades, defined as the process through which information spreads across a network of users. Traditional methods often fall short due to the intricate, evolving dynamics of user interactions and content diffusion. AutoCas reformulates these diffusion processes using autoregressive modeling, recognizing the sequential dependencies characteristic of cascades. This approach aligns well with LLMs, which excel at predicting the next element in a sequence from its history.

Figure 1: Illustration of cascade popularity prediction. Given an information item, the target is to model its diffusion process and predict how many users will be aware of it.
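The task setup above can be sketched as a minimal data structure: given the adoptions observed in an early time window, predict the cascade's final audience size. The names and fields below are illustrative, not taken from the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Cascade:
    """An information cascade: the item plus its time-stamped adoption events."""
    item_id: str
    adoptions: list  # (user_id, timestamp) pairs, sorted by time

    def observed_prefix(self, t_obs: float) -> list:
        """Adoptions visible within the observation window [0, t_obs)."""
        return [(u, t) for (u, t) in self.adoptions if t < t_obs]

    def final_popularity(self) -> int:
        """Ground-truth target: total number of distinct adopting users."""
        return len({u for (u, _) in self.adoptions})

# Observe the first hour of a toy cascade: the model sees only the first
# 3 adopters and must predict the final popularity of 5.
cas = Cascade("post-42", [("a", 0.0), ("b", 0.4), ("c", 0.9), ("d", 3.2), ("e", 7.5)])
prefix = cas.observed_prefix(1.0)
target = cas.final_popularity()
```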

Methodology

1. Cascade Tokenization

One of the key innovations in AutoCas is the tokenization of cascade data. Unlike natural language, cascade data involves complex interactions that require special treatment. AutoCas first extracts local embeddings from the cascade graph and global embeddings from the broader network context. These embeddings are fused into cascade token sequences that summarize the propagation state over successive time windows of the cascade timeline.

Figure 2: The motivation of adapting LLMs for cascade modeling, illustrating the parallels between text generation and cascade diffusion in sequence modeling.
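A minimal sketch of the tokenization step, assuming the local (cascade-graph) and global (network-context) embeddings are already computed per time window; plain concatenation stands in here for whatever fusion operator the paper actually uses.

```python
def tokenize_cascade(local_embs, global_embs):
    """Fuse per-window local and global embeddings into one cascade token
    per time window. Concatenation is a stand-in fusion operator."""
    assert len(local_embs) == len(global_embs)
    return [le + ge for le, ge in zip(local_embs, global_embs)]

# Two time windows: 3-dim local and 2-dim global embeddings -> 5-dim tokens.
local_embs = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
global_embs = [[1.0, 1.1], [1.2, 1.3]]
tokens = tokenize_cascade(local_embs, global_embs)
```

The resulting token sequence plays the role that word tokens play in text generation: one token per diffusion step, ordered in time.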

2. Autoregressive Modeling

AutoCas leverages the autoregressive framework by feeding projected cascade tokens into a fixed pre-trained LLM. The LLM's parameters remain frozen, preserving its pretrained capabilities, while lightweight layers on top of the model adapt it to the task. This design minimizes computational cost and improves scalability, since the full model need not be retrained for different cascade instances or prediction horizons.

Figure 3: The overview framework of AutoCas, highlighting local and global embeddings extraction, autoregressive modeling, and prompt integration.
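The frozen-backbone design can be illustrated with a toy stand-in: a fixed feature extractor plays the role of the frozen LLM, and only a small linear head is trained. Everything here (the mean-pooling backbone, the SGD-trained head) is a simplified sketch, not the paper's architecture.

```python
def frozen_backbone(token_seq):
    """Stand-in for the frozen pre-trained LLM: maps a token sequence to a
    fixed-size hidden state. It receives no gradient updates."""
    d = len(token_seq[0])
    return [sum(tok[i] for tok in token_seq) / len(token_seq) for i in range(d)]

class LinearHead:
    """Lightweight trainable layer placed on top of the frozen backbone."""
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def __call__(self, h):
        return sum(wi * hi for wi, hi in zip(self.w, h)) + self.b

    def sgd_step(self, h, target, lr=0.1):
        err = self(h) - target  # gradient of squared error w.r.t. prediction
        self.w = [wi - lr * err * hi for wi, hi in zip(self.w, h)]
        self.b -= lr * err

# Toy training loop: only the head's parameters change.
tokens = [[0.1, 0.2], [0.3, 0.4]]
head = LinearHead(dim=2)
h = frozen_backbone(tokens)
for _ in range(200):
    head.sgd_step(h, target=5.0)  # e.g. a popularity target
```

After training, `head(h)` converges to the target while `frozen_backbone` is never modified, mirroring how AutoCas adapts through lightweight layers without retraining the LLM.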

3. Prompt Learning

To enhance the LLM's adaptability, AutoCas incorporates prompt learning, tailoring textual prompts that guide the LLM in understanding task-specific contexts. These prompts are informed by the temporal and structural specifics of the cascade data, ensuring that the model's predictions are grounded in relevant context-driven cues.
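As a sketch of how such prompts might be assembled, the template below composes a textual prompt from simple temporal statistics of the cascade; the wording and fields are hypothetical, since the paper's exact prompt design is not reproduced here.

```python
def build_cascade_prompt(platform, n_observed, t_obs_hours, growth_rate):
    """Compose a task-specific textual prompt from cascade statistics.
    Template and fields are illustrative, not the paper's actual prompt."""
    return (
        f"Task: predict the final popularity of an information cascade on {platform}. "
        f"Observed {n_observed} adopters in the first {t_obs_hours} hours; "
        f"recent growth rate is {growth_rate:.2f} adopters/hour. "
        f"Use temporal and structural context to forecast the final audience size."
    )

prompt = build_cascade_prompt("Weibo", n_observed=120, t_obs_hours=1, growth_rate=35.5)
```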

Experimental Evaluation

AutoCas was evaluated on datasets from platforms including Weibo, Twitter, and APS. The results demonstrated significant performance improvements over existing methods, with marked reductions in prediction-error metrics such as MSLE and MAPE. AutoCas also exhibited scaling behavior: accuracy improved as the size of the underlying LLM increased, consistent with the scaling laws observed in language modeling.

Figure 4: Performance and training efficiency comparison of AutoCas with different LLMs, assessing scalability across datasets.
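The two error metrics can be computed as follows. MSLE is shown here with base-2 logarithms of 1 + count, a common convention in the cascade-prediction literature, and MAPE in its plain form; log-scaled MAPE variants also appear in this line of work, and the paper's exact definitions may differ.

```python
import math

def msle(y_true, y_pred):
    """Mean Squared Log Error: mean of (log2(1 + pred) - log2(1 + true))^2."""
    return sum((math.log2(1 + p) - math.log2(1 + t)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: mean of |pred - true| / true."""
    return sum(abs(p - t) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# A cascade of true size 1 predicted as 3 contributes (log2(4) - log2(2))^2 = 1
# to MSLE; a perfectly predicted cascade contributes 0 to both metrics.
print(msle([1, 7], [3, 7]))  # 0.5
print(mape([1, 7], [3, 7]))  # 1.0
```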

Conclusion

The integration of LLMs into cascade modeling presents a robust approach to dealing with the complexities of information diffusion in social networks. AutoCas leverages the sequential nature of cascades and the autoregressive capabilities of LLMs, offering a sophisticated tool for popularity prediction tasks. Future research may explore broader applications of this approach, potentially extending beyond social networks into other domains requiring sequential prediction capabilities.
