LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction (2406.10811v1)

Published 16 Jun 2024 in cs.CL, cs.AI, and cs.CE

Abstract: Recently, LLMs have attracted significant attention for their exceptional performance across a broad range of tasks, particularly in text analysis. However, the finance sector presents a distinct challenge due to its dependence on time-series data for complex forecasting tasks. In this study, we introduce a novel framework called LLMFactor, which employs Sequential Knowledge-Guided Prompting (SKGP) to identify factors that influence stock movements using LLMs. Unlike previous methods that relied on keyphrases or sentiment analysis, this approach focuses on extracting factors more directly related to stock market dynamics, providing clear explanations for complex temporal changes. Our framework directs the LLMs to create background knowledge through a fill-in-the-blank strategy and then discerns potential factors affecting stock prices from related news. Guided by background knowledge and identified factors, we leverage historical stock prices in textual format to predict stock movement. An extensive evaluation of the LLMFactor framework across four benchmark datasets from both the U.S. and Chinese stock markets demonstrates its superiority over existing state-of-the-art methods and its effectiveness in financial time-series forecasting.

The paper "LLMFactor: Extracting Profitable Factors through Prompts for Explainable Stock Movement Prediction" presents an innovative approach to stock movement prediction using LLMs. This research addresses the challenge of applying LLMs to financial forecasting, a field that typically relies on time-series data rather than pure text analysis.

The authors introduce a framework named LLMFactor, which uses a technique called Sequential Knowledge-Guided Prompting (SKGP). Rather than relying solely on traditional sentiment analysis or keyphrase extraction, the method identifies the factors that drive stock movements, aiming for a deeper understanding of market dynamics.

Key elements of the LLMFactor framework include:

  1. Sequential Knowledge-Guided Prompting (SKGP): This technique uses structured prompts that guide the LLM to generate relevant background knowledge through a fill-in-the-blank strategy. That knowledge then helps the model discern potential factors influencing stock prices from related news (a minimal sketch of the full flow follows this list).
  2. Factor Extraction: Unlike traditional keyphrase- or sentiment-based methods, the framework directly targets market-relevant factors, offering explanations for temporal changes in stock prices and improving interpretability by linking the extracted factors to stock movements.
  3. Integration of Historical Data: The framework renders historical stock prices as text and combines them with the identified factors to predict future stock movements, bringing time-series data into an otherwise text-centric LLM analysis.
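
To make this flow concrete, the sketch below strings the three stages together as plain prompt calls. It is a minimal illustration based only on the description above: the prompt wording, the `call_llm` stub, and the example company names are assumptions, not the authors' released prompts or code.

```python
# Minimal sketch of the three SKGP stages described above.
# All prompts, names, and the call_llm stub are illustrative assumptions.

from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder: route `prompt` to any chat-style LLM and return its reply.

    Replace this stub with a real client call (OpenAI, a local model, etc.).
    """
    return "<LLM response to: " + prompt.splitlines()[0] + ">"


def build_background(target: str, related: str) -> str:
    # Stage 1: fill-in-the-blank prompt eliciting background knowledge
    # about the relation between the target stock and a related entity.
    prompt = (
        "Fill in the blank with the relation between the two companies.\n"
        f"{target} ____ {related}."
    )
    return call_llm(prompt)


def extract_factors(background: str, news: List[str], k: int = 5) -> str:
    # Stage 2: ask the LLM to pull the top-k factors that may move the
    # target stock out of the related news, guided by the background knowledge.
    prompt = (
        f"Background: {background}\n"
        "News:\n" + "\n".join(f"- {n}" for n in news) + "\n"
        f"Extract the top {k} factors that may affect the target stock's price."
    )
    return call_llm(prompt)


def predict_movement(background: str, factors: str, prices_text: str) -> str:
    # Stage 3: combine background, factors, and historical prices rendered
    # as text to obtain a rise/fall prediction with a short explanation.
    prompt = (
        f"Background: {background}\n"
        f"Factors: {factors}\n"
        f"Recent price movements: {prices_text}\n"
        "Will the stock rise or fall on the next trading day? Explain briefly."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    bg = build_background("Apple", "Foxconn")
    factors = extract_factors(bg, ["Foxconn halts production at a major iPhone plant."])
    print(predict_movement(bg, factors, "Mon: rise, Tue: fall, Wed: rise, Thu: rise"))
```

The design point the sketch tries to capture is that each stage's output feeds the next stage's prompt, so the final prediction is grounded in both the generated background knowledge and the extracted factors rather than in raw news text alone.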

The paper evaluates the effectiveness of the LLMFactor framework on four benchmark datasets from the U.S. and Chinese stock markets. The results show that LLMFactor outperforms existing cutting-edge methods in financial time-series forecasting, highlighting its potential to improve predictive accuracy and clarity in stock market predictions.

Overall, this research contributes to the growing field of financial prediction by leveraging the advanced capabilities of LLMs and introducing a method that enhances both the accuracy and explainability of stock movement predictions.

Authors (3)
  1. Meiyun Wang (12 papers)
  2. Kiyoshi Izumi (16 papers)
  3. Hiroki Sakaji (21 papers)