An Efficient deep learning model to Predict Stock Price Movement Based on Limit Order Book (2505.22678v1)

Published 14 May 2025 in q-fin.TR and cs.LG

Abstract: In high-frequency trading (HFT), leveraging limit order books (LOB) to model stock price movements is crucial for achieving profitable outcomes. However, this task is challenging due to the high-dimensional and volatile nature of the original data. Even recent deep learning models often struggle to capture price movement patterns effectively, particularly without well-designed features. We observed that raw LOB data exhibits inherent symmetry between the ask and bid sides, and that the bid-ask differences demonstrate greater stability and lower complexity than the original data. Building on this insight, we propose a novel approach that leverages a Siamese architecture to enhance the performance of existing deep learning models. The core idea is to process the ask and bid sides separately using the same module with shared parameters. We applied our Siamese-based methods to several widely used strong baselines and validated their effectiveness using data from 14 military-industry stocks in the Chinese A-share market. Furthermore, we integrated multi-head attention (MHA) mechanisms with the Long Short-Term Memory (LSTM) module to investigate their role in modeling stock price movements. Our experiments used both raw data and the widely used Order Flow Imbalance (OFI) features as inputs to several strong baseline models. The results show that our method improves the performance of strong baselines in over 75% of cases, excluding the Multi-Layer Perceptron (MLP) baseline, which performed poorly and is not considered practical. We also found that multi-head attention can enhance model performance, particularly over shorter forecasting horizons.
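
The core idea from the abstract (one module with shared parameters applied separately to the ask and bid sides, combined with an MHA-augmented LSTM) can be sketched as follows. This is a minimal illustrative sketch, assuming a PyTorch implementation; the layer sizes, the concatenation-based fusion step, and the three-class movement labels are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SiameseLOBModel(nn.Module):
    """Hypothetical sketch of a Siamese LOB encoder: a single LSTM with
    shared weights processes the ask side and the bid side separately,
    the two branches are fused, and multi-head self-attention over time
    feeds a classifier over {down, stationary, up}."""

    def __init__(self, side_dim: int, hidden: int = 64,
                 heads: int = 4, n_classes: int = 3):
        super().__init__()
        # One encoder, reused for both sides -- the "Siamese" weight tying.
        self.encoder = nn.LSTM(side_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, ask: torch.Tensor, bid: torch.Tensor) -> torch.Tensor:
        # ask, bid: (batch, time, side_dim), e.g. price/volume per LOB level.
        h_ask, _ = self.encoder(ask)   # same weights...
        h_bid, _ = self.encoder(bid)   # ...for both sides
        h = torch.cat([h_ask, h_bid], dim=-1)  # fuse the two branches
        h, _ = self.attn(h, h, h)              # self-attention over time steps
        return self.head(h[:, -1])             # classify from the last step

# Example usage with random data shaped like a 10-level LOB
# (price + volume per level, so 20 features per side):
x_ask = torch.randn(32, 100, 20)
x_bid = torch.randn(32, 100, 20)
logits = SiameseLOBModel(side_dim=20)(x_ask, x_bid)
print(logits.shape)  # torch.Size([32, 3])
```

The weight tying is the essential part: because both sides pass through the same `encoder`, the model is forced to learn one representation of order-book dynamics that exploits the ask/bid symmetry the authors observed.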
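The abstract also uses Order Flow Imbalance (OFI) features as model inputs. A common best-level OFI formulation, in the spirit of Cont, Kukanov and Stoikov (2014), is sketched below; the paper may use a different or multi-level variant, so treat this as a hedged illustration rather than the authors' exact feature pipeline.

```python
import numpy as np

def ofi_level1(bid_p, bid_q, ask_p, ask_q):
    """Best-level Order Flow Imbalance from sequences of best bid/ask
    price and size (1-D arrays sampled per event or per time step).
    Returns one OFI value per transition."""
    # Bid-side contribution: new size counts when the bid does not move
    # down; old size is subtracted when the bid does not move up.
    d_bid = (np.where(bid_p[1:] >= bid_p[:-1], bid_q[1:], 0.0)
             - np.where(bid_p[1:] <= bid_p[:-1], bid_q[:-1], 0.0))
    # Ask-side contribution, with the inequalities mirrored.
    d_ask = (np.where(ask_p[1:] <= ask_p[:-1], ask_q[1:], 0.0)
             - np.where(ask_p[1:] >= ask_p[:-1], ask_q[:-1], 0.0))
    # Positive values indicate net buying pressure.
    return d_bid - d_ask
```

OFI is attractive here for the same reason as the Siamese design: it is built from the bid-ask difference, which the authors note is more stable and less complex than the raw book.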
