Large Sequence Models for Sequential Decision-Making: A Survey (2306.13945v1)

Published 24 Jun 2023 in cs.LG, cs.AI, and cs.MA

Abstract: Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer. Although originally designed for prediction problems, it is natural to inquire about their suitability for sequential decision-making and reinforcement learning (RL) problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, discussing the connection between sequential decision-making and sequence modeling, and categorizing existing approaches by how they utilize the Transformer. Moreover, the paper puts forth various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems. This article has been accepted by Frontiers of Computer Science; this is an early version, and the most up-to-date version can be found at https://journal.hep.com.cn/fcs/EN/10.1007/s11704-023-2689-5
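
To make the connection between sequential decision-making and sequence modeling concrete, here is a minimal sketch in the style of the Decision Transformer (Chen et al., 2021), one representative approach in the line of work this survey covers: a trajectory is flattened into an interleaved sequence of return-to-go, state, and action tokens, and a causal Transformer is trained to predict actions. All module names, dimensions, and design choices below are illustrative assumptions, not code from the survey.

```python
# Illustrative sketch (not from the survey): casting RL trajectories as
# token sequences for a causal Transformer, Decision Transformer style.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    def __init__(self, state_dim, act_dim, d_model=128, n_layers=2,
                 n_heads=4, max_len=64):
        super().__init__()
        # Separate embeddings for the three token types at each timestep:
        # return-to-go, state, and action.
        self.embed_rtg = nn.Linear(1, d_model)
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        self.embed_time = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long tensor.
        B, T = states.shape[:2]
        t_emb = self.embed_time(timesteps)
        # Interleave (return-to-go, state, action) tokens per timestep.
        tokens = torch.stack([
            self.embed_rtg(rtg) + t_emb,
            self.embed_state(states) + t_emb,
            self.embed_action(actions) + t_emb,
        ], dim=2).reshape(B, 3 * T, -1)
        # Causal mask: each token may attend only to earlier tokens.
        mask = torch.triu(torch.ones(3 * T, 3 * T, dtype=torch.bool,
                                     device=states.device), diagonal=1)
        h = self.encoder(tokens, mask=mask)
        # Predict each action from the state token of the same timestep.
        return self.predict_action(h[:, 1::3])

# Example: predict actions for a batch of 4 trajectories of length 10.
model = TrajectoryTransformer(state_dim=17, act_dim=6)
out = model(torch.randn(4, 10, 1), torch.randn(4, 10, 17),
            torch.randn(4, 10, 6), torch.arange(10).expand(4, 10))
print(out.shape)  # torch.Size([4, 10, 6])
```

At inference time, such a model is typically conditioned on a desired return-to-go and decoded autoregressively, which reframes policy learning as conditional sequence generation rather than value iteration.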

Authors (9)
  1. Muning Wen (20 papers)
  2. Runji Lin (18 papers)
  3. Hanjing Wang (10 papers)
  4. Yaodong Yang (169 papers)
  5. Ying Wen (75 papers)
  6. Luo Mai (22 papers)
  7. Jun Wang (992 papers)
  8. Haifeng Zhang (59 papers)
  9. Weinan Zhang (322 papers)
Citations (26)
