Deep Reinforcement Learning for Playing 2.5D Fighting Games (1805.02070v1)

Published 5 May 2018 in cs.LG, cs.AI, and stat.ML

Abstract: Deep reinforcement learning has shown success in game playing. However, 2.5D fighting games remain challenging due to ambiguity in visual appearance, such as the height or depth of the characters. Moreover, actions in such games typically involve particular sequential orders, which also makes the network design difficult. Building on the Asynchronous Advantage Actor-Critic (A3C) network, we create an OpenAI-Gym-like gaming environment for the game Little Fighter 2 (LF2) and present a novel A3C+ network for learning RL agents. The introduced model includes a Recurrent Info network, which uses game-related info features with recurrent layers to capture combo skills for fighting. In the experiments, we consider LF2 under different settings, successfully demonstrating the use of our proposed model for learning 2.5D fighting games.
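The A3C backbone the abstract refers to optimizes a shared policy and value head with an advantage-weighted policy-gradient loss plus an entropy bonus. As a rough illustration only (a minimal sketch of the standard A3C per-rollout objective, not the authors' A3C+ code; all names and the terminal-bootstrap assumption are ours), the loss for one rollout can be written as:

```python
def a3c_loss(rewards, values, log_probs_taken, entropies, gamma=0.99,
             value_coef=0.5, entropy_coef=0.01):
    """Standard A3C loss for one rollout of length T.

    rewards, values, log_probs_taken, entropies: lists of length T,
    holding r_t, V(s_t), log pi(a_t|s_t), and policy entropy at each step.
    The bootstrap value for the state after the rollout is assumed to be 0
    (episode terminated); a real agent would pass V(s_T) instead.
    """
    T = len(rewards)
    returns = [0.0] * T
    R = 0.0  # assumed terminal bootstrap
    for t in reversed(range(T)):
        R = rewards[t] + gamma * R  # discounted n-step return
        returns[t] = R
    advantages = [returns[t] - values[t] for t in range(T)]
    # Policy gradient term: push up log-prob of actions with positive advantage.
    policy_loss = -sum(lp * adv for lp, adv in zip(log_probs_taken, advantages))
    # Value regression term: fit V(s_t) to the empirical return.
    value_loss = value_coef * sum(adv ** 2 for adv in advantages)
    # Entropy bonus encourages exploration.
    entropy_bonus = entropy_coef * sum(entropies)
    return policy_loss + value_loss - entropy_bonus

# Toy 3-step rollout.
loss = a3c_loss(rewards=[1.0, 0.0, 1.0],
                values=[0.5, 0.5, 0.5],
                log_probs_taken=[-0.7, -0.7, -0.7],
                entropies=[1.0, 1.0, 1.0])
```

The paper's A3C+ additionally feeds game-related info features through recurrent layers so the agent can track multi-step combo inputs; that recurrent branch is not sketched here.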

Authors (5)
  1. Yu-Jhe Li (23 papers)
  2. Hsin-Yu Chang (3 papers)
  3. Yu-Jing Lin (4 papers)
  4. Po-Wei Wu (3 papers)
  5. Yu-Chiang Frank Wang (88 papers)
Citations (5)
