Level Generation Through Large Language Models (2302.05817v2)

Published 11 Feb 2023 in cs.AI, cs.CL, and cs.NE

Abstract: Large Language Models (LLMs) are powerful tools, capable of leveraging their training on natural language to write stories, generate code, and answer questions. But can they generate functional video game levels? Game levels, with their complex functional constraints and spatial relationships in more than one dimension, are very different from the kinds of data an LLM typically sees during training. Datasets of game levels are also hard to come by, potentially taxing the abilities of these data-hungry models. We investigate the use of LLMs to generate levels for the game Sokoban, finding that LLMs are indeed capable of doing so, and that their performance scales dramatically with dataset size. We also perform preliminary experiments on controlling LLM level generators and discuss promising areas for future work.
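To make the approach concrete: a level can be serialized as plain text and sampled from a fine-tuned causal language model. The sketch below is not the authors' code; it assumes a hypothetical local fine-tuned checkpoint ("./sokoban-gpt2"), a standard Sokoban ASCII encoding, and the Hugging Face transformers generation API.

```python
# Minimal sketch (not the paper's implementation) of sampling a
# Sokoban level from a causal LM fine-tuned on text-encoded levels.
# "./sokoban-gpt2" is a hypothetical checkpoint path, not a released model.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("./sokoban-gpt2")
model = AutoModelForCausalLM.from_pretrained("./sokoban-gpt2")

# Assumed encoding: levels serialized row by row with newlines, using
# the conventional Sokoban ASCII symbols:
#   '#' wall, '@' player, '$' box, '.' goal, ' ' floor.
# "<LEVEL>" is an assumed start-of-level marker added during fine-tuning.
prompt = "<LEVEL>\n"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,     # sample stochastically so repeated calls yield distinct levels
    temperature=0.8,
)
level_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(level_text)
```

Sampled outputs would still need a validity check (e.g., running a Sokoban solver on the generated grid), since the model's output is not guaranteed to be solvable.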

Authors (5)
  1. Graham Todd (9 papers)
  2. Sam Earle (25 papers)
  3. Muhammad Umair Nasir (7 papers)
  4. Michael Cerny Green (30 papers)
  5. Julian Togelius (154 papers)
Citations (55)