Transformer with Tree-order Encoding for Neural Program Generation (2206.13354v1)

Published 30 May 2022 in cs.CL and cs.AI

Abstract: While a considerable number of semantic parsing approaches have employed RNN architectures for code generation tasks, there have been only a few attempts to investigate the applicability of Transformers for this task. Including hierarchical information of the underlying programming language syntax has proven to be effective for code generation. Since the positional encoding of the Transformer can only represent positions in a flat sequence, we have extended the encoding scheme to allow the attention mechanism to also attend over hierarchical positions in the input. Furthermore, we have realized a decoder based on a restrictive grammar graph model to improve the generation accuracy and ensure the well-formedness of the generated code. While we did not surpass the state of the art, our findings suggest that employing a tree-based positional encoding in combination with a shared natural-language subword vocabulary improves generation performance over sequential positional encodings.
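The abstract describes two mechanisms: a tree-based positional encoding that lets the attention mechanism see hierarchical positions, and a grammar-constrained decoder. The following is a minimal PyTorch sketch, not taken from the paper, of one way such a tree-order encoding could be realized, assuming each AST node's position is represented as its root-to-node path of child indices; the class name TreePositionalEncoding and the parameters max_depth and max_children are illustrative, and the paper's exact scheme may differ.

    import torch
    import torch.nn as nn

    class TreePositionalEncoding(nn.Module):
        """Encode a node's tree position as the sum of learned embeddings,
        one per (depth, child-index) step on the node's root-to-node path."""

        def __init__(self, d_model: int, max_depth: int = 32, max_children: int = 64):
            super().__init__()
            self.max_children = max_children
            # one embedding row per (depth, child-index) pair
            self.step_embed = nn.Embedding(max_depth * max_children, d_model)

        def forward(self, paths: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
            # paths: (batch, seq, max_depth) child indices along each node's
            #        root-to-node path, 0-padded where mask is False
            # mask:  (batch, seq, max_depth) True at valid path steps
            depth_ids = torch.arange(paths.size(-1), device=paths.device)
            flat_ids = depth_ids * self.max_children + paths    # unique id per (depth, child) step
            emb = self.step_embed(flat_ids)                     # (batch, seq, max_depth, d_model)
            emb = emb * mask.unsqueeze(-1).to(emb.dtype)        # zero out padded steps
            return emb.sum(dim=2)                               # (batch, seq, d_model)

    # toy usage: two nodes with root-to-node paths of depth <= 3
    enc = TreePositionalEncoding(d_model=16)
    paths = torch.tensor([[[0, 1, 0], [0, 2, 1]]])              # (1, 2, 3)
    mask = torch.tensor([[[True, True, False],
                          [True, True, True]]])
    tree_pe = enc(paths, mask)                                  # (1, 2, 16)

In use, these encodings would be added to the node or token embeddings before the encoder's self-attention layers, in the same place a standard sequential positional encoding would be added.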

Authors (4)
  1. Klaudia-Doris Thellmann (1 paper)
  2. Bernhard Stadler (2 papers)
  3. Ricardo Usbeck (36 papers)
  4. Jens Lehmann (80 papers)
Citations (1)
