Learning Directly from Grammar Compressed Text (2002.12570v1)

Published 28 Feb 2020 in stat.ML, cs.CL, and cs.LG

Abstract: Neural networks trained on large amounts of text data have been successfully applied to a variety of tasks. While massive text data is usually stored in compressed form using techniques such as grammar compression, almost all previous machine learning methods assume already-decompressed sequence data as their input. In this paper, we propose a method to apply neural sequence models directly to text compressed with grammar compression algorithms, without decompression. To encode the unique symbols that appear in compression rules, we introduce composer modules that incrementally encode the symbols into vector representations. Through experiments on real datasets, we empirically show that the proposed model achieves both memory and computational efficiency while maintaining moderate performance.
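
The abstract sketches the core idea: a grammar compressor replaces repeated substrings with nonterminal symbols defined by rules, a composer module builds a vector for each nonterminal from the vectors of the symbols in its rule, and a downstream sequence model then runs over the short compressed sequence instead of the full text. Below is a minimal sketch of that pipeline, assuming binary rules given in topological order, an MLP composer, and an LSTM classifier; the class name ComposerEncoder, the layer choices, and the toy grammar are illustrative assumptions, not the architecture or data from the paper.

```python
# Hypothetical sketch: classify grammar-compressed text without decompression.
# Assumes a straight-line grammar with binary rules X -> (A, B), listed so that
# a rule's children are always defined before the rule itself.
import torch
import torch.nn as nn


class ComposerEncoder(nn.Module):
    def __init__(self, num_terminals, dim=64, num_classes=2):
        super().__init__()
        # Embeddings for terminal symbols (e.g., characters), ids [0, num_terminals).
        self.terminal_emb = nn.Embedding(num_terminals, dim)
        # Composer: maps the two child vectors of a binary rule to the parent vector.
        self.composer = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        # Downstream sequence model applied to the (short) compressed sequence.
        self.seq_model = nn.LSTM(dim, dim, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def symbol_vectors(self, rules):
        """Incrementally build vectors for all symbols: terminals first, then
        each nonterminal by composing its children's already-computed vectors."""
        vecs = self.terminal_emb.weight
        for left_id, right_id in rules:  # nonterminal ids follow the terminals
            parent = self.composer(torch.cat([vecs[left_id], vecs[right_id]], dim=-1))
            vecs = torch.cat([vecs, parent.unsqueeze(0)], dim=0)
        return vecs

    def forward(self, rules, compressed_seq):
        vecs = self.symbol_vectors(rules)
        # Look up vectors for the compressed sequence (a mix of terminal and
        # nonterminal ids) and classify it without expanding the rules.
        x = vecs[compressed_seq].unsqueeze(0)      # shape (1, seq_len, dim)
        _, (h, _) = self.seq_model(x)
        return self.classifier(h[-1])


# Toy example: terminals a=0, b=1; rule 2 -> (a, b); rule 3 -> (2, 2).
# The compressed sequence [3, 0] therefore represents the text "ababa".
model = ComposerEncoder(num_terminals=2)
logits = model(rules=[(0, 1), (2, 2)], compressed_seq=torch.tensor([3, 0]))
print(logits.shape)  # torch.Size([1, 2])
```

The composer is evaluated once per grammar rule rather than once per original character, and the sequence model sees only the compressed sequence, which is where the memory and computational savings claimed in the abstract would come from.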

Authors (3)
  1. Yoichi Sasaki (1 paper)
  2. Kosuke Akimoto (6 papers)
  3. Takanori Maehara (44 papers)
