Grammar compression with probabilistic context-free grammar (2003.08097v1)
Abstract: We propose a new approach to universal lossless text compression based on grammar compression. In the literature, a target string $T$ is compressed as a context-free grammar $G$ in Chomsky normal form satisfying $L(G) = \{T\}$; such a grammar is often called a \emph{straight-line program} (SLP). In this paper, we consider a probabilistic grammar $G$ that generates $T$, but not necessarily as the unique element of $L(G)$. To recover the original text $T$ unambiguously, we keep both the grammar $G$ and the derivation tree of $T$ from the start symbol of $G$, in compressed form. We give simple theoretical and practical evidence that our proposal is more efficient than SLPs for certain texts.
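To make the contrast concrete, here is a minimal Python sketch under toy assumptions (the grammars, rule sets, and function names are illustrative, not from the paper): an SLP has exactly one rule per nonterminal, so the grammar alone determines $T$, while a probabilistic grammar may offer several rules per nonterminal, so the sequence of rule choices (the derivation tree) must be stored alongside $G$, and the rule probabilities suggest how cheaply that sequence can be entropy-coded.

```python
# Toy sketch contrasting SLPs with the PCFG-based idea; not the authors' code.

def expand_slp(rules, sym):
    """Expand an SLP symbol; anything without a rule is a terminal.

    An SLP has exactly one rule per nonterminal, so expansion is
    deterministic and the grammar alone determines T."""
    rhs = rules.get(sym)
    if rhs is None:
        return sym  # terminal symbol
    return "".join(expand_slp(rules, s) for s in rhs)

# SLP in Chomsky normal form with L(G) = {"abab"}.
slp = {"S": ("P", "P"), "P": ("X", "Y"), "X": ("a",), "Y": ("b",)}

def decode(pcfg, choices, start="S"):
    """Recover T by replaying a leftmost derivation: `choices` lists, in
    order, which rule was applied at each expansion step.  In the paper's
    scheme this derivation information is what must be kept, in compressed
    form, alongside the grammar; the rule probabilities would let an
    entropy coder spend few bits on likely choices."""
    it = iter(choices)
    stack, out = [start], []
    while stack:
        sym = stack.pop()
        if sym in pcfg:
            rhs, _prob = pcfg[sym][next(it)]
            stack.extend(reversed(rhs))  # expand leftmost symbol first
        else:
            out.append(sym)
    return "".join(out)

# A probabilistic grammar whose language contains many strings besides T.
pcfg = {
    "S": [(("A", "S"), 0.6), (("A",), 0.4)],  # S -> A S | A
    "A": [(("a",), 0.5), (("b",), 0.5)],      # A -> a | b
}

assert expand_slp(slp, "S") == "abab"
assert decode(pcfg, [0, 0, 1, 1]) == "ab"    # S->AS, A->a, S->A, A->b
```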
- Hiroaki Naganuma
- Diptarama Hendrian
- Ryo Yoshinaka
- Ayumi Shinohara
- Naoki Kobayashi