
On the Generative Power of Omega-Grammars and Omega-Automata (1308.4516v1)

Published 21 Aug 2013 in cs.FL and cs.LO

Abstract: An $\omega$-grammar is a formal grammar used to generate $\omega$-words (i.e., words of infinite length), while an $\omega$-automaton is an automaton used to recognize $\omega$-words. This paper gives clean and uniform definitions for $\omega$-grammars and $\omega$-automata, provides a systematic study of the generative power of $\omega$-grammars with respect to $\omega$-automata, and presents a complete set of results for various types of $\omega$-grammars and acceptance modes. We use the tuple $(\sigma,\rho,\pi)$ to denote the various acceptance modes, where $\sigma$ denotes that some designated elements should appear at least once or infinitely often, $\rho$ denotes some binary relation between two sets, and $\pi$ denotes normal or leftmost derivations. Technically, we propose $(\sigma,\rho,\pi)$-accepting $\omega$-grammars and systematically study their relative generative power with respect to $(\sigma,\rho)$-accepting $\omega$-automata. We show how to construct some special forms of $\omega$-grammars, such as $\epsilon$-production-free $\omega$-grammars. We study the equivalence and inclusion relations between $\omega$-grammars and $\omega$-automata by establishing translation techniques. In particular, we show that, for some acceptance modes, the generative power of $\omega$-CFG is strictly weaker than that of $\omega$-PDA, and the generative power of $\omega$-CSG is equal to that of $\omega$-TM (rather than to that of linear-bounded $\omega$-automaton-like devices). Furthermore, we raise some remaining open problems for two of the acceptance modes.
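
To make the $(\sigma,\rho)$ acceptance-mode notation concrete, here is a minimal sketch (illustrative only, not from the paper) of one classical mode, Büchi acceptance, where $\sigma$ is "infinitely often" and $\rho$ is nonempty intersection: a run is accepting iff the set of states visited infinitely often intersects the set of final states. The sketch assumes a deterministic $\omega$-automaton and an ultimately periodic input $u \cdot v^\omega$; all names (`delta`, `q0`, `F`, `u`, `v`) are assumptions made for illustration.

```python
# Illustrative sketch (not from the paper): Buechi acceptance of an
# ultimately periodic omega-word u . v^omega by a deterministic automaton.
# sigma = "infinitely often", rho = "nonempty intersection with F".

def run_segment(delta, state, word):
    """Run the automaton over a finite word; return visited states and the end state."""
    visited = []
    for sym in word:
        state = delta[(state, sym)]
        visited.append(state)
    return visited, state

def buechi_accepts(delta, q0, F, u, v):
    """Check whether inf(run) intersects F on the word u . v^omega."""
    # Consume the finite prefix u.
    _, state = run_segment(delta, q0, u)
    # Iterate v-blocks until the state at the start of a block repeats;
    # with finitely many states this must happen eventually.
    seen_starts = {}
    blocks = []
    while state not in seen_starts:
        seen_starts[state] = len(blocks)
        visited, state = run_segment(delta, state, v)
        blocks.append(visited)
    # The states visited infinitely often are exactly those inside the
    # repeating cycle of v-blocks.
    inf_states = set()
    for block in blocks[seen_starts[state]:]:
        inf_states.update(block)
    return bool(inf_states & set(F))

# Example: over {a, b}, accept omega-words containing infinitely many a's.
delta = {("q0", "a"): "qa", ("q0", "b"): "q0",
         ("qa", "a"): "qa", ("qa", "b"): "q0"}
print(buechi_accepts(delta, "q0", F={"qa"}, u="b", v="ab"))  # True:  b(ab)^omega
print(buechi_accepts(delta, "q0", F={"qa"}, u="a", v="b"))   # False: a b^omega
```

Other $(\sigma,\rho)$ modes would differ only in the final comparison: for instance, a Muller-style mode takes $\rho$ to be set equality between $\mathrm{inf}(run)$ and a designated set, rather than nonempty intersection.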

Citations (6)
