Languages of Words of Low Automatic Complexity Are Hard to Compute (2510.07696v1)
Abstract: The automatic complexity of a finite word (string) is an analogue for finite automata of Sipser's distinguishing complexity (1983) and was introduced by Shallit and Wang (2001). For a finite alphabet $\Sigma$ of at least two elements, we consider the non-deterministic automatic complexity given by exactly - yet not necessarily uniquely - accepting automata: a word $x \in \Sigma^*$ has exact non-deterministic automatic complexity $k \in \mathbb{N}$ if there exists a non-deterministic automaton of $k$ states which accepts $x$ while rejecting every other word of the same length as $x$, and no automaton of fewer states has this property. Importantly, and in contrast to the classical notion, the witnessing automaton may have multiple paths of computation accepting $x$. We denote this measure of complexity by $A_{Ne}$, and study a class of languages of low $A_{Ne}$-complexity defined as $L_q = \{\, x \in \Sigma^* : A_{Ne}(x) < q|x| \,\}$, which is parameterised by rationals $q \in (0,1/2)$ (generalising a class of sets first studied by Kjos-Hanssen). We show that for every $q \in (0,1/2)$, this class is neither context-free nor recognisable by certain Boolean circuits. In the process, we answer an open question of Kjos-Hanssen quantifying the complexity of $L_{1/3}$ in terms of Boolean circuits, and also prove the Shannon effect for $A_{Ne}$.
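To make the defining condition concrete, the following is a minimal brute-force sketch (not from the paper) of the property an NFA must have to witness $A_{Ne}(x) \le k$: it accepts $x$ while rejecting every other word of length $|x|$, with multiple accepting paths for $x$ allowed. The dictionary-based NFA encoding and the function names are illustrative assumptions; $A_{Ne}(x)$ would be the least number of states over all NFAs passing this check.

```python
from itertools import product

def nfa_accepts(transitions, start, accepting, word):
    """Standard NFA acceptance by subset simulation; multiple accepting
    paths are allowed, matching the exact-but-not-unique acceptance notion."""
    current = {start}
    for symbol in word:
        current = {t for s in current for t in transitions.get((s, symbol), ())}
        if not current:
            return False
    return bool(current & accepting)

def is_exact_witness(transitions, start, accepting, x, alphabet):
    """True iff the NFA accepts x and rejects every other word of length |x|."""
    if not nfa_accepts(transitions, start, accepting, x):
        return False
    for letters in product(alphabet, repeat=len(x)):
        w = "".join(letters)
        if w != x and nfa_accepts(transitions, start, accepting, w):
            return False
    return True

# Example (hypothetical): a 2-state NFA showing A_Ne("0101") is at most 2.
# States 0 and 1; reading '0' moves 0 -> 1, reading '1' moves 1 -> 0.
transitions = {(0, "0"): {1}, (1, "1"): {0}}
print(is_exact_witness(transitions, start=0, accepting={0},
                       x="0101", alphabet="01"))  # True: "0101" is the only
                                                  # accepted word of length 4
```

The check enumerates all $|\Sigma|^{|x|}$ words of the given length, so it is only meant to illustrate the definition, not to compute $A_{Ne}$ efficiently.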