Logical GANs: Adversarial Learning through Ehrenfeucht–Fraïssé Games (2510.22824v1)
Abstract: GANs promise indistinguishability; logic explains it. We put the two on a budget: a discriminator that can only "see" up to a logical depth $k$, and a generator that must look correct to that bounded observer. LOGAN (LOGical GANs) casts the discriminator as a depth-$k$ Ehrenfeucht–Fraïssé (EF) Opponent that searches for small, legible faults (odd cycles, nonplanar crossings, directed bridges), while the generator plays Builder, producing samples that admit a $k$-round matching to a target theory $T$. We ship a minimal toolkit (an EF-probe simulator and MSO-style graph checkers) and four experiments, including real neural GAN training with PyTorch. Beyond verification, we score samples with a logical loss that mixes budgeted EF round-resilience with cheap certificate terms, enabling a practical curriculum on depth. Framework validation demonstrates 92%–98% property satisfaction via simulation (Exp. 3), while real neural GAN training achieves 5%–14% improvements on challenging properties and 98% satisfaction on connectivity, matching simulation, through adversarial learning (Exp. 4). LOGAN is a compact, reproducible path toward logic-bounded generation with interpretable failures, effectiveness demonstrated in both simulated and real training, and dials for control.
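To make the abstract's "logical loss" concrete, here is a minimal Python sketch that approximates depth-$k$ EF round-resilience by random probes over networkx graphs and mixes it with cheap certificate checks. Everything here is an illustrative assumption rather than the paper's released toolkit: the function names (`certificate_terms`, `ef_round_resilience`, `logical_loss`), the mixing weight `lam`, and the random-probe approximation are all invented for exposition.

```python
"""Illustrative sketch of a LOGAN-style logical loss: budgeted EF
round-resilience mixed with cheap certificate terms. All names, weights,
and the random-probe approximation are assumptions for illustration."""
import random
import networkx as nx

def certificate_terms(G):
    """Cheap MSO-style certificate checks (1.0 if satisfied, else 0.0)."""
    is_planar, _ = nx.check_planarity(G)
    return {
        "bipartite": 1.0 if nx.is_bipartite(G) else 0.0,  # no odd cycles
        "connected": 1.0 if nx.is_connected(G) else 0.0,
        "planar": 1.0 if is_planar else 0.0,
    }

def ef_round_resilience(G, H, k, trials=200):
    """Monte Carlo estimate of Builder's survival rate over k EF rounds,
    matching sample G against a reference model H of the target theory T.
    Opponent probes G at random; Builder answers in H so the partial map
    stays a partial isomorphism. This is a random-probe approximation,
    not an exact EF-game solver (both players play randomly, not optimally)."""
    wins = 0
    for _ in range(trials):
        partial = []  # (g_vertex, h_vertex) pairs pebbled so far
        survived = True
        for _ in range(k):
            g = random.choice(list(G.nodes))  # Opponent's probe in G
            candidates = [
                h for h in H.nodes
                if all((g == g2) == (h == h2)                      # injectivity
                       and G.has_edge(g, g2) == H.has_edge(h, h2)  # edges agree
                       for g2, h2 in partial)
            ]
            if not candidates:
                survived = False  # Opponent exposed a depth-k fault
                break
            partial.append((g, random.choice(candidates)))  # Builder's reply
        wins += survived
    return wins / trials

def logical_loss(G, H, k, lam=0.5):
    """Lower is better for the generator: failed EF rounds plus failed
    certificates, weighted by an illustrative mixing coefficient lam."""
    certs = certificate_terms(G)
    cert_score = sum(certs.values()) / len(certs)
    return lam * (1.0 - ef_round_resilience(G, H, k)) + (1.0 - lam) * (1.0 - cert_score)
```

Under these assumptions, a call like `logical_loss(sample, reference, k=3)` rewards samples that both survive three probe rounds against a reference model and pass the bipartiteness, connectivity, and planarity certificates; raising `k` deepens the Opponent's budget, which is the dial the abstract's "curriculum on depth" would turn.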