Generative AI as a Safety Net for Survey Question Refinement (2509.08702v1)
Abstract: Writing survey questions that easily and accurately convey their intent to a variety of respondents is a demanding and high-stakes task. Despite the extensive literature on best practices, the number of considerations to keep in mind is vast, and even small errors can render collected data unusable for its intended purpose. The process of drafting initial questions, checking for known sources of error, and developing solutions to those problems requires considerable time, expertise, and financial resources. Given the rising costs of survey implementation and the critical role that polls play in media, policymaking, and research, it is vital that we use all available tools to protect the integrity of survey data and the financial investments made to obtain it. Since its launch in 2022, ChatGPT, along with other generative AI platforms, has been integrated into everyday processes and workflows, particularly those involving text revision. While many researchers have begun exploring how generative AI may assist with questionnaire design, we implemented a prompt experiment to systematically test what kind of feedback on survey questions an average ChatGPT user can expect. Results from our zero-shot prompt experiment, which randomized the version of ChatGPT and the persona given to the model, show that generative AI is a valuable tool today, even for an average AI user, and suggest that AI will play an increasingly prominent role in the evolution of survey development best practices as precise tools are developed.
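To make the experimental design concrete, the sketch below shows one way a zero-shot prompt experiment that randomizes the ChatGPT version and the persona given to the model could be run with the OpenAI Python client. This is a minimal illustration, not the authors' code: the model names, personas, and prompt wording are assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' materials): randomly assign a model
# version and a persona, then request zero-shot feedback on a draft survey
# question. Model names, personas, and prompt wording are assumptions.
import random
from openai import OpenAI  # requires the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODELS = ["gpt-3.5-turbo", "gpt-4"]            # hypothetical version arm
PERSONAS = [
    None,                                       # no persona (control)
    "You are an expert survey methodologist.",  # hypothetical expert persona
]

def get_feedback(question_text: str) -> dict:
    """Randomize the model and persona, then ask for zero-shot question feedback."""
    model = random.choice(MODELS)
    persona = random.choice(PERSONAS)

    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({
        "role": "user",
        "content": (
            "Please review this survey question and identify any wording "
            f"problems a respondent might have:\n\n{question_text}"
        ),
    })

    response = client.chat.completions.create(model=model, messages=messages)
    return {
        "model": model,
        "persona": persona,
        "feedback": response.choices[0].message.content,
    }

if __name__ == "__main__":
    result = get_feedback("How often do you not avoid skipping breakfast?")
    print(result["model"], result["persona"])
    print(result["feedback"])
```

In an actual experiment, the randomized conditions and returned feedback would be logged for each question so that the quality of feedback can be compared across model versions and personas.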