"Medium-n studies" in computing education conferences (2311.14679v2)
Abstract: Good (frequentist) statistical practice requires that statistical tests be performed to determine whether the phenomenon being observed could plausibly have occurred by chance if the null hypothesis were true. Good practice also requires that a test not be performed if the study is underpowered: that is, if the number of observations is too small to reliably detect the hypothesized effect even when it exists. Underpowered studies run the risk of false-negative results. This creates tension in the guidelines and expectations for computer science education conferences: while expectations are clear for studies with a large number of observations, researchers should in fact not compute p-values or perform statistical tests when the number of observations is too small. The issue is particularly pressing in computing education venues, since class sizes in the range where these concerns arise are common. We outline the considerations for when to compute, and when not to compute, p-values in different settings encountered by computer science education researchers. We survey the author and reviewer guidelines of several computer science education conferences (ICER, SIGCSE TS, ITiCSE, EAAI, CompEd, Koli Calling). We present summary data and make several preliminary observations about reviewer guidelines: guidelines vary from conference to conference; they allow for qualitative studies and, in some cases, experience reports; but they do not generally state explicitly that a paper should include at least one of (1) an appropriately powered statistical analysis or (2) rich qualitative description. We present preliminary ideas for addressing the tension in the guidelines between small-n and large-n studies.
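To make the power consideration concrete, the sketch below is a minimal a-priori power calculation, not taken from the paper: it assumes Python with statsmodels, a hypothesized medium effect of Cohen's d = 0.5, a two-sided alpha of 0.05, and a target power of 0.80 for an independent-samples t-test comparing two course conditions.

```python
# A-priori power calculation for a two-group comparison (independent-samples t-test).
# All parameter values below are illustrative assumptions, not values from the paper.
from math import ceil

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # hypothesized standardized mean difference (Cohen's d)
    alpha=0.05,               # two-sided significance level
    power=0.80,               # desired probability of detecting the effect if it exists
    ratio=1.0,                # equal group sizes
    alternative="two-sided",
)
print(f"Required sample size: {ceil(n_per_group)} students per group")
# -> roughly 64 per group (about 128 in total) under these assumptions
```

Under these assumed parameters a two-group study needs roughly 64 students per condition; a single class section smaller than that is underpowered for a medium effect, which is precisely the medium-n situation the paper examines.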