Predicting the Understandability of Computational Notebooks through Code Metrics Analysis
Abstract: Computational notebooks are the primary coding tools for data scientists, but their code quality remains understudied and often poor. Given the importance of maintainability and reusability, enhancing code understandability is essential. Traditional methods for assessing understandability typically rely on limited questionnaires or on metadata such as likes and votes, which may not reflect actual code clarity. To address this, we propose a novel approach that leverages user opinions from software repositories to assess the understandability of Jupyter notebooks. We conducted a case study using 542,051 Kaggle Jupyter notebooks compiled in the DistilKaggle dataset. To identify user comments related to code understandability, we used a fine-tuned DistilBERT transformer. We then introduced a new metric, User Opinion Code Understandability (UOCU), based on the number of relevant comments, their upvotes, and notebook views. UOCU proved significantly more effective than prior methods, and we further enhanced it by combining it with total upvotes in a hybrid approach. Using this improved metric, we collected 34 notebook-level metrics from 132,723 final notebooks and trained machine learning models to predict understandability. Our best model, a Random Forest classifier, achieved 89% accuracy in classifying the understandability level of notebook code. This work demonstrates the value of user opinion signals and notebook metrics in building scalable, accurate measures of code understandability.
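The abstract does not give the exact UOCU formula or the feature names used for classification, so the following is only a minimal sketch of the pipeline it describes: combining understandability-related comment counts, comment upvotes, and notebook views into a per-notebook score, discretizing that score into understandability levels, and training a Random Forest on notebook-level metrics. The weighting, the tertile discretization, the column names (relevant_comments, comment_upvotes, views), and the input file are all illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: the UOCU weighting, the discretization into levels,
# and all column/file names below are assumptions, not the paper's definitions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def uocu(relevant_comments: int, comment_upvotes: int, views: int) -> float:
    """Hypothetical UOCU: understandability-related comments, weighted by their
    upvotes, normalized by notebook views to control for sheer popularity."""
    return (relevant_comments + comment_upvotes) / max(views, 1)


# Hypothetical table of notebook-level metrics plus the opinion signals.
notebooks = pd.read_csv("notebook_metrics.csv")
notebooks["uocu"] = [
    uocu(r, u, v)
    for r, u, v in zip(
        notebooks["relevant_comments"],
        notebooks["comment_upvotes"],
        notebooks["views"],
    )
]

# Discretize UOCU into understandability levels (tertiles here, as an assumption).
notebooks["level"] = pd.qcut(notebooks["uocu"], q=3, labels=["low", "medium", "high"])

# Use the remaining notebook-level code metrics as features for classification.
feature_cols = [
    c for c in notebooks.columns
    if c not in {"uocu", "level", "relevant_comments", "comment_upvotes", "views"}
]
X_train, X_test, y_train, y_test = train_test_split(
    notebooks[feature_cols], notebooks["level"], test_size=0.2, random_state=42
)

clf = RandomForestClassifier(n_estimators=300, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

A usage note: in practice the opinion signals would come from comments already filtered by the fine-tuned DistilBERT classifier mentioned in the abstract; the sketch starts after that filtering step.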