
NLP Reproducibility For All: Understanding Experiences of Beginners (2305.16579v3)

Published 26 May 2023 in cs.CL and cs.AI

Abstract: As NLP has recently seen an unprecedented level of excitement, and more people are eager to enter the field, it is unclear whether current research reproducibility efforts are sufficient for this group of beginners to apply the latest developments. To understand their needs, we conducted a study with 93 students in an introductory NLP course, where students reproduced the results of recent NLP papers. Surprisingly, we find that their programming skill and comprehension of research papers have a limited impact on the effort they spent completing the exercise. Instead, we find accessibility efforts by research authors to be the key to success, including complete documentation, better coding practice, and easier access to data files. Going forward, we recommend that NLP researchers pay close attention to these simple aspects of open-sourcing their work, and we use insights from beginners' feedback to provide actionable ideas on how to better support them.

Authors (4)
  1. Shane Storks (14 papers)
  2. Keunwoo Peter Yu (9 papers)
  3. Ziqiao Ma (23 papers)
  4. Joyce Chai (52 papers)
Citations (3)
