Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks (2404.02464v1)
Abstract: Many students in introductory programming courses fare poorly on the code-writing tasks of the final summative assessment. Such tasks are designed to assess whether novices have developed the analytical skills needed to translate from a given problem domain into code. Researchers have previously used instruments such as code-explain questions and found that the cognitive depth students reached on these tasks correlated well with code-writing ability. However, the manual marking and personalized interviews needed to identify cognitive difficulties limited that work to a small group of stragglers. To extend it to larger groups, we have devised several question types with varying cognitive demands, collectively called Algorithmic Reasoning Tasks (ARTs), which do not require manual marking. These tasks demand levels of reasoning that can define a learning trajectory. This paper describes these instruments and the machine learning models used to validate them. We used data collected in the penultimate week of the semester of an introductory programming course, in which students attempted both ART-type instruments and code-writing tasks. Our preliminary research suggests that ART-type instruments, combined with specific machine learning models, can act as an effective learning trajectory and provide early prediction of code-writing skills.
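To make the abstract's pipeline concrete, here is a minimal sketch of how ART scores might feed an early-warning classifier. This is an illustration under assumptions, not the paper's actual setup: the abstract does not specify the models or features, so the choice of a random forest (a model family commonly used for student-performance prediction), the per-question-type feature layout, the binary pass/fail label, and the synthetic data are all hypothetical.

```python
# Minimal sketch: predicting code-writing outcomes from ART scores.
# Assumptions (not from the paper): one score per ART question type as
# features, a binary pass/fail code-writing label, a random forest model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows = students, columns = scores (0..1) on
# five hypothetical ART question types of increasing cognitive demand.
X = rng.random((200, 5))
# Hypothetical label: students strong on the harder tasks tend to pass
# the code-writing assessment.
y = (X[:, 2:].mean(axis=1) + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Held-out performance of the early-prediction model.
print(classification_report(y_test, model.predict(X_test)))
# Feature importances hint at which ART levels best predict code writing.
print(model.feature_importances_)
```

In a setup like this, the feature importances would indicate which reasoning levels in the proposed trajectory carry the most predictive signal, which is the kind of validation the abstract alludes to.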
Authors: Shruthi Ravikumar, Margaret Hamilton, Charles Thevathayan, Maria Spichkova, Kashif Ali, Gayan Wijesinghe