
A Bug's New Life: Creating Refute Questions from Filtered CS1 Student Code Snapshots

Agarwal, N and Kumar, V and Raman, A and Karkare, A (2023) A Bug's New Life: Creating Refute Questions from Filtered CS1 Student Code Snapshots. In: CompEd 2023 - Proceedings of the ACM Conference on Global Computing Education, pp. 7-14.

PDF: Pro_Acm_Con_Glo_Com_Edu_1_2023.pdf - Published Version (1MB). Restricted to registered users only.
Official URL: https://doi.org/10.1145/3576882.3617916

Abstract

In an introductory programming (CS1) context, a Refute question asks students for a counter-example which proves that a given code fragment is an incorrect solution for a given task. Such a question can be used as an assessment item to (formatively) develop or (summatively) demonstrate a student's abilities to comprehend the task and the code well enough to recognize a mismatch. These abilities assume greater significance with the emergence of generative AI technologies capable of writing code that is plausible (at least to novice programmers) but not always correct. Instructors must address three concerns while designing an effective Refute question, each influenced by their specific teaching-learning context: (1) Is the task comprehensible? (2) Is the incorrect code a plausible solution for the task? (3) Is the complexity of finding a counter-example acceptable? While the first concern can often be addressed by reusing tasks from previous code writing questions, addressing the latter two concerns may require substantial instructor effort. We therefore investigate whether concerns (2) and (3) can be addressed by buggy student solutions for the corresponding code writing question from a previous course offering. For 6 code writing questions (from a Fall 2015 C programming course), our automated evaluation system logged 13,847 snapshots of executable student code, of which 10,574 were buggy (i.e., they failed at least one instructor-supplied test case). Code selected randomly from this pool rarely addresses these concerns, and manual selection is infeasible. Our paper makes three contributions. First, we propose an automated mechanism to filter this pool to a more manageable number of snapshots from which appropriate code can be selected manually. Second, we evaluate our semi-automated mechanism with respect to concerns (2) and (3) by surveying a diverse set of 56 experienced participants (instructors, tutors, and teaching assistants). Third, we use this mechanism to seed a public repository of Refute questions and provide a template to create additional questions using a public resource (CodeCheck). © 2023 ACM.

Item Type: Conference Paper
Publication: CompEd 2023 - Proceedings of the ACM Conference on Global Computing Education
Publisher: Association for Computing Machinery, Inc
Additional Information: The copyright for this article belongs to the authors.
Keywords: Automation; C (programming language); Curricula; AI Technologies; Assessment; Code fragments; Code-writing; Counter examples; Introductory programming; Novice programmer; Refute question; Teaching-learning; Writing codes; Students
Department/Centre: Others
Date Deposited: 01 Mar 2024 09:54
Last Modified: 01 Mar 2024 09:54
URI: https://eprints.iisc.ac.in/id/eprint/84016
