Week 1
The first week was all about getting acclimated to the project: learning the process of building the autograders, learning how to read research papers, and starting the online research course. I first spent some time learning about pytest, the Python testing library, since it is the foundation on which all of the autograders are built. I also read through the existing documentation for writing autograders in this project and learned how the Coursera environment works as a development tool. From there, I started working with my peers on our first open-ended autograder, for an assignment about user input and randomness. I began by writing a set of deliberately incorrect student solutions that could be used to test the validity of the autograder itself, and I ended the week writing test cases for each of those buggy solutions.

Alongside the autograder work, I read some papers on how to read a research paper effectively, and then read a couple of papers about autograders and open-ended assignments in introductory CS classes to better understand the context behind this research (the papers are highlighted below). The research course I am taking also began, with introductory lessons about research culture and working collaboratively in a research team.
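To record the testing pattern I was learning, here is a minimal pytest sketch of how an autograder can be exercised against known-buggy student solutions. The module names (autograder, buggy_solutions), the run_checks() helper, and the check names are hypothetical placeholders for illustration, not the project's actual API.

import pytest

# Hypothetical autograder entry point: takes the path to a student submission
# and returns a list of failed check names (an empty list means it passes).
from autograder import run_checks

# Each tuple pairs a deliberately incorrect solution with the check
# we expect it to fail.
BUGGY_CASES = [
    ("buggy_solutions/no_randomness.py", "uses_randomness"),
    ("buggy_solutions/ignores_user_input.py", "reads_user_input"),
    ("buggy_solutions/hardcoded_output.py", "output_varies"),
]

@pytest.mark.parametrize("solution_path, expected_failure", BUGGY_CASES)
def test_autograder_flags_buggy_solution(solution_path, expected_failure):
    # The autograder should flag the specific error each buggy solution contains.
    failed = run_checks(solution_path)
    assert expected_failure in failed

def test_autograder_accepts_correct_solution():
    # A known-correct reference solution should pass every check.
    assert run_checks("reference_solution.py") == []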
Papers Read:
“Impact of Open-Ended Assignments on Student Self-Efficacy in CS1” by Sadia Sharmin, Daniel Zingaro, Lisa Zhang, and Clare Brett.
The paper presents a study that introduces a new type of open-ended programming assignment for an introductory CS course and analyzes how it compares to a more typical, rigid assignment in terms of its impact on self-efficacy. The study split students into two sections based on assignment type and surveyed them pre- and post-assignment. The results indicated that the open-ended assignment had no negative impact compared to the regular assignment, but there was also no evidence of a positive impact.
“Providing Meaningful Feedback for Autograding of Programming Assignments” by Georgiana Haldeman, Andrew Tjang, Monica Babeş-Vroman, Stephen Bartos, Jay Shah, Danielle Yucht, and Thu D. Nguyen.
This paper introduces a new framework that aims to make autograders provide more constructive feedback when evaluating student programs. The framework identifies common student errors and then uses pre-generated hints to give the student helpful feedback based on the kind of error they made.
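To make the idea concrete for myself, here is a toy sketch of the general pattern of mapping detected error categories to pre-written hints. This is my own illustration of the concept, not the paper's actual framework; the error categories and hint text are invented.

# Toy sketch: pre-generated hints keyed by error category.
# Not the paper's framework; categories and messages are invented.
HINTS = {
    "off_by_one": "Check your loop bounds; the last element may be skipped or repeated.",
    "wrong_return_type": "The function should return an int, but it looks like you returned a string.",
    "missing_input_validation": "What happens if the user enters something unexpected?",
}

def feedback_for(error_category: str) -> str:
    # Fall back to a generic message when the error doesn't match a known category.
    return HINTS.get(error_category, "A test failed. Re-read the assignment requirements and try again.")

# Example: an autograder that detects an off-by-one error would report:
print(feedback_for("off_by_one"))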