Troubleshooters for Tasks of Introductory Programming MOOCs
Learning programming has become increasingly popular, and organizing introductory massive open online courses (MOOCs) on programming is one way to bring this education to the masses. While programming MOOCs usually rely on automated assessment to give feedback on submitted code, a persistent problem for many participants is difficulty in understanding certain aspects of the tasks and of the feedback given by the automated assessment system. This paper introduces troubleshooters: help systems, structured as decision trees, that give hints and examples for specific aspects of course tasks. The goal of this paper is to give an overview of the usability (benefits and dangers) of troubleshooters and of participants' feedback on using them. Troubleshooters have been used since 2016 in two programming MOOCs for adults in Estonia. These MOOCs are characterized by completion rates of 50–70%, which is unusually high for MOOCs. Data were gathered from the learning analytics integrated into the troubleshooters' environment, letters from participants, questionnaires, and tasks completed during the courses. Although using troubleshooters was not compulsory, the results indicate that only 19.8% of users did not use them at all, and only 10% of participants did not find them helpful at all. The most notable change was that the number of questions about the programming tasks sent to the organizers via the helpdesk during the courses declined by about 29%.
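The paper does not specify an implementation, but a troubleshooter of this kind can be pictured as a small decision tree that is traversed by answering questions until a hint is reached. The following minimal Python sketch is illustrative only; the Node class, the run function, and the example questions are hypothetical and are not taken from the paper.

# Hypothetical sketch of a decision-tree troubleshooter.
# All names and example questions are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One step in the troubleshooter: a question, or a final hint (leaf)."""
    text: str
    # Maps an answer ("yes"/"no") to the next node; empty for leaf hints.
    children: dict[str, "Node"] = field(default_factory=dict)

    def is_hint(self) -> bool:
        return not self.children

def run(node: Node) -> None:
    """Walk the tree, asking questions until a hint is reached."""
    while not node.is_hint():
        answer = ""
        while answer not in node.children:
            answer = input(f"{node.text} ({'/'.join(node.children)}): ").strip().lower()
        node = node.children[answer]
    print(f"Hint: {node.text}")

# A tiny example tree for a task where a program must read a number.
tree = Node(
    "Does your program crash with a ValueError?",
    {
        "yes": Node("Check that the input is stripped of spaces before converting it with int()."),
        "no": Node(
            "Is the printed answer wrong?",
            {
                "yes": Node("Compare your formula with the example calculation in the task text."),
                "no": Node("Re-read the expected output format in the task description."),
            },
        ),
    },
)

if __name__ == "__main__":
    run(tree)

In this sketch, each answer narrows the problem until a leaf hint is shown, which mirrors the abstract's description of guiding participants to hints and examples without them having to ask the organizers.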