diff --git a/book.org b/book.org
index 169757f..c1dbfb1 100644
--- a/book.org
+++ b/book.org
@@ -651,6 +651,9 @@ Finally, Chapter\nbsp{}[[#chap:discussion]] concludes the dissertation with some
 :END:
 
 In this chapter, we will give an overview of Dodona's most important features.
+This chapter answers the question of which features a platform like Dodona needs.
+The most important feature is automated assessment, but as we show in this chapter, many more features are needed.
+
 This chapter is partially based on *Van Petegem, C.*, Maertens, R., Strijbol, N., Van Renterghem, J., Van der Jeugt, F., De Wever, B., Dawyndt, P., Mesuere, B., 2023. Dodona: Learn to code with a virtual co-teacher that supports active learning. /SoftwareX/ 24, 101578. https://doi.org/10.1016/j.softx.2023.101578
 
 ** User management
@@ -929,6 +932,7 @@ Anonymous mode is not restricted to the context of assessment and can be used ac
 When reviewing a selected submission from a student, assessors have direct access to the feedback that was previously generated during automated assessment: source code annotations in the "Code" tab and other structured and unstructured feedback in the remaining tabs.
 Moreover, next to the feedback that was made available to the student, the specification of the assignment may also add feedback generated by the judge that is only visible to the assessor.
 Assessors might then complement the assessment made by the judge by adding *source code annotations* as formative feedback and by *grading* the evaluative criteria in a scoring rubric as summative feedback (Figure\nbsp{}[[fig:whatannotations]]).
+Previous annotations can be reused to speed up the code review process, because remarks and suggestions tend to recur frequently when reviewing submissions for the same assignment.
 Grading requires setting up a specific *scoring rubric* for each assignment in the evaluation, as a guidance for evaluating the quality of submissions\nbsp{}[cite:@dawsonAssessmentRubricsClearer2017; @pophamWhatWrongWhat1997].
 The evaluation tracks which submissions have been manually assessed, so that analytics about the assessment progress can be displayed and to allow multiple assessors working simultaneously on the same evaluation, for example one (part of a) programming assignment each.
 
@@ -937,6 +941,16 @@ The evaluation tracks which submissions have been manually assessed, so that ana
 #+NAME: fig:whatannotations
 [[./images/whatannotations.png]]
 
+** Conclusion
+:PROPERTIES:
+:CREATED: [2024-05-13 Mon 07:22]
+:END:
+
+As we have shown in this chapter, a platform like Dodona needs many more features than just automated assessment and feedback.
+Features such as course management and user management allow teachers to organize their students, while the infrastructure around exercises, such as repositories and judges, is required to let them easily add new exercises.
+Additional features such as Q&A, code reviews, and evaluations ensure that teachers can interact with their students while keeping the context they are discussing close to those interactions.
+Creating a platform like Dodona entails a lot of work to get all of these features right.
+
 * Dodona in educational practice
 :PROPERTIES:
 :CREATED: [2023-10-23 Mon 08:48]