
Work division architecture


We start this project with a set of teachers (developers and plugin users), a group of students, and NetBeans consultant/guru Geertjan. The total workshop member count is approximately 30 people.

To make all of this work effectively, we start by chopping the problem up into parts that can be tackled by subgroups. Each subgroup should be able to deliver some kind of library or component that can be integrated into the whole.

Assumptions: the exam consists of one NetBeans project or equivalent. The exam itself has an id, like PRO1-20131128 (Java 1 on November 28, 2013).
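
A minimal sketch of how such an id could be represented in code; the class and method names below are illustrative assumptions, not part of any existing code base:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

/** Hypothetical value object for an exam id such as "PRO1-20131128". */
public record ExamId(String subject, LocalDate date) {

    private static final DateTimeFormatter BASIC = DateTimeFormatter.BASIC_ISO_DATE;

    /** Parse "SUBJECT-yyyyMMdd" into its two parts. */
    public static ExamId parse(String id) {
        int dash = id.lastIndexOf('-');
        if (dash < 0) {
            throw new IllegalArgumentException("not an exam id: " + id);
        }
        return new ExamId(id.substring(0, dash),
                LocalDate.parse(id.substring(dash + 1), BASIC));
    }

    @Override
    public String toString() {
        return subject + "-" + date.format(BASIC);
    }
}
```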

The functional groupings are:

  • Source code preparation. Editor extension. In the Java editor you should be able to mark parts of existing code as 'solution'; those parts become tasks for the students to do. These tasks should be identified by a unique id. One may assume that it is sufficient for these ids to be unique within the exam project. Showing the tasks, or some meaningful annotation, in the left- or right-hand gutter could be helpful. (A sketch of a possible marker convention follows after this list.)

  • Exam preparation. Adding/creating coordinates. The students' work will be available as checked-out projects in a sandbox. Every student project will have the same (folder) structure and file names. The sandboxes are in one folder containing the projects. The top-most folder per project will have a unique name, possibly the same as the NetBeans IDE project name. The student identifier may be encoded into that name. (See the coordinate sketch after this list.)

  • Candidate identification. The files may be annotated with student ids (current practice), but the only requirement is that the files can be visited per task-candidate coordinate. Ideally, the candidate ids should be opaque to the corrector, so that the corrector can work unbiased. In that case the student id could for instance be hashed and only be visible in that form to the corrector. It could even make correction by the students themselves possible; inserting a few fake candidates to check the student correctors would reduce the chance of fraud in that case. (A hashing sketch follows after this list.)

  • Exam correction. This is the only thing we do in the web-based cwb. Here the corrector/examiner must be able to browse through the exam projects in task, student order; that is, correct the same task for all students, then move on to the next task. He must be able to record the grade for the task and maybe add remarks/reminders. At no time should the students' code be changed. The corrector must be able to see a sample solution nearby, below or above the student work. (See the grading-loop sketch after this list.)

  • Task grade collection. The grades that are given must be persisted. It must be possible to start, stop and resume a correction session (correctors need breaks too). To share the correction burden it must be possible for correctors to share the correction data, preferably live; that way one corrector could do the odd-numbered tasks and the other the even-numbered ones (as an example; the practical division of work depends on the exam type). In such a session every task is given a grade on a scale from min to max (in our practice typically from 1 to 10, with 0 as the default value; a 1 means a corrected task of no real value, but the candidate attended).

  • Grade consolidation. Most exams have multiple tasks, each with a potentially different value. This can be modelled as a weight. To compute the final grade, the sum of the weights times the grades is computed, divided by the sum of the weights (this is also known as the inner product of the weight and grade vectors divided by the weight sum). A local database (Derby) is the minimum, but a RESTful service, properly secured and running on a server, would be better; of course that server could be localhost. Maybe use some Java EE tricks here on a lightweight Java EE container. (A small computation sketch follows after this list.)

  • A progress indicator would be nice.
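
For the source code preparation item, a minimal sketch of what a marked solution region could look like, assuming a comment-based marker convention; the tag name, the id scheme and the surrounding class are illustrative assumptions, not an existing NetBeans API:

```java
public class Fraction {

    private final int numerator;
    private final int denominator;

    public Fraction(int numerator, int denominator) {
        //<solution id="task-03">
        // Everything between the markers would be removed from the student
        // hand-out and becomes the task identified by "task-03".
        int g = gcd(Math.abs(numerator), Math.abs(denominator));
        this.numerator = numerator / g;
        this.denominator = denominator / g;
        //</solution>
    }

    private static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}
```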
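
For the exam preparation item, a small sketch of the task-candidate coordinates, assuming a flat sandbox folder that contains one project folder per candidate; all names are assumptions:

```java
import java.nio.file.Path;

/**
 * Hypothetical coordinate of one piece of student work: which exam,
 * which candidate project folder inside the sandbox, and which task id.
 */
public record WorkCoordinate(String examId, String candidateProject, String taskId) {

    /** Resolve the candidate's project folder below the sandbox root. */
    public Path projectDir(Path sandboxRoot) {
        return sandboxRoot.resolve(candidateProject);
    }
}
```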
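
For candidate identification, a sketch of how a student id could be hashed into an opaque pseudonym, assuming SHA-256 salted with a per-exam secret; the helper and its names are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

/** Hypothetical helper that turns a student id into an opaque pseudonym. */
public final class CandidatePseudonym {

    private CandidatePseudonym() {
    }

    /** Hash the student id with a per-exam secret so the corrector only sees the opaque form. */
    public static String of(String studentId, String examSecret) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(examSecret.getBytes(StandardCharsets.UTF_8));
            byte[] digest = sha.digest(studentId.getBytes(StandardCharsets.UTF_8));
            // a short prefix of the hex digest is enough to be unique within one exam
            return HexFormat.of().formatHex(digest).substring(0, 12);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }
}
```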
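
For exam correction and task grade collection, a sketch of the task-major grading loop and the grade value on the 0 to 10 scale, assuming a simple GradeStore abstraction for persistence and sharing; all names are illustrative:

```java
import java.util.List;

/** Hypothetical grading loop: correct the same task for every candidate, then the next task. */
public class CorrectionSession {

    /** A recorded grade on the 0..10 scale; 0 is the default value. */
    public record TaskGrade(String taskId, String candidate, int grade) {
        public TaskGrade {
            if (grade < 0 || grade > 10) {
                throw new IllegalArgumentException("grade out of range: " + grade);
            }
        }
    }

    /** Persistence abstraction; could be a local Derby database or a remote service. */
    public interface GradeStore {
        void save(TaskGrade grade);
    }

    public void correct(List<String> taskIds, List<String> candidates, GradeStore store) {
        for (String taskId : taskIds) {           // outer loop: one task at a time
            for (String candidate : candidates) { // inner loop: that task for every candidate
                int grade = askCorrector(taskId, candidate);
                store.save(new TaskGrade(taskId, candidate, grade));
            }
        }
    }

    private int askCorrector(String taskId, String candidate) {
        return 0; // placeholder for the actual corrector UI
    }
}
```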
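
For grade consolidation, a sketch of the weighted-average computation described above; the class and record names are assumptions:

```java
import java.util.List;

/** Final grade = sum(weight * grade) / sum(weight). */
public final class GradeConsolidation {

    /** One task's weight and the grade it received. */
    public record WeightedTask(double weight, double grade) {
    }

    private GradeConsolidation() {
    }

    public static double finalGrade(List<WeightedTask> tasks) {
        double weightedSum = 0.0;
        double weightSum = 0.0;
        for (WeightedTask t : tasks) {
            weightedSum += t.weight() * t.grade();
            weightSum += t.weight();
        }
        if (weightSum == 0.0) {
            throw new IllegalArgumentException("total weight must be positive");
        }
        return weightedSum / weightSum;
    }
}
```

For example, two tasks with weights 2 and 1 and grades 8 and 5 consolidate to (2 × 8 + 1 × 5) / 3 = 7.0.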
