
r-exams generates invalid QTI for cloze exercise


I am running into a weird situation with r-exams where it "randomly" generates either a valid or an invalid QTI 2.1 file for essentially the same exercise. I have created an MRE with two almost identical Rmd files here. In a nutshell, the only difference between the two exercises is the actual solution string in exsolution.
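
For illustration, the meta-information of such a cloze exercise in the Rmd file looks roughly like this (the values below are hypothetical, not copied from the MRE; in my actual files only the exsolution strings differ, and one of them contains a string sub-item whose solution is 10):

    Meta-information
    ================
    extype: cloze
    exclozetype: string|num
    exsolution: 10|7.5
    extol: 0.01
    exname: repro cloze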

If I look at the generated XML files, one of them contains -Inf as the minimum score at various places, whereas the other has 0 (which I think is the correct behavior).

repro_140893190_section_1_item_1_cloze.xml

[...]
<outcomeDeclaration identifier="MINSCORE" cardinality="single" baseType="float">
  <defaultValue>
    <value baseType="float">0</value>
  </defaultValue>
</outcomeDeclaration>
[...]

repro_140893190_section_2_item_1_cloze.xml

[...]
<outcomeDeclaration identifier="MINSCORE" cardinality="single" baseType="float">
  <defaultValue>
    <value baseType="float">-Inf</value>
  </defaultValue>
</outcomeDeclaration>
[...]

For reference, the full XML files are here. -Inf seems to be invalid and causes OLAT's QTI interpreter to skip the exercise. I have no good explanation for the difference. r-exams does not print any errors, and the r-exams stress test runs without problems. Changing the order of the exercises does not change the outcome, nor does including only one of them. I am using r-exams 2.4-0 from r-forge with R 4.0.3 on Windows 10. Is this a bug in r-exams, or somewhere else?
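
For completeness, this is roughly how I generate the QTI output and run the stress test (the file names are placeholders for the two Rmd files from the MRE):

    library("exams")

    ## build one OpenOlat/QTI 2.1 test containing both cloze exercises
    exams2openolat(c("cloze1.Rmd", "cloze2.Rmd"), n = 1, name = "repro")

    ## sample each exercise repeatedly to check that it at least renders cleanly
    stresstest_exercise("cloze1.Rmd", n = 100)
    stresstest_exercise("cloze2.Rmd", n = 100)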


Solution

  • There were two issues with the evaluation strategy here, and both are now fixed in version 2.4-0.

    1. The string item with solution 10 was misinterpreted as a schoice item with two options (true + false), because the solution string looks like a binary schoice pattern. This led to the -Inf issue, which had already been fixed previously (see the comments above).

    2. Setting negative points for string items within a cloze did not work correctly before, but now does. To get the desired evaluation strategy, you need to specify:

      exams2openolat(..., eval = exams_eval(negative = TRUE), cloze = list(minvalue = 0))
      

      First, this generally enables negative points, which, by default, means that an incorrect string (or num or schoice) answer yields a subtraction of the same amount of points that the correct answer would have added. Second, for cloze questions the minimum number of points is set to zero, i.e., when summing up the points for all sub-items, the worst case is no points.
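
      To illustrate the resulting scoring, here is a small sketch of the arithmetic (not the internal r-exams code; the sub-item points and answers are made up):

      ## hypothetical cloze with three sub-items worth 1, 1, and 2 points
      points  <- c(1, 1, 2)
      correct <- c(TRUE, FALSE, TRUE)

      ## negative = TRUE: a wrong string/num/schoice answer subtracts its full points
      item_score <- ifelse(correct, points, -points)

      ## minvalue = 0: the summed cloze score cannot drop below zero
      total <- max(0, sum(item_score))
      total
      ## [1] 2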