
# Mixnum

### Mixed number subtraction

This is Method B from Kikumi Tatsuoka's classic mixed-number subtraction tests, one of the most well-known cognitively diagnostic assessments (Tatsuoka, 1984). Bob Mislevy translated Tatsuoka's rule space model into a Bayesian network (Mislevy, 1994).

## Purpose

Klein et al. performed a cognitive task analysis of how elementary students approached mixed-number subtraction. They identified two methods:

**Method A**: Convert the mixed numbers to improper fractions, subtract, and simplify.

**Method B**: Subtract the whole and fractional parts separately, borrowing from the whole number if necessary.
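The borrowing step in Method B can be sketched in code. This is an illustrative sketch, not the assessment's own implementation; it assumes the two fractions already share a common denominator (reasonable for the 15-item subtest, since the items requiring a common denominator were dropped).

```python
from dataclasses import dataclass

@dataclass
class Mixed:
    """A mixed number: whole + num/den, all non-negative integers."""
    whole: int
    num: int
    den: int

def subtract_method_b(a: Mixed, b: Mixed) -> Mixed:
    """Method B: subtract the whole and fractional parts separately,
    borrowing one from the whole number if the minuend's fractional
    part is too small.  Assumes a common denominator."""
    assert a.den == b.den, "fractions must share a denominator"
    whole, num = a.whole, a.num
    if num < b.num:          # borrow one from the whole number (Skill 4)
        whole -= 1
        num += a.den
    return Mixed(whole - b.whole, num - b.num, a.den)

# Item 11 from the test: 4 1/3 - 2 4/3 = 1 0/3
result = subtract_method_b(Mixed(4, 1, 3), Mixed(2, 4, 3))
```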

Tatsuoka (1984) applied the Rule Space method to a sample of students taking a test designed after the cognitive model of Klein et al. She was able to separate the students using Method A from those using Method B, and to identify which of the component skills of each method the students had mastered. Since then, the data have been reanalyzed with a number of other cognitively diagnostic models.

Mislevy (1994) translated the model for the Method B assessment into a Bayesian network, in part as an illustration of Bayesian network methods. At the same time he dropped one of the skills (finding a common denominator). The test analyzed here uses only 15 of the original 40 items.

## Proficiency Models

Based on the Klein et al. analysis, Method B requires the following skills:

1. Basic fraction subtraction
2. Simplify/reduce fraction or mixed number
3. Separate whole number from fraction
4. Borrow one from the whole number in a given mixed number
5. Convert a whole number to a fraction

Note that Skill 3 is a prerequisite of Skill 4. Consequently, Mislevy introduced a node, Skill WN, to model this relationship. This version also adds other intermediate nodes with logical (deterministic) distributions, primarily to make constructing the CPTs for the various evidence models easier; these include s12 and s125.
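A node with a logical distribution is simply a deterministic function of its parents. As a sketch, if the intermediate nodes are conjunctions of the correspondingly numbered skills (an assumption about the naming, not stated in the text), they could be computed as:

```python
def and_node(*skills: bool) -> bool:
    """Logical (deterministic) distribution: true iff all parents true."""
    return all(skills)

# Hypothetical skill profile: skills 1 and 2 mastered, 3, 4, 5 not.
mastered = {1: True, 2: True, 3: False, 4: False, 5: False}

s12 = and_node(mastered[1], mastered[2])               # True
s125 = and_node(mastered[1], mastered[2], mastered[5])  # False
```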

A filled-in version of the proficiency model was created by adding dummy nodes for the combinations of skills used in various evidence models. This version can be found in:

## Task Models

The tasks are all mixed number subtraction tasks with two parts, the minuend and the subtrahend. Both the minuend and subtrahend may be mixed numbers, with both a whole and fractional part. The whole number and numerator and denominator of the fractional part are positive integers: algebraic expressions are not allowed. At least one of the minuend or the subtrahend must have a fractional part. The test was administered with pencil and paper and the work product was also a mixed number.

As the tasks were designed, they were coded with the operations (i.e., the skills described above) required to solve them using both Method A and Method B. This coding was recorded in a Q-matrix. (Note that the original version of this assessment had Q-matrices for both Method A and Method B.) The tasks and Q-matrix for the 15-item subtest used in this example are given below.

## Evidence Models

The term Evidence Model was not used either in the original construction of the test (Tatsuoka, 1984) or when it was converted to a Bayesian network (Mislevy, 1994). However, inspecting the Q-matrix below reveals six unique patterns, which correspond to six evidence models. The EM column in that table indicates which evidence model each task belongs to.

The rules of evidence for these tasks all amounted to comparing the given solution with the key. The observable outcome variable was a dichotomous indicator of whether the outcome was correct or incorrect. (Note that the tasks which required reducing the result to a simpler form were eliminated, so in this version no rule is needed about equivalent representations.)

The statistical form of the evidence models, as described in Mislevy (1994), was the deterministic-input noisy-and (DINA) model. Each evidence model has a set of skills required for solving tasks of the appropriate type (given in the Q-matrix). Examinees who possess all of these skills should solve the task with probability $1-s_j$, where $s_j$ is the slipping probability for Task j. Examinees who lack one or more of the required skills will get the item right with probability $g_j$, where $g_j$ is the guessing probability for Task j.
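The DINA success probability can be written as a one-line rule. A minimal sketch (the parameter values are made up for illustration):

```python
def dina_p_correct(required: set, mastered: set,
                   slip: float, guess: float) -> float:
    """DINA model: examinees possessing all required skills answer
    correctly with probability 1 - slip; everyone else with
    probability guess."""
    has_all = required <= mastered   # set inclusion: all required mastered
    return (1 - slip) if has_all else guess

# e.g., a task requiring skills {1, 3}, hypothetical slip/guess values:
p_master = dina_p_correct({1, 3}, {1, 2, 3}, slip=0.1, guess=0.2)  # 0.9
p_nonmaster = dina_p_correct({1, 3}, {1}, slip=0.1, guess=0.2)     # 0.2
```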

Note that the xs in the Q-matrix below indicate which proficiency variables are parents of the observable outcome variable. Each row of the conditional probability table corresponds to one configuration of skills. The values in that row (for incorrect and correct, respectively) are either $1-g_j$ and $g_j$ or $s_j$ and $1-s_j$, depending on whether that skill pattern includes all of the required skills.

For scoring purposes, if the guessing and slipping parameters are known, then it is simple to construct the conditional probability tables. Many rows will have the same values, but that is not a serious problem. When learning the parameters from data, however, the values need to be constrained so that the rows which should have the same probabilities actually do. The simplest way to do this is to introduce a deterministic node into the evidence model which combines all of the skills into a single skill-pattern variable. This node is then the input to the probabilistic node representing the observable.
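The construction above can be sketched as follows: the full CPT is built by first collapsing each skill configuration through the deterministic "has all required skills" node, so the rows that must share parameters automatically do. (A sketch with hypothetical parameter values; `required` holds the 0-based indices of the parent skills.)

```python
from itertools import product

def dina_cpt(n_skills: int, required: list, slip: float, guess: float):
    """Build the observable's CPT.  Each key is one configuration of the
    parent skills; each value is (P(incorrect), P(correct))."""
    cpt = {}
    for config in product([False, True], repeat=n_skills):
        # Deterministic node: collapses the configuration to one bit.
        has_all = all(config[i] for i in required)
        p_correct = (1 - slip) if has_all else guess
        cpt[config] = (1 - p_correct, p_correct)
    return cpt

# Two required skills, hypothetical slip = 0.1 and guess = 0.2:
cpt = dina_cpt(2, [0, 1], slip=0.1, guess=0.2)
```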

Sinharay and Almond (2007) explored some alternative evidence models in which the probability of success was proportional to the number of required skills the examinee had mastered. (The filled-in proficiency model above is used for this model.) In particular, they found that the guessing parameter was smaller for students who had mastered none of the skills than for students who had mastered at least one of the required skills.
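The idea of success probability increasing with the number of mastered required skills can be sketched with a simple linear interpolation. This is only an illustration of the general idea, not Sinharay and Almond's exact parameterization; `p_none` and `p_all` are hypothetical endpoint probabilities.

```python
def graded_p_correct(required: set, mastered: set,
                     p_none: float, p_all: float) -> float:
    """Success probability interpolates linearly between p_none
    (no required skills mastered) and p_all (all mastered), in the
    fraction of required skills the examinee has."""
    frac = len(required & mastered) / len(required)
    return p_none + frac * (p_all - p_none)

# Hypothetical endpoints: mastering one of two required skills lands halfway.
p_half = graded_p_correct({1, 3}, {1}, p_none=0.15, p_all=0.85)  # 0.5
```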

## Assembly Model

The test was represented with a Q-matrix. The original test (Tatsuoka, 1984) had 40 items; the second 20 were exact isomorphs of the first 20. Following Mislevy (1994), we used only the first 20 tasks and dropped five that required finding a common denominator. The Q-matrix for the reduced test is given below:

| Item | Text | 1 | 2 | 3 | 4 | 5 | EM |
|------|------|---|---|---|---|---|----|
| 6  | $\frac67 - \frac47$ | x |   |   |   |   | 1 |
| 8  | $\frac34 - \frac34$ | x |   |   |   |   | 1 |
| 12 | $\frac{11}8 - \frac18$ | x | x |   |   |   | 2 |
| 14 | $3\frac45 - 3\frac25$ | x |   | x |   |   | 3 |
| 16 | $4\frac57 - 1\frac47$ | x |   | x |   |   | 3 |
| 9  | $3\frac78 - 2$ | x |   | x |   |   | 3 |
| 4  | $3\frac12 - 2\frac32$ | x |   | x | x |   | 4 |
| 11 | $4\frac13 - 2\frac43$ | x |   | x | x |   | 4 |
| 17 | $7\frac35 - \frac45$ | x |   | x | x |   | 4 |
| 20 | $4\frac13 - 1\frac53$ | x |   | x | x |   | 4 |
| 18 | $4\frac1{10} - 2\frac8{10}$ | x |   | x | x |   | 4 |
| 15 | $2 - \frac13$ | x |   | x | x | x | 5 |
| 7  | $3 - 2\frac15$ | x |   | x | x | x | 5 |
| 19 | $7 - 1\frac43$ | x |   | x | x | x | 5 |
| 10 | $4\frac4{12} - 2\frac7{12}$ | x | x | x | x |   | 6 |

Columns 1–5 are the skills required (numbered as in the Proficiency Models section); EM is the evidence model.
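The Q-matrix can be encoded directly as a map from item to required skill set, which makes it easy to compute the ideal (noise-free) response pattern for any skill profile. The exact column assignments below are inferred from the skill definitions and the x-counts in the Q-matrix, so treat them as illustrative.

```python
# Q-matrix for the 15-item subtest: item number -> required skills.
Q = {
    6: {1}, 8: {1},
    12: {1, 2},
    14: {1, 3}, 16: {1, 3}, 9: {1, 3},
    4: {1, 3, 4}, 11: {1, 3, 4}, 17: {1, 3, 4}, 20: {1, 3, 4}, 18: {1, 3, 4},
    15: {1, 3, 4, 5}, 7: {1, 3, 4, 5}, 19: {1, 3, 4, 5},
    10: {1, 2, 3, 4},
}

def ideal_response_pattern(mastered: set) -> dict:
    """Items a noise-free (slip = guess = 0) DINA examinee gets right."""
    return {item: required <= mastered for item, required in Q.items()}

# An examinee with only skills 1 and 3 should solve the EM 1 and EM 3 items.
pattern = ideal_response_pattern({1, 3})
```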

## Data Sets

Three versions of this data are available:

**mixnum test**: an old version of the mixed number subtraction network which used Ergo instead of Netica.

**mixnum fixed tables**: a version using fixed tables, suitable for scoring, but not calibration.

**mixnum**: a version usable for calibration.

Tarballs containing all of the files are available from http://pluto.coe.fsu.edu/BNinEA/MixedNumberSubtraction.

We do not have permission to redistribute the data from the original Tatsuoka (1984) analysis, but the mixnum version contains a randomly generated data set.


Page last modified on November 27, 2014, at 01:23 PM