The aMath assessment is a computer-adaptive test (CAT) of broad mathematics skills for students in grades K through 12. The difficulty of the items a student receives is determined by the student's performance on prior items. This article answers questions about the most recent update to the aMath item-selection algorithm.
Updated aMath Item Difficulty Range
The computer adaptive testing (CAT) algorithm used to administer FastBridge's aMath and aReading assessments is designed to maximize efficiency without compromising the reliability of the overall ability estimate. To that end, the algorithm selects items that best match the student's current ability estimate, which is based on the student's performance on the items already completed. For example, if a student's ability estimate after completing 10 aMath items is 210, the algorithm searches the item bank for the 10 items best matched to that ability and randomly selects one of them for administration. This process continues until the student completes up to 30 items or attains a sufficiently precise score estimate.
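To make the selection step more concrete, the following is a minimal Python sketch of the process described above. The item bank structure, the field and function names, and the stopping threshold are illustrative assumptions, not FastBridge's actual implementation.

```python
import random

# Illustrative item records: each item has a difficulty on the same scale
# as the student's ability estimate (e.g., an aMath score such as 210).
ITEM_BANK = [
    {"id": 101, "difficulty": 195.0, "standard_grade": 3},
    {"id": 102, "difficulty": 208.0, "standard_grade": 4},
    {"id": 103, "difficulty": 211.0, "standard_grade": 5},
    # ... many more items ...
]

def select_next_item(ability_estimate, administered_ids, item_bank, pool_size=10):
    """Find the items whose difficulty is closest to the current ability
    estimate, then administer one of them chosen at random."""
    available = [item for item in item_bank if item["id"] not in administered_ids]
    best_matched = sorted(
        available, key=lambda item: abs(item["difficulty"] - ability_estimate)
    )[:pool_size]
    return random.choice(best_matched)

def test_is_finished(n_items_administered, standard_error,
                     max_items=30, se_threshold=5.0):
    """Stop after 30 items or once the score estimate is sufficiently precise.
    The standard-error threshold here is a placeholder value."""
    return n_items_administered >= max_items or standard_error <= se_threshold
```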
What Has Changed?
In November 2022, a minor modification was made to the algorithm. With this modification, item selection for aMath is constrained to items that assess standards no more than two grade levels above the student's enrolled grade. The purpose of this update is to better align the items administered with the content and skills to which the student has been exposed. The table below shows how the constraint applies at each enrolled grade level. Note that this update applies only to aMath, not aReading.
Grade Range of Available aMath Items for Each Enrolled Grade Level:
| Enrolled Grade Level | Grade Range of Available Items |
| --- | --- |
| KG | KG-2 |
| 1 | KG-3 |
| 2 | KG-4 |
| 3 | KG-5 |
| 4 | KG-6 |
| 5 | KG-7 |
| 6 | KG-8 |
| 7-12 | KG-12 |
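As a rough illustration of how such a restriction could be layered on top of item selection, the sketch below filters the item bank by a hypothetical standard_grade field (with kindergarten coded as grade 0) before the best-matched items are chosen. The field name and grade coding are assumptions for illustration only.

```python
def max_item_grade(enrolled_grade):
    """Highest item grade allowed for an enrolled grade (0 = kindergarten).
    Per the table above, students in grades 7-12 have access to the full
    KG-12 item bank."""
    return 12 if enrolled_grade >= 7 else enrolled_grade + 2

def apply_grade_restriction(item_bank, enrolled_grade):
    """Keep only items assessing standards no more than two grade levels
    above the student's enrolled grade."""
    ceiling = max_item_grade(enrolled_grade)
    return [item for item in item_bank if item["standard_grade"] <= ceiling]

# Example: a 4th grader's pool is limited to items assessing KG-6 standards.
# restricted_bank = apply_grade_restriction(ITEM_BANK, enrolled_grade=4)
```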
Why Update the Algorithm?
Prior to this update, it was possible for a student to be administered an item assessing content and skills more than three grade levels above the student's enrolled grade. While this was a fairly rare occurrence, it could confuse or frustrate the student, and it increased the probability of the student failing the item relative to other items of similar difficulty that were better aligned to the student's grade level.
This situation occurs when a concept introduced in a given grade is quickly mastered by students in that grade but cannot be inferred by students who have not yet been exposed to it. When this happens, the item is relatively easy for students who have been taught the concept but often very hard for students in a lower grade who have not. This phenomenon, known as an item difficulty by grade-level interaction, is a form of differential item functioning (DIF): for two students of the same ability but in different grades, the student in the lower grade has a lower probability of answering the item correctly. This can bias the student's ability estimate, and the effect becomes stronger the farther above the student's grade level the concept is introduced.
Consider the following item:
What is the value of |-12|?
This item asks students for the absolute value of negative 12. For students who have learned the concept of absolute value, the item is very easy. For students who have not been exposed to the concept, however, the absolute value symbol lacks meaning, so they are likely to answer it incorrectly.
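To put rough numbers on this effect, the sketch below uses a simple Rasch-style (logistic) model in which lack of exposure acts like an added difficulty shift for the lower-grade student. The model form and the specific values are illustrative assumptions, not the parameters FastBridge uses.

```python
import math

def p_correct(ability, difficulty):
    """Rasch-style probability of a correct response (logistic model)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

ability = 1.0            # same ability for both students (logit scale)
item_difficulty = 1.0    # difficulty calibrated on students taught the concept
dif_shift = 2.0          # effective extra difficulty when the concept is unfamiliar

p_exposed = p_correct(ability, item_difficulty)                # about 0.50
p_unexposed = p_correct(ability, item_difficulty + dif_shift)  # about 0.12

print(f"Exposed student:   {p_exposed:.2f}")
print(f"Unexposed student: {p_unexposed:.2f}")
```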
The +2 grade restriction reduces the likelihood that students will encounter items that assess concepts they have not been taught. In addition, this approach better utilizes items of similar difficulty that assess more familiar concepts.
How Will This Change Affect High-Ability Students?
This change will not affect the ability estimation of high-ability students, because the aMath item bank contains a sufficient number of very difficult items within two grade levels of any given grade to accurately estimate the ability of students scoring at or above the 99th percentile.
To test this assertion, a series of simulation analyses was conducted. These analyses simulated very high-ability students (i.e., with scores at or above the 99th percentile) taking the aMath test with the grade restriction in place. Specifically, the simulation generated ability scores from a uniform distribution ranging from the 99th percentile to the highest score achieved in a given grade. The adaptive algorithm operated as designed under the new grade restriction, in which only aMath items from kindergarten up to two grade levels above the student's enrolled grade were used.
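The sketch below is a highly simplified, hypothetical version of such a simulation for a single student, run against a bank already filtered by the grade restriction. The starting score, response model, scale constant, and ability-update rule are all placeholders standing in for the actual psychometric models used in the analyses.

```python
import math
import random

def simulate_restricted_cat(true_ability, restricted_bank, n_items=30, scale=10.0):
    """Very rough CAT run for one simulated student: administer the
    closest-difficulty unused item from the grade-restricted bank, simulate
    a logistic (Rasch-like) response at the student's true ability, and
    nudge the running estimate accordingly."""
    estimate = 200.0  # illustrative starting score
    used = set()
    for k in range(1, n_items + 1):
        pool = [item for item in restricted_bank if item["id"] not in used]
        if not pool:
            break
        item = min(pool, key=lambda i: abs(i["difficulty"] - estimate))
        used.add(item["id"])
        p = 1.0 / (1.0 + math.exp(-(true_ability - item["difficulty"]) / scale))
        correct = random.random() < p
        # Crude fixed-step update standing in for a real IRT ability estimator.
        estimate += (1.0 if correct else -1.0) * (20.0 / k)
    return estimate

# Simulated abilities are drawn uniformly between the grade's 99th-percentile
# score and its highest observed score (both placeholders here):
# abilities = [random.uniform(p99_score, max_score) for _ in range(1000)]
```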
The findings demonstrated that, with the restriction in place, the available items were sufficient to provide accurate ability estimates for extremely high-ability students. A very small fraction of the highest-ability students in Grades 5 and 6 could answer all items correctly, but their ability estimates would still be accurate because the items administered were matched to their ability. Moreover, this very rare possibility of extremely high-ability students in these grades answering every item correctly also exists without the restriction.
Why Not Apply This Update to aReading?
As noted above, this update was not applied to aReading because the item difficulty by grade-level interaction is not a factor in reading. Unlike math, reading develops through stages, and development within each stage is rather gradual. Some math concepts and skills also show gradual development, while others, like absolute value, develop abruptly and quickly become rote. It is the distinctiveness and abruptness of mastering a concept that appears to cause the phenomenon.
aReading does include a restriction on when a multi-question passage can be administered, to prevent lengthy passages from being administered to students who are still learning to read. Based on extensive analyses, FastBridge researchers determined an ability level below which students struggle substantially with these items. Further analyses indicated that more accurate ability estimates were obtained for these students when longer, multi-item passages were not administered.
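In code terms, this restriction amounts to a simple eligibility check on the student's running ability estimate. The function name and threshold below are hypothetical; the actual cut point was determined internally through FastBridge's analyses.

```python
def multi_question_passage_allowed(ability_estimate, passage_threshold=450.0):
    """Multi-question passages become eligible only once the student's running
    ability estimate reaches the threshold (placeholder value shown)."""
    return ability_estimate >= passage_threshold
```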
Next Steps
To learn more about the aMath Assessment, take a look at the aMath Overview Article.