Note: This assessment requires sound to be turned on.
The Formative Assessment System for Teachers™ (FastBridge) Adaptive Reading (aReading) assessment is a computer-adaptive measure of broad reading ability that is individualized for each student. aReading provides a useful estimate of broad reading achievement from kindergarten through twelfth grade. The questions and response format used in aReading are substantially similar to those of many state-wide, standardized assessments. aReading is a simple, efficient, and fully automated procedure. Browser-based software adapts and individualizes the assessment for each child so that it functions essentially at the child’s developmental and skill level. The adaptive nature of the test makes it more efficient and more precise than paper-and-pencil assessments. aReading administers between 30 and 60 questions to each student.
The design of aReading has a strong foundation in both research and theory. During the early phases of student reading development, the component processes of reading are most predictive of future reading success (Stanovich, 1981, 1984, 1990; Vellutino & Scanlon, 1987, 1991; Vellutino, Scanlon, Small, & Tanzman, 1991). Indeed, reading disabilities are most frequently associated with deficits in accurate and efficient word identification. Those skills are necessary but not sufficient for reading to occur. After all, reading is comprehending and acquiring information through print; it is not merely rapid word identification, or what Samuels (2007) termed “barking at words.” As such, a unified reading construct is necessary to enhance the validity of reading assessment and inform balanced instruction throughout the elementary grades. aReading was developed based on a skills hierarchy and a unified reading construct.
aReading may be used by teachers to screen students with tri-annual assessments (i.e., fall, winter, spring) and, when administered two to three times a year, to evaluate annual growth.
Computer Adaptive Testing (CAT)
Classroom assessment practices have yet to benefit fully from advancements in both psychometric theory and computer technology. Today, almost every school and classroom in the United States provides access to computers and the Internet. Despite this improved access, few educators use the technology to its full potential. Within an Item Response Theory (IRT)-based Computer Adaptive Test (CAT), items are selected based on the student’s performance on all previously administered items. As a student answers each item, the item is scored in real time and his or her ability (theta) is estimated. When a CAT is first administered, items are selected via a “step rule” (Weiss, 2004). That is, if a student answers an initial item correctly, his or her theta estimate increases by some value (e.g., .50); conversely, if an item is answered incorrectly, the theta estimate decreases by that same amount. As testing continues, the student’s ability is re-estimated, typically by Maximum Likelihood Estimation (MLE).
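To make this scoring logic concrete, the sketch below applies a fixed step rule until the response pattern contains both a correct and an incorrect answer (MLE is undefined for all-correct or all-incorrect patterns) and then switches to maximum likelihood estimation. The two-parameter logistic (2PL) model, the item parameters, and the 0.50 step size are hypothetical illustrations, not aReading’s operational values.

```python
# Illustrative CAT scoring sketch. The 2PL model, the item parameters, and
# the 0.50 step size are hypothetical; this is not aReading's implementation.
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses):
    """Grid-search maximum likelihood estimate of theta over [-4, 4].

    responses: list of (a, b, scored) tuples, with scored in {0, 1}.
    """
    grid = [x / 100.0 for x in range(-400, 401)]
    def log_lik(theta):
        ll = 0.0
        for a, b, scored in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p) if scored else math.log(1.0 - p)
        return ll
    return max(grid, key=log_lik)

STEP = 0.50  # fixed step applied while MLE is undefined
theta, responses = 0.0, []
for a, b, scored in [(1.2, -0.5, 1), (1.0, 0.2, 1), (1.4, 0.8, 0)]:
    responses.append((a, b, scored))
    scores = [r[2] for r in responses]
    if all(scores) or not any(scores):
        # All correct or all incorrect so far: MLE diverges; use the step rule.
        theta += STEP if scored else -STEP
    else:
        theta = mle_theta(responses)  # mixed pattern: switch to MLE
    print(round(theta, 2))  # 0.5, 1.0, then the first MLE estimate
```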
After an item is administered and scored, theta is re-estimated and used to select the subsequent item. Among the items that have not yet been administered, the one that provides the most information at the current theta estimate (based on the item information function) is selected for the examinee to complete. The test is terminated after a specific number of items have been administered (a fixed-length test) or after a certain level of precision, measured by the standard error of the theta estimate, is achieved. Subsequent administrations begin at the previous theta estimate and present only items that have not yet been administered to that particular student. Research using simulation methods and live data collection has been performed on aReading to optimize the length of administrations, the size of the initial step, and the item selection algorithm so as to maximize the efficiency and psychometric properties of the assessment.
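The selection and termination logic can be sketched as follows. The tiny item bank, the 2PL information function, and the stopping thresholds (60 items; a standard error of 0.30) are hypothetical examples rather than aReading’s operational settings.

```python
# Illustrative item selection and stopping rules for a CAT. The bank, the
# 2PL information function, and the thresholds are hypothetical examples.
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    """Choose the not-yet-administered item most informative at theta."""
    candidates = {k: v for k, v in bank.items() if k not in administered}
    return max(candidates, key=lambda k: item_information(theta, *candidates[k]))

def should_stop(n_administered, total_information, max_items=60, se_target=0.30):
    """Terminate at a fixed length or once precision (SE of theta) suffices."""
    se = 1.0 / math.sqrt(total_information) if total_information > 0 else math.inf
    return n_administered >= max_items or se <= se_target

# Tiny demo bank: item id -> (discrimination a, difficulty b)
bank = {"item1": (1.2, -0.5), "item2": (0.9, 0.0), "item3": (1.5, 0.4)}
theta, administered, total_info = 0.0, set(), 0.0
while len(administered) < len(bank) and not should_stop(len(administered), total_info):
    item = select_next_item(theta, bank, administered)
    administered.add(item)
    total_info += item_information(theta, *bank[item])
    # ...administer and score the item, then re-estimate theta as shown above...
print(sorted(administered), round(1.0 / math.sqrt(total_info), 2))
```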
There are multiple benefits of CAT as compared to traditional paper-and-pencil tests or non-adaptive computerized tests. The benefits that are most often cited in the professional literature include (a) individualized dynamic assessment, which does not rely on a fixed set of items across administrations/individuals; (b) testing time that is reduced by one-half to one-third (or more) of traditional tests because irrelevant items are excluded from the administration; (c) test applicability and measurement precision across a broad range of skills/abilities, and (d) more precise methods to equate assessment outcomes across alternate forms or administrations (Kingsbury & Houser, 1999; Weiss, 2004; Zickar, Overton, Taylor, & Harms, 1999).
IRT-based CAT can be especially useful in measuring change over time. CAT applications that are used to measure change or progress have been defined as adaptive self-referenced tests (Weiss, 2004; Weiss & Kingsbury, 1984) or, more recently, adaptive measurement of change (AMC; Kim-Kang & Weiss, 2007, 2008). AMC measures change in an individual’s skills/abilities with repeated CATs administered from a common item bank. Because AMC is a CAT based on IRT, it eliminates most of the problems that arise when change (e.g., academic growth) is measured with traditional assessment methods based on classical test theory. Kim-Kang and Weiss (2007) demonstrated that change scores derived from AMC do not have the undesirable properties characteristic of change scores derived by classical testing methods. Research suggests that longitudinal measurements obtained from AMC have the potential to be sensitive to the effects of treatments and interventions at the single-person level and are generally superior measures of change when compared to assessments developed within a classical test theory framework (VanLoy, 1996). Finally, AMC compiles data and performance estimates (θ) from across administrations to enhance the adaptivity and efficiency of CAT.
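As a simple illustration of evaluating change between two administrations, the sketch below applies a conventional z-test to the difference between two theta estimates, using their standard errors. This is a generic approach to testing whether measured change exceeds measurement error, not the specific AMC procedure used by aReading; the theta values and standard errors are invented for the example.

```python
# Illustrative check for reliable change between two CAT administrations.
# The z-test on a difference of two theta estimates with independent
# standard errors is a common textbook approach; it is a sketch, not the
# AMC procedure used by aReading. All values below are hypothetical.
import math

def change_is_reliable(theta1, se1, theta2, se2, z_crit=1.96):
    """True if the theta change exceeds chance at roughly the 95% level."""
    z = (theta2 - theta1) / math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(z) >= z_crit

# Fall theta = -0.20 (SE 0.30); spring theta = 0.85 (SE 0.28)
print(change_is_reliable(-0.20, 0.30, 0.85, 0.28))  # True: growth detected
```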
Uses and Applications
aReading is intended for screening students from kindergarten through twelfth grade. Items developed for kindergarten through grade 5 target Concepts of Print, Phonological Awareness, Phonics, Vocabulary, and Comprehension. Items developed for middle and high school grade levels target Orthography, Morphology, Vocabulary, and Comprehension. Please note, however, that the importance of and emphasis on each reading domain will vary across children. Each assessment is individualized by the software and, as a result, the information and precision of measurement are optimized regardless of whether a student functions at, above, or significantly below grade level. A brief review of the use of aReading for Screening and Progress Monitoring is provided below.
Screening
aReading is used by teachers to screen all students and estimate annual growth with tri-annual assessments (fall, winter, and spring). Students who progress at a typical pace through the reading curriculum meet the standards for expected performance at each point in the year. Students with achievement deficits can be identified in the fall of the academic year so that supplemental, differentiated, or individualized instruction can be provided. Possible aReading scores range from 350 to 750.
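For illustration only, a bounded reporting scale of this kind can be modeled as a linear transformation of theta that is clamped to the scale’s floor and ceiling. The center, slope, and rounding in the sketch below are invented for the example; apart from the 350-750 range noted above, these are not FastBridge’s published scaling constants.

```python
# Hypothetical linear rescaling of theta onto a 350-750 reporting scale.
# The center (500) and slope (50) are invented for illustration; they are
# not FastBridge's actual scaling constants.
def to_scale_score(theta, center=500.0, slope=50.0, lo=350, hi=750):
    """Map an IRT theta estimate onto a bounded integer reporting scale."""
    return int(min(hi, max(lo, round(center + slope * theta))))

print(to_scale_score(0.0))   # 500 under these invented constants
print(to_scale_score(-3.5))  # 350: clamped at the floor of the scale
```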
Progress Monitoring
aReading is used two to three times a year to evaluate annual growth. It is not intended for weekly progress monitoring. Frequent monitoring is done with the earlyReading assessment or Curriculum-Based Measurement for Reading (CBMreading). Research is ongoing to develop features that provide instructional recommendations and automate progress monitoring across skill areas.
Target Population
aReading is designed for all students in kindergarten through twelfth grade.