Introduction
All 50 states require annual assessments of students enrolled in public schools. Each state has the right to establish its own learning expectations (i.e., standards) and to require a state-specific test that measures attainment of those standards. To anticipate which students might score low on the state assessment, many school districts use routine screening measures to identify students who need additional instruction to meet grade-level learning goals. Student performance on the state assessment is also sometimes used as an indicator of the quality of education in a specific school or district. For these reasons, most educators are eager to know which assessments best predict student performance on state tests. FastBridge assessments are designed to help teachers determine whether a student is at risk of not meeting grade-level learning goals.
A strong body of research supports the reliability and validity of FastBridge assessments. Details about the available correlations between FastBridge assessments and state assessments are found in the document titled Correlations Between FastBridge Measures and State Tests. As of July 2018, data from state assessments in Georgia, Illinois (PARCC), Massachusetts, Michigan, and Minnesota are available. Illuminate Education, FastBridge Research Labs, and university research partners continue to conduct research documenting the relationships between FastBridge assessments and various state tests, and additional documentation will be published as it becomes available. FastBridge Research Labs also offers ongoing analysis of data sets from all states where FastBridge assessments are used. In the absence of concurrent data on both FastBridge and state assessments for all areas of the U.S., the National Assessment of Educational Progress (NAEP) provides an interim tool for estimating what scores students are likely to earn on state tests based on their FastBridge performance, because it is the only assessment completed by samples of students in every state. To understand how NAEP data can be helpful, some background about current state learning standards is needed.
Role of Learning Standards
In an effort to make learning standards more consistent across all 50 states, the National Governors Association and the Council of Chief State School Officers developed and published universal reading and math standards that states could choose to adopt. Known as the Common Core State Standards (CCSS), these standards had been adopted by 41 states, the District of Columbia, four territories, and the Department of Defense schools as of 2018 (Common Core State Standards Initiative, 2018). With a majority of states using the same learning standards, it became possible for multiple states to use the same annual assessment for all students. Three organizations developed testing programs aligned with the CCSS that adopting states could select for annual assessment: the ACT Aspire, the Partnership for Assessment of Readiness for College and Careers (PARCC), and the Smarter Balanced Assessment Consortium (SBAC). In contrast to prior decades, when there were up to 50 different state tests, the adoption of these uniform assessments made it possible to compare student performance across states using the same test. It is important to note, however, that although many states might use the same assessment, each state decides what score level indicates proficiency. That is, each state determines which scores on its adopted test show that a student meets or exceeds the established standards, so any two states using the same test could set different proficiency cut scores. Given these potential differences in proficiency levels even when the same assessment is used, there remains only one measure administered in all U.S. states: the National Assessment of Educational Progress (NAEP).
National Assessment of Educational Progress
The NAEP is a biennial assessment used by the U.S. Department of Education to review the math and reading performance of students in grades 4 and 8. Also referred to as the Nation’s Report Card, the NAEP is completed in odd-numbered years by a random sample of students in all 50 states. In 2018, a new analysis of NAEP data was published that examined similarities and differences in participating students’ scores in relation to all available state learning standards. This study mapped state proficiency standards onto the NAEP scales, using equipercentile mapping with data from the ACT, PARCC, and SBAC assessments (Bandeira de Mello, Rahman, & Park, 2018). The full text of the analysis is attached below. The goal of the analysis was to compare student performance on state assessments and the NAEP and to examine trends in student performance. As shown in the data tables in the attached full analysis, not all states that adopted the CCSS used one of the three national testing programs; where data from another state test were available, the equivalent NAEP score for that state’s proficiency levels was calculated. The resulting analysis included estimates for 46 states’ reading and math scores in grade 4, as well as 43 states’ reading scores and 32 states’ math scores in grade 8. Some states’ data were not included in the analysis due to testing schedules or unique proficiency criteria (Bandeira de Mello et al., 2018). An interesting outcome of this analysis was that, despite the adoption of the CCSS and accompanying aligned assessments, the cut scores adopted by each state resulted in differences in the corresponding equivalent NAEP scores.
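Equipercentile mapping matches scores on two tests that occupy the same percentile rank in a common population. The following Python sketch illustrates the basic idea with fabricated normal score distributions and hypothetical cut scores; the actual NCES procedure works from school-level proficiency results and is considerably more involved.

```python
import numpy as np

def equipercentile_map(state_scores, naep_scores, state_cut):
    """Return the NAEP score at the same percentile rank that
    `state_cut` occupies in the state-test score distribution."""
    state_scores = np.sort(np.asarray(state_scores))
    # Percentile rank (0-100) of the cut score among state-test scores.
    pct_rank = 100.0 * np.searchsorted(state_scores, state_cut, side="right") / state_scores.size
    # The NAEP score at that same percentile is the equipercentile equivalent.
    return np.percentile(naep_scores, pct_rank)

# Illustrative (fabricated) score distributions for one state's cohort.
rng = np.random.default_rng(0)
state = rng.normal(740, 30, size=5000)   # hypothetical state-test scale
naep = rng.normal(240, 25, size=5000)    # NAEP scale runs 0-500
print(round(equipercentile_map(state, naep, state_cut=760)))
```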
The average and range of NAEP scale equivalent scores for each of the three CCSS-aligned state assessments are shown in the following table. Although the averages across the three assessments are fairly similar, the ranges of scores across states varied more widely. The NAEP equivalent average was highest for the PARCC in math at both grade levels and in grade 8 reading; in grade 4 reading, the ACT average was highest. The ACT and SBAC averages were more similar to each other and were identical for grade 8 reading. The ACT had the narrowest range, although only two states used that assessment. The PARCC had the widest range across states at both grade levels in both reading and math. SBAC scores generally fell between the ACT and PARCC equivalents.
| Grade | Subject | ACT (N=2) | PARCC (N=12) | SBAC (N=19) |
|-------|---------|-----------|--------------|-------------|
| Grade 4 | Math | 236 (233-239) | 257 (221-260) | 245 (240-250) |
| Grade 4 | Reading | 235 (231-239) | 227 (193-240) | 227 (220-233) |
| Grade 8 | Math | 290 (288-293) | 300 (261-304) | 294 (289-303) |
| Grade 8 | Reading | 266 (265-267) | 275 (237-280) | 266 (262-273) |

Values are average (range) NAEP scale equivalent scores; N is the number of states using each assessment.
Implications for FastBridge Benchmarks
The findings from the NAEP mapping study have implications for all assessments that seek to provide information about which students need additional instruction to meet their state’s learning expectations. These findings suggest that educators need to understand their specific state’s learning expectations and how those expectations vary across the U.S. The differences in NAEP score equivalents among states using the same assessment indicate meaningful differences in what students are expected to know and do. Although the 2018 NAEP mapping study was the first of its kind, it suggests that states’ performance expectations vary enough that educators should be aware of these differences when selecting and using screening assessments. Considerations when reviewing your state’s proficiency expectations include:
- Are my state’s standards low, middle, or high on the NAEP equivalents?
- What assessments have served as accurate predictors of my students’ state-assessment performance?
- How do my state’s expectations compare with the default FastBridge benchmark settings?
FastBridge Default Benchmark Settings
For each FastBridge assessment, there are benchmark indicators designed to show whether a student’s score suggests low, some, or high risk of not reaching grade-level learning goals. One application of the benchmarks is to identify students who might need assistance in order to meet the state’s expectations on the adopted state assessment. FastBridge benchmarks were developed based on prior research about predicting student performance (Goffreda, DiPerna, & Pedersen, 2009). This research suggests that screening scores at or below the 15th percentile indicate a high risk of not meeting later goals, scores between the 15th and 40th percentiles suggest some risk, and scores above the 40th percentile generally indicate low risk. Because there is so much variability in states’ proficiency standards, it may be important to customize screening benchmarks so that they align with local expectations. A simple sketch of this percentile logic appears below.
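The following hypothetical Python sketch classifies a screening score against a norming sample using the 15th/40th percentile cut points described above. The norming sample here is a fabricated stand-in, not FastBridge’s actual norm tables.

```python
import numpy as np

# Hypothetical norming sample; actual FastBridge norms differ.
norms = np.sort(np.random.default_rng(1).normal(100, 15, size=2000))

def risk_level(score, norms, high_cut=15, some_cut=40):
    """Classify a screening score against percentile-based benchmarks:
    at or below `high_cut` -> high risk, up to `some_cut` -> some risk,
    above `some_cut` -> low risk."""
    pct = 100.0 * np.searchsorted(norms, score, side="right") / norms.size
    if pct <= high_cut:
        return "high risk"
    if pct <= some_cut:
        return "some risk"
    return "low risk"

for s in (75, 95, 115):
    print(s, risk_level(s, norms))
```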
FastBridge users can set custom benchmarks that differ from the defaults automatically loaded when an account is created. To decide whether custom benchmarks are warranted, users should review the attached NAEP report and determine whether their state’s standards are low, medium, or high. For example, for FastBridge users in states that have adopted the SBAC assessment, the range of NAEP equivalent scores for fourth grade reading is 220 to 233. If your state’s NAEP equivalent is at the low end of that range, using the default FastBridge benchmarks could make sense. If your state’s equivalent score is at the high end of that range, higher benchmarks, perhaps set at the 50th or 60th percentile, might yield more accurate predictions from FastBridge measures. Given the body of prior research, the default FastBridge settings should be accurate for predicting student performance in states with lower NAEP equivalent scores. Students in states with higher proficiency expectations, however, may benefit from higher benchmarks that provide more precise information about which students are most likely to need additional instruction. In such states, raising the benchmarks might increase the number of students identified as needing help, but it could also lead to more students reaching the state’s expectations; the sketch below illustrates this trade-off. After reviewing the attached NAEP report, if your district decides to use custom benchmarks, please contact the FastBridge Support Team at help@fastbridge.org. The Master Account Administrator will need to approve the change, and FastBridge staff will then work with your local team to adjust the benchmarks.
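As an illustration of that trade-off, the following hypothetical sketch counts how many students in a simulated district would be flagged for support under the default 40th-percentile cut versus a raised 55th-percentile cut. All distributions are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
norms = np.sort(rng.normal(100, 15, size=2000))  # hypothetical norming sample
district = rng.normal(100, 15, size=500)         # one district's screening scores

def flagged_for_support(scores, norms, cut_percentile):
    """Count students scoring below the benchmark cut score."""
    cut_score = np.percentile(norms, cut_percentile)
    return int(np.sum(scores < cut_score))

# Default low-risk benchmark (40th percentile) vs. a raised one (55th).
print("40th percentile cut:", flagged_for_support(district, norms, 40))
print("55th percentile cut:", flagged_for_support(district, norms, 55))
```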
References
Bandeira de Mello, V., Rahman, T., & Park, B. J. (2018). Mapping state proficiency standards onto NAEP scales: Results from the 2015 NAEP reading and mathematics assessments (NCES 2018-159). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. Retrieved from http://nces.ed.gov/pubsearch
Common Core State Standards Initiative. (2018). Standards in your state. Retrieved from http://www.corestandards.org/standards-in-your-state/
Goffreda, C. T., DiPerna, J. C., & Pedersen, J. A. (2009). Preventive screening for early readers: Predictive validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). Psychology in the Schools, 46, 539-552. doi:10.1002/pits.20396