Is the intervention/instruction working?
It is easy to lose sight of the real function of progress monitoring (PM) when it is treated mainly as a compliance requirement.
The most important reason we do PM is to know when to intensify instruction so students make enough progress to close the gap. This requires quality goal setting and regular review of the PM data that results in instructional decisions. Proactively plan time to look at the data and decide whether the child is progressing as needed, or whether a change in the package of instruction and intervention is needed.

Several features of the PM graph can be used for a quick check of student progress. Be aware that you cannot make good decisions about improvement if the goal for the student is not set correctly. See Set Quality Goals below.
The easiest method for gauging improvement is a simple visual inspection of the trend. Is the student headed in the right direction? If the goal is correctly set, is the trend above or below the goal, and will it reach or stay above the goal if projected to the end of the school year?
Clearly, this student’s current trend is not good. The student had an intervention change that may not have been a good choice. Is there another factor that explains this, or should the intervention be changed or abandoned?
This student’s trend prior to the intervention change showed the student losing ground. The trend after the new intervention shows the child regaining lost ground.
The icons on the testing page and in the PM detail below the graph indicate whether each PM data point is above (upward triangle), near (square), or below (downward triangle) the goal line, and by how much. At a glance, you can see that the first child on the list has taken a recent downward turn, and the second child has not been making good progress for weeks on end.
The detailed PM information also includes a summary of the goal line trend (weekly improvement needed) as well as the intervention trend lines. If the goal was set up correctly, you can compare the growth needed (goal line trend) against the observed growth (intervention trend and total trend).
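That comparison amounts to comparing two slopes: the weekly gain the goal line requires versus the slope actually observed in the PM data. A minimal sketch of the idea (the scores, week numbers, and function names here are illustrative assumptions, not FastBridge's internal calculations):

```python
def needed_weekly_gain(start_score, goal_score, weeks):
    """Weekly improvement required by the goal line."""
    return (goal_score - start_score) / weeks

def observed_weekly_gain(week_numbers, scores):
    """Least-squares slope of the PM data points (words per week)."""
    n = len(scores)
    mean_w = sum(week_numbers) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(week_numbers, scores))
    den = sum((w - mean_w) ** 2 for w in week_numbers)
    return num / den

# Illustrative numbers only: start 20 WPM, spring goal 110, 36-week year.
needed = needed_weekly_gain(20, 110, 36)    # 2.5 words/week required
observed = observed_weekly_gain(
    [1, 2, 3, 4, 5, 6],
    [20, 21, 23, 24, 24, 26],
)                                           # ~1.14 words/week observed
print(f"needed {needed:.2f}/wk vs observed {observed:.2f}/wk")
```

If the observed slope sits well below the needed slope, as in this sketch, the current intervention will not close the gap.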
Set Quality Goals
Note: Goals should be set to reach spring benchmarks for grade level. For 2% or less of students, an off grade level goal may be considered. Please see: Off-Level Progress Monitoring
The purpose of progress monitoring (PM) in early literacy is to keep an eye on the student’s progress toward closing the gap and becoming a good reader. This means we need goals that will get kids there. The default goals that are automatically set for new PM may not do the job and may need adjusting. In fact, accepting the default PM goal sometimes results in a bad goal for the child, which leads to poorly informed progress decisions. The most common errors in goal setting are accidentally "accepting" zero for the starting score and forgetting to compare the proposed goal line to the end/spring benchmark.
The function of the goal line is to show the pathway from where the child was to where the child needs to be. Note that the screening score often does not import, depending on the timing of the screening relative to when the PM plan was created; in addition, neither computer-adaptive measures nor subtests from composites populate into the student's PM plan.
The first graph below looks great, but it is deceiving: on closer examination, it is set up with a zero start score and an incorrect end benchmark. When the same child’s progress monitoring is set up correctly, as shown on the second graph, we see that the child’s progress is not good. Keep reading to see how to avoid this problem when setting up goals.
In the first example above, the third-grade student’s PM setup did not have a start score, since he was set up off-level and prior to screening. You will see a zero start score if the PM measure was not used for screening, if the PM was created prior to screening, or if the PM measure is off grade level. When PM was set up, the FastBridge system default weekly gain labeled "realistic" was used to set the goal, along with the zero start score. A starting point of 0 plus 1.4 words per week of gain across a full school year means an end-of-year goal of only 52 WPM, which is woefully low for a third-grade student whose end-of-year benchmark is 131.
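The arithmetic behind that default goal is easy to check. A minimal sketch, assuming roughly 37 instructional weeks in the school year (the week count is an assumption for illustration, not a FastBridge constant):

```python
start_score = 0      # the zero start score accidentally accepted at setup
weekly_gain = 1.4    # the "realistic" default weekly gain
weeks = 37           # assumed instructional weeks in a full school year

default_goal = start_score + weekly_gain * weeks
print(f"{default_goal:.1f}")   # 51.8 -> rounds to the 52 WPM goal

benchmark = 131                # third-grade end-of-year benchmark
print(default_goal < benchmark / 2)   # True: under half of what is needed
```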
The progress monitoring graph based on this goal makes it look like the child is making excellent progress (see the graph again below). The child’s trend appears to be well above the goal, and while the slope suggests the child will lose some ground relative to the goal line, it appears that the trend will still be above the goal at the end of the year. This seems like good news. Unfortunately, the end of year goal of 52 used in this graph is less than half of what is needed to reach the end of year benchmark (131). Since the purpose of PM is to keep track of whether the intervention is working (and "working" means that the child gets back on track and is above benchmark) it makes absolutely no sense to use a goal that offers such a terribly low expectation for the child. This child will NOT meet the end of year benchmark.
Anyone who reads the PM graph shown above without knowing to check the defaults in goal setup would think the child was doing a stellar job, when in fact he is not doing well. To improve the graph, the goal must be changed from the default settings. If we change the starting score to the initial screening result (24) instead of zero, it is clear the child’s trend is not going to reach the goal, which is now based on an increase of 1.4 words per week. (For PM set up between windows, it may work better to enter the first PM result as the goal line start score.) The graph below shows this change. For reference, the end-of-year benchmark is also shown on the graph as a horizontal solid line.
Notice that the goal set by the initial screening score and “realistic” growth (weekly gain) will still not get the child to the benchmark. The goal itself must be changed in the PM setup.
The third graph (below) shows what happens when the end-of-year benchmark is set as the goal. In this version of the graph, it is clear that the child is not making the kind of growth needed to close in on the benchmark score. In fact, the child has been steadily falling farther behind. Instruction and intervention must change.
While editing the start and end, you will notice the weekly gain value and its description change on the Monitoring Schedule page of the PM setup. The weekly rate is calculated from the difference between the starting score and the ending goal, while the description (realistic, ambitious, etc.) is based on observed growth for students in the Fast data set. Any two of the starting score, weekly gain, and goal can be set, and the third value will change accordingly. Lower the gain and the goal will be reduced; raise the goal and the gain will increase. You may notice that for CBMR the goal displayed in the graph doesn't always match the goal you entered; this is due to the use of adjusted goals.
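The three-way relationship is simple line arithmetic: start + gain × weeks = goal, so fixing any two of the values determines the third. A small sketch using this child's numbers (36 weeks is an assumed year length, not the system's exact figure):

```python
WEEKS = 36   # assumed number of monitoring weeks in the school year

def goal_from(start, gain):
    """End-of-year goal implied by a start score and weekly gain."""
    return start + gain * WEEKS

def gain_from(start, goal):
    """Weekly gain required to travel from the start score to the goal."""
    return (goal - start) / WEEKS

# Lower the gain and the goal drops accordingly:
print(f"{goal_from(24, 1.4):.1f}")   # 74.4
print(f"{goal_from(24, 1.0):.1f}")   # 60.0

# Raise the goal to the spring benchmark and the required gain changes:
print(f"{gain_from(24, 131):.2f}")   # 2.97 words/week needed
```

Note that the required gain for the benchmark goal (about 2.97 words per week) is more than double the "realistic" default of 1.4, which is exactly why the goal itself, not just the gain label, has to be checked.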
Why It Matters: Words for Coaches
The uncomfortable thing about this issue is that someone looking at the first graph without understanding the meaning and origin of the goal would assume the child is doing quite well, while in reality he’s falling farther behind as time progresses. The interventions currently in place are not going to close the gap for this child. When setting up the goals, do not trust the automatic settings to do the right thing for every child. It is critical to think about the goals for each child and make adjustments as appropriate. Remember that the reason we set goals and monitor progress is not to comply with the ELI law, it is to help kids to become successful readers. The good news is that we can make a profound impact on early literacy with quality instruction informed by quality data and goals. It makes sense to set goals that include a starting score, ambitious weekly gain, and a goal for spring grade level benchmark. Don’t just go through the motions - use the system to do the best possible for kids.