The other day I was taking a running record, where you give a child a book at their independent reading level and listen to them read, taking statistics on their speed, accuracy, and understanding. This has taken up all my small-group time for about two weeks now. It’s mandated for data collection at our building.
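The arithmetic behind a running record is simple; the time cost is in the listening. As a rough sketch (the exact thresholds vary by reading program, and the numbers here are made up):

```python
# Running-record arithmetic: percent of words read correctly.
# (Standard formula; level cutoffs vary by program.)
def accuracy_rate(total_words, errors):
    """Return the percent of words the child read correctly."""
    return 100 * (total_words - errors) / total_words

# A child reading a 100-word passage with 4 errors:
print(accuracy_rate(100, 4))  # 96.0 -- commonly counted as "independent" level
```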
When I stopped for planning time, I became privy to a discussion about grouping the 4th graders into intervention groups. There were 11×18-inch sheets of paper closely printed with the names of students in the 4th grade, beside their scores on the last Districtwide reading assessment. The scores ranged from 13% correct upward.
I was asked to put in my two cents on the data analysis process. “You could rank them by how many they got correct,” I said. “From highest to lowest, then split them up into properly sized groups.” But there was a sense that they’d done that before and it didn’t work that well.
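The rank-and-split idea I floated is mechanically trivial, which is part of why it disappoints: it sorts, it doesn’t diagnose. A minimal sketch, with hypothetical students and scores and `group_size` as a parameter you’d pick:

```python
# Sketch of "rank by score, then split into groups" (hypothetical data).
def rank_and_split(scores, group_size):
    """scores: dict mapping student name -> percent correct."""
    # Sort students from highest to lowest score.
    ranked = sorted(scores, key=lambda name: scores[name], reverse=True)
    # Chop the ranked list into consecutive groups of group_size.
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]

scores = {"Ana": 87, "Ben": 13, "Cam": 55, "Dee": 72, "Eli": 40, "Fay": 91}
print(rank_and_split(scores, group_size=2))
# [['Fay', 'Ana'], ['Dee', 'Cam'], ['Eli', 'Ben']]
```

Notice the groups reflect only totals: a child who missed every main-idea question and a child who missed every vocabulary question can land in the same group with the same score.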
“Split them by how well they did on the individual TEKS objectives?” That wouldn’t work either, because some students had satisfied all the objectives and others none. “Istation?” I queried. But Istation scores weren’t that accurate for this group of students, I was informed. I threw up my hands.
I stopped and thought. We had plenty of data. Yet the process of putting the students into groups for intervention was still a thorny one.
What was the missing piece? Perhaps the problem was that the data we were provided was not designed to diagnose reading dysfunction or to group kids for remediation. A District snapshot, like the STAAR, doesn’t tell you why the child can’t find the main idea or the right definition of a word. It just gives you lists of right and wrong answers.
Later that week, when we first grade teachers were told to analyze our own class’s District snapshot for our First Grade PLC, we spent our planning time and time after school working as a team just to figure out how to fill out the forms we were given. We got it done, but hours had gone by. And some of us had the sense that the changes we could make in instruction were minimal, along the lines of going over alphabetization a few more times. Not what I would call radical adjustments to instructional design.
It’s not that we don’t have any data … and no, we’re not dumb at math. We’re all college graduates! But District benchmark tests shouldn’t really be used to drive instruction. They’re designed to catch and punish schools with too many low-achieving students (not, actually, a purpose I particularly appreciate either), not to help figure out what to do to help these struggling students learn.
And why do we use Districtwide tests instead of more effective ones, like running records? Well, look at how long it took me to gather those running records. And once you have them, you can’t easily combine them across teachers to form grade-level intervention groups. Quality data costs teacher time, and teacher time is money. We need to be clear here: cheap data mismatched to the task at hand is almost worse than no data.