Teacher Question
I’m writing to you about high school progress monitoring for reading comprehension. Our school has learning goals for Reading Comprehension. Every two weeks, students read an on-grade-level passage and answer 5 multiple-choice questions that assess literal comprehension and main idea. Our data are not matching well with other data that we have (such as course passing rates and state assessments). What might be a more effective progress monitoring process, one that goes beyond the literal level and that would provide information the teachers could use to improve instruction?
Shanahan’s Response
I’m not surprised that approach is not working. There is so much wrong with it.
First, why test students so often? Does anyone really believe (or is there any evidence supporting the idea) that student reading ability is so sensitive to teaching that reading performance would be measurably changed in any 10-day period? Performance on measures like reading comprehension doesn’t change that quickly, especially with older students.
I don’t think it would be possible to evaluate reading comprehension more than 2 or 3 times over an entire school year and still hope to see any changes in ability. It is unlikely that students would experience meaningful, measurable changes in comprehension ability in shorter time spans. The changes from test to test that you might see would likely be meaningless noise – that is, test unreliability or student disgust. Acting on such differences (changing placement or curriculum, for instance) would, in most cases, be more disruptive than helpful.
I get why we seek brief, efficient assessments (e.g., a single passage with 5 multiple-choice questions). Let’s not sacrifice a lot of instructional time for testing. We have such dipsticks for monitoring the learning of foundational skills (e.g., decoding, alphabet knowledge) with younger students, and it would be great to have something comparable for the older ones too.
Unfortunately, reading comprehension is more complicated than that. Reliably estimating the reading comprehension of older students takes a lot more time, a lot more questions, and a lot more text. That’s why typical standardized tests of reading comprehension ask 30-40 questions about multiple texts – texts longer than the ones your district is using.
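To make the test-length point concrete: psychometricians often use the Spearman-Brown prophecy formula to project how reliability grows as a test is lengthened. This formula isn’t from Shanahan’s post, and the starting reliability of 0.40 for a 5-item probe is an assumed, illustrative value – a minimal sketch:

```python
# A hedged illustration of why test length matters, using the standard
# Spearman-Brown prophecy formula. The 0.40 starting reliability for a
# 5-item quiz is a hypothetical value chosen for illustration.
def spearman_brown(r: float, n: float) -> float:
    """Projected reliability when a test is lengthened by a factor of n."""
    return (n * r) / (1 + (n - 1) * r)

r_five_items = 0.40  # assumed reliability of a 5-question probe
for items in (5, 10, 20, 40):
    factor = items / 5
    print(f"{items:>2} items -> projected reliability "
          f"{spearman_brown(r_five_items, factor):.2f}")
```

Under those assumptions, the projections run roughly 0.40, 0.57, 0.73, and 0.84 – it takes something like the 30-40 items of a standardized test before the score is stable enough to trust.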
How many questions does a student have to answer correctly before we decide he or she is doing well? Remember, guessing is possible with multiple-choice questions, so with only 5, I’d expect kids, by chance, to get 1 or 2 correct even if they don’t bother to read the passages at all. There is simply no room in that scenario either to decide that a student is doing better or worse than previously or to differentiate across students. If a student got 2 items correct at the last testing and gets 3 this week, does that mean he showed progress?
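The guessing arithmetic is easy to check. Assuming 4 answer choices per question (the post doesn’t say how many options these quizzes use), the chances work out as in this minimal sketch:

```python
# A minimal sketch of the guessing math described above, assuming
# 4 answer choices per question (an assumption, not stated in the post).
from math import comb

N_QUESTIONS = 5
P_GUESS = 1 / 4  # chance of guessing any one item correctly

def p_exactly(k: int) -> float:
    """Probability of guessing exactly k of the 5 items correctly."""
    return comb(N_QUESTIONS, k) * P_GUESS**k * (1 - P_GUESS) ** (N_QUESTIONS - k)

expected = N_QUESTIONS * P_GUESS  # 1.25 items correct, on average
p_three_or_more = sum(p_exactly(k) for k in range(3, N_QUESTIONS + 1))

print(f"Expected score from pure guessing: {expected:.2f} / 5")
print(f"Chance of 3+ correct by guessing alone: {p_three_or_more:.1%}")
```

Under that assumption, a student who reads nothing averages 1.25 correct and scores 3 or better about 10% of the time – so a move from 2 to 3 items correct is well within chance.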
Reading comprehension question types are not useful for determining instructional needs. Studies repeatedly find no meaningful differences in comprehension across question categories like literal, inferential, or main idea. If a passage is easy for students, they usually can answer any kind of question one might ask about it; and if a passage is hard (in readability and/or content), students will struggle with any of the question types. That means there is no reason either to limit the questions to literal ones or to shift to a different questioning regime. In fact, doing so might focus teacher attention on trying to improve performance with certain types of questions, rather than on decoding, fluency, vocabulary, syntax, cohesion, text structure, writing, and other abilities that really matter.
The measurement of readability or text difficulty is not as specific or reliable as you might think. Look at Lexile levels, one of the better of these tools: texts that Lexiles designate as grade level for high school freshmen are also grade level for students in grades 5-8 and 10-12. This kind of overlap is common with readability estimates, and it suggests that passages judged to be 1200L will differ in the difficulties they actually pose for students. Kids might be more familiar with the vocabulary or content of one text than another, which can lead to dramatic outcome differences from assessment to assessment. That’s why standardized comprehension tests not only pay attention to readability ratings but also evaluate combinations of specific passages to make sure those combinations will provide sufficiently accurate and reliable results.
Things to Try…
I would suggest that you test students twice a year (at the beginning of each semester) with a more substantial, validated reading test. Between those administrations, monitor more closely how students are performing with what is being taught.
For example, one valuable area of growth in reading comprehension is vocabulary. Keep track of what words are being taught in the remedial program and monitor student retention of these words.
Or, if you are teaching students how to break down sentences to make sense of them, then identify such sentences in the texts students are reading and see how well they can apply what is being taught. The same kind of monitoring is possible with cohesion and text structure.
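If it helps to picture the record-keeping, here is a minimal sketch of the kind of vocabulary-retention log described above. Everything in it (the student names, the words, the cycle labels) is invented for illustration, not taken from the post:

```python
# A hypothetical sketch of the suggested record-keeping: log the words
# taught each instructional cycle, quiz students on them later, and
# report the share retained. All names and data here are invented.
taught_words = {
    "cycle_1": ["analyze", "cite", "convey"],
    "cycle_2": ["implicit", "coherent", "derive"],
}

# Words each student answered correctly on a later retention check.
retention_results = {
    "student_a": {"analyze", "cite", "implicit", "derive"},
    "student_b": {"analyze", "coherent"},
}

all_taught = [w for words in taught_words.values() for w in words]

for student, known in retention_results.items():
    retained = sum(1 for w in all_taught if w in known)
    print(f"{student}: retained {retained}/{len(all_taught)} taught words "
          f"({retained / len(all_taught):.0%})")
```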
My point is that since you cannot provide the kind of meaningful close monitoring of general reading comprehension that you would like, monitor instead how well students are doing with the skills and abilities you are teaching – that should provide you with a useful index of progress.