Assessing and Supporting Word-level Reading


In this post, I’ll continue pulling together my notes on what I’m learning about reading. Thank you in advance for reading, sharing your thinking, and helping me to connect with a broader community committed to improving literacy instruction.

I want to first draw everything back to the Simple View of Reading as a friendly reminder that reading is big.

Word-level Recognition × Language Comprehension = Reading Comprehension
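Note that it's a multiplication sign, not a plus sign, and that matters. In Gough and Tunmer's original formulation, each factor ranges from 0 to 1, so a zero on either side produces zero comprehension: a student who decodes flawlessly (1.0) but understands none of the language (0.0) comprehends nothing, because 1.0 × 0.0 = 0.0. Both pieces have to be in place.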

I’ve only been focused on the word-level piece, because that’s the part that was so new to my own understanding. But the language piece is HUGE!

Anyway, in this post, I'll stay on the word-level recognition side and focus on the assessments and programs we might use to tackle just that one piece.

It’s one thing to have a clear theory and a model; it’s another thing to act upon it. This is where the real debates begin, because at some point, the rubber needs to hit the road:

  1. What will we use to screen and diagnose code-based and meaning-based literacy skills?
  2. What will we do in our core instruction to prevent reading difficulties?
  3. What will we do to intervene when core instruction is insufficient?

This means a school needs to have an RTI model of some kind, which is a level of sophistication that, unfortunately, many schools struggle with. There's a lot more that Seidenberg, Kilpatrick, and I have to say on this topic, but in this post, I'll maintain a narrower focus. I'd like to dig further into the RTI piece in a future post (I have some criticisms of the model, though I'm rethinking them in light of some of my new understandings).

Every School Needs a Universal Screener

One of my favorite things about the Advanced Literacy framework that both NY state and NYC have adopted is that it promotes the need to go far beyond the data provided by a state assessment. We need universal literacy screeners: short tests that can quickly identify kids who are behind in either code-based or meaning-based skills, whose results we can then drill into further with diagnostics.

Because the available tests are not always ideally suited for assessing the various components of reading, the best reading assessment tool is the evaluator’s knowledge of research on reading acquisition and reading difficulties.

David Kilpatrick in Essentials of Assessing, Preventing, and Overcoming Reading Difficulties

The problem is that there are no perfect assessments, which is why Kilpatrick puts the emphasis above on the evaluator's knowledge of the research rather than on any one test.

And there are so many levels to reading assessment that it’s almost fractal in nature. You’d need a significant battery of subskill assessments to get a full and accurate picture of any individual child’s reading ability.

Another problem is that time is limited, and students are already forced to take a substantial number of tests. Some are in-house, some are district-mandated, and some are used to evaluate teachers.

Ultimately, a school must make sense of them as best it can. This is where the Simple View of Reading really comes in handy: different assessments provide you with different kinds of information.

There’s thankfully a lot of great resources in determining what screeners your will use. The Gaab Lab at Boston’s Children Hospital has an extensive compilation of screeners here.

Assessments of Phonological Awareness

Kilpatrick recommends using both the PAST and the CTOPP-2 as further diagnostics after a universal screener. But I recommend using only the PAST, because it's free, versus $347 for the CTOPP-2 kit. Why is the PAST free? Because Kilpatrick developed it and publishes it for free here: https://www.thepasttest.com/ He provides instructions on the site as well. Pretty darn cool.

I’ve begun piloting the use of the PAST at a few schools I support, and it’s been pretty eye-opening to see just how much need there is with phonological awareness in the students I’ve tested. I’ve administered it to an 8th grade self-contained class, and all of the students had phonological deficits — some at the most basic of levels. One student struggled to say the word “fantastic.” He couldn’t get that last syllable, even when I slowed it down and repeated it. 8th grade.

This has only given me a greater sense of urgency about figuring this out.

The other thing I noticed is that the person administering the PAST really has to know their phonemes. It's surprisingly hard to do well. To get an accurate gauge of student ability, you have to deliver the instructions swiftly and precisely. If you slow down or stumble when saying, "Say guide. Now say guide . . . but instead of . . . /g/ say /r/," you can easily tax the student's working memory, and they'll forget which word they're supposed to manipulate while attending to the phonemes you're saying.

When training teachers to administer the PAST, I first have to ensure they can pronounce the phonemes accurately, and then deliver the tasks with swift pacing. This takes practice!

So my advice is to practice delivering the PAST with someone else, multiple times, before you administer it to a student.

Check out my new Resources page for a couple of trackers you can use once you’ve administered the PAST.

Assessments of Phonics Skills

Kilpatrick recommends using the TOWRE-2 Phonemic Decoding Efficiency subtest and the KTEA-3 Nonsense Word Decoding subtest. The problem is that these are subtests within larger kits, and the kits are expensive. If you've got a school psychologist in your building who uses them and can lend you a hand, that's great.

Protip: “Nonsense word tasks appear to be the best way to evaluate a student’s phonics skills. In essence, all unfamiliar words a student encounters are functionally ‘nonsense’ words until they are correctly identified. . . . Timed nonsense word reading, such as in the TOWRE-2 and the KTEA-3, is arguably a better assessment of a student’s cipher skills than the traditional, untimed nonsense word reading tasks.  . . . It is recommended that any timed nonsense word reading task be administered after an untimed task, and not before.”

Though Kilpatrick recommends these normed assessments, he does acknowledge that they “do not provide much information about the specifics of what elements of phonics skills are weak or missing. By contrast, there are many criterion-based assessments of very specific elements of phonic knowledge. Some are commercially available assessments and others are free online. These criterion-referenced assessments will index the particular letter-sound combinations that the student knows, such as the various letters, blends, digraphs, and diphthongs, which can aid instructional planning.”

So my (admittedly amateur) advice? Normed assessments are great if you can afford them. But you can use something like the CORE Phonics Survey, the DIBELS Nonsense Word Fluency tasks, or the Ruth Miskin Nonsense Word Test (all available for free). I also just got an OG (Orton-Gillingham) phonics screen from a colleague, and it was really short. Please let me know what else you might recommend.

I’ll stop here.

There’s much more to talk about with assessments for word-level reading, but I’ll stop here. Even out of these two, phonemic awareness and phonics, I’ve elected to only focus on one — phonemic awareness. Why? Because if Kilpatrick and Seidenberg are right, this is the core area of deficit that causes word-level reading gaps. And because I’m just trying this out and seeing what kind of practices and systems I can support a school in developing that are sustainable and scalable, and you have to start somewhere.

Even just administering the PAST is a much bigger endeavor than it seems at first glance. You need to train and practice with it. Then you need to test each student individually, in a space quiet enough for you to be heard.

And then you need to figure out how to provide intervention in a consistent and effective way. From the first set of data I collected last week, I can see this will be more complicated than I thought. Each student is at a different level of phonemic awareness, so how can we group them strategically while still addressing each student's needs? (One way to at least handle the bookkeeping is sketched below.)
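For what it's worth, the bookkeeping half of the grouping problem is straightforward. Here's a minimal sketch in Python, assuming you've recorded each student's lowest unmastered PAST level on a tracker; the student names and level labels below are made up for illustration, so substitute the actual labels from your score sheets.

```python
from collections import defaultdict

# Hypothetical tracker data: each student's lowest unmastered PAST level.
# Names and level labels are placeholders, not real data.
roster = {
    "Student A": "Syllable",
    "Student B": "Onset-Rime",
    "Student C": "Syllable",
    "Student D": "Basic Phoneme",
    "Student E": "Onset-Rime",
}

# Bucket students by their first unmastered level, so each small group
# shares a common starting point for intervention.
groups = defaultdict(list)
for student, level in roster.items():
    groups[level].append(student)

for level, students in sorted(groups.items()):
    print(f"{level}: {', '.join(students)}")
```

Of course, this only tells you who shares a starting point. It doesn't answer the harder instructional question of how to serve each group well once they're formed.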

Help! If you’ve used Kilpatrick’s Equipped for Reading Success program, especially with older students, any advice is much appreciated.