Supporting the Development of Clear and Coherent Literacy Instruction in Schools


I spent some time this summer drafting a policy proposal for the P2Tomorrow competition, mostly as an exercise to sharpen my own thinking around issues I’ve seen with literacy. Thanks to some great feedback from some very smart people (if you are reading this and you are one of them: thank you!), I am proud of the final result. I didn’t win, but I don’t feel so bad about that since the winners are a truly diverse and amazing collection of ideas (see the list of winners and their ideas here).

So I’m sharing my proposal with you. Please share if you find these ideas useful.

Supporting the Development of Clear and Coherent Literacy Instruction in Schools

The Problem with Literacy: It’s Not Just ELA

Is literacy a subject, or a whole school endeavor?

While defining “literacy” is tricky, especially in a rapidly changing society, most would include in their definition the ability to read and think critically and to communicate effectively. Such literacy is not developed haphazardly nor solely within one subject. It requires a school to work cohesively across classrooms to develop shared expectations, content, and practices.

Yet states label Grade 3-8 literacy assessments as “English Language Arts,” and accountability thus falls primarily on the shoulders of one content area: the ELA department. In effect, ELA is reduced to the practice of generic and shallow reading and writing skills as preparation for state assessments. Reading scores on both national (NAEP) and international (PISA) assessments have flatlined for two decades. One reason is that most students receive only scattered exposure to the academic language and conceptual understandings gained from school-wide engagement in a coherent set of literacy practices.

Though the Common Core Standards attempted to address this disconnect by promoting literacy standards for ELA and for History/Social Studies, Science, & Technical Subjects, a misconception remains in the field that the recommended “balance between informational and literary reading” should be achieved solely within ELA, rather than across those other content areas. This has led some educators to believe literature should now rarely be taught, a misreading reinforced by state ELA assessments skewed towards nonfiction passages.

This narrowing of the curriculum has been widely recognized since 2001. ESSA sought to rectify this by redefining what is meant by a “well-rounded education” and including more subjects beyond the “core academic subjects” of the original ESEA legislation. ESSA also allows Title II funding to be used to help teachers “integrate comprehensive literacy instruction in a well-rounded education.”

Yet thus far states have been largely unable to clarify what it means to teach literacy coherently and effectively at the ground level. Some school leaders and teachers remain misinformed about the key shifts of their own state standards, and confusion about the meaning of literacy and its relationship to ELA and other subjects has led to a wide variety of pedagogical approaches and curricula of variable quality, complicated by layers of often contradictory state and district policies and initiatives.

A growing recognition of the importance of curriculum and the need for more effective resources is promising, but solutions must go far beyond the evaluation and adoption of higher quality curriculum. A school may adopt standards-aligned, high quality curriculum for various subjects but remain completely incoherent. What is needed are consistent and ongoing processes for collaborative planning and reflection on curriculum and literacy practices across a school.

What State and District Leaders Can Do

How can state and district leaders support school teams in developing, reflecting on, and sustaining processes that will promote literacy coherently across a school?

There are four moves that policy leaders can make:

  • Redefine literacy
  • Clarify expectations for school-wide processes for collaborative planning and reflection on literacy content and practices
  • Create a process for surveying educators and the wider public on what texts should be selected for literacy assessments, and publish that list in advance of each school year
  • Promote team — rather than individual — accountability for results on literacy assessments

Step 1 We have to begin with a redefinition of what we mean by literacy.
The ESEA, as updated under NCLB and ESSA, requires states to assess “reading or language arts” annually in grades 3-8. Despite ESSA’s expanded definition of a “well-rounded education,” states continue to narrowly label their assessments as subject-specific ELA (46 out of 50, according to my count). Only 6 states mention the word “literacy” in their assessment titles.

It may seem like a small thing, but relabeling state assessments as literacy assessments, rather than ELA, would send a clear signal that literacy is not confined to a single subject. This could initiate a state-wide dialogue about what literacy means as a whole school endeavor.

Step 2 As a part of that dialogue, expectations should be developed for what school-level processes will support the development of shared, high-quality literacy content and practices. As a model, the International Baccalaureate standards for curriculum provide guidance for the collaboration and discussion expected between all teachers within a school. By establishing clear criteria for ongoing school-based reflection and curriculum alignment, state and district leaders can promote the idea that curriculum is dynamic and constantly in development, rather than a static item that is purchased and put in place.

Step 3 To further foster an innovative school-wide focus on literacy improvement, the state could engage multiple stakeholders in the cross-curricular selection of texts that would appear on assessments the following year. Involving educators and the wider public in this process, in partnership with the assessment vendor, would cultivate greater focus, clarity, and transparency about what is taught and assessed. Furthermore, this could help level the playing field for students who need more exposure to the academic vocabulary and background knowledge required for comprehension of the selected texts and topics.

Step 4 Accountability for literacy assessments could then shift from resting solely on ELA departments to include other subjects, resisting the narrowing of curriculum that is so pervasive. One state, Louisiana, has already taken a bold step towards this by piloting assessments that blend social studies and ELA, and that assess books kids have actually studied, rather than random passages.

Such measures signal to schools that teaching literacy is the responsibility of a team, and can do much to counteract the prevailing headwinds of narrow and shallow test prep.

Anticipated Outcomes

What could we expect as a result of these moves?

Let’s consider a school representative of our current situation.

MS 900 is a public middle school in an urban district. The school has an alternating schedule for reading and writing, using two separate and unaligned ELA curricula. The ELA teachers complain about the complexity of the writing program and the lack of professional development. Students complain about boring instruction. Grade-level ELA and math teams meet twice per week, and the social studies and science teams meet once per week. According to the state’s teacher evaluation system and testing data, instructional quality varies widely across the school, with a few effective teachers, two highly effective teachers, and the rest developing.

Step 1 At a district meeting, the MS 900 staff learned about a new state initiative setting the expectation that a whole school should work together to teach literacy, and that tests would reflect this. The administrators and teachers considered how schedules would need to change to give cross-curricular teams opportunities to meet regularly to discuss and plan for this new conception of literacy.

Step 2 Grade-level teams at MS 900 were rescheduled to meet three times a week, and each departmental team once a week. The school’s support organization introduced protocols for teams to share and discuss the content and practices currently used across different classrooms. Grade-level teams also examined student work and discussed common approaches to targeting student literacy needs. Meanwhile, the ELA department determined that reading literature and writing narratives and poetry had been neglected for too long, and discussed with their grade-level teams how strategies for reading and writing informational texts could be shared across the grade. The SS and science departments highlighted strategies specific to their subjects, while sharing topics and themes that could be developed across the grade. The teachers with more effective practices began to be recognized by their colleagues for their expertise, and other teachers requested to visit their classrooms to learn.

Step 3 When the new state survey for text selection opened up in the next year, both grade-level and departmental teams discussed which texts and topics were critical for meeting state standards, for teaching their students about the world, and for providing texts and topics that were relevant and engaging. Each team came to a consensus and submitted their selections. When the state published the texts, teachers were excited to see some of their choices reflected on the list, as well as to be introduced to new literary and nonfiction texts they hadn’t read yet but that were highly rated. Teams began planning how they would incorporate study of the selected texts into their shared curriculum.

Step 4 After two years of this process, when the state introduced new accountability measures for schools based on literacy results, with weighting shared across ELA, social studies, and science teachers, MS 900 teachers felt prepared for the challenge, and were even eager to view the results and item analysis so they could figure out how to work together to improve their students’ literacy abilities. Imagine that.

References

1 Cambridge Assessment (2013) “What is literacy? An investigation into definitions of English as a subject and the relationship between English, literacy and ‘being literate’: A Research Report Commissioned by Cambridge Assessment.” http://www.cambridgeassessment.org.uk/Images/130433-what-is-literacy-an-investigation-into-definitions-of-english-as-a-subject-and-the-relationship-between-english-literacy-and-being-literate-.pdf

2 Wexler, N. (2018) “Why American Students Haven’t Gotten Better at Reading in 20 Years.” The Atlantic. https://www.theatlantic.com/education/archive/2018/04/-american-students-reading/557915/
Serino, L. (2017) “What international assessment scores reveal about American education.” Brookings Institution, https://www.brookings.edu/blog/brown-center-chalkboard/2017/04/07/what-international-assessment-scores-reveal-about-american-education/

3 Shanahan, T. (2013) “You Want Me to Read What?!” Educational Leadership, ASCD. http://www.ascd.org/publications/educational-leadership/nov13/vol71/num03/You-Want-Me-to-Read-What%C2%A2!.aspx

4 King, K.V. and Zucker, S. (2005) “Curriculum Narrowing – Pearson Assessments.” 18 Aug. 2005, http://images.pearsonassessments.com/images/tmrs/tmrs_rg/CurriculumNarrowing.pdf

5 Workman, E. and Jones, S.D. (2016) “ESSA’s Well-Rounded Education.” Education Commission of the States. https://www.ecs.org/essas-well-rounded-education/

6 Kaufman, J., Lindsay, T., and V. Darleen Opfer. (2016) “Creating a Coherent System to Support Instruction Aligned with State Standards: Promising Practices of the Louisiana Department of Education.” The Rand Corporation, https://www.rand.org/pubs/research_reports/RR1613.html
Kaufman, J. & Tsai, T. (2018). “School Supports for Teachers’ Implementation of State Standards Findings from the American School Leader Panel.” The Rand Corporation, https://www.rand.org/pubs/research_reports/RR2318.html

7 Whitehurst, G.J. (2009) “Don’t forget curriculum.” Brookings Institution, https://www.brookings.edu/research/dont-forget-curriculum/
Chingos, M.M. and Whitehurst, G.J. (2012) “Choosing blindly: Instructional materials, teacher effectiveness, and the Common Core.” Brookings Institution, https://www.brookings.edu/research/choosing-blindly-instructional-materials-teacher-effectiveness-and-the-common-core/
Kane, T.J. (2016) “Never judge a book by its cover – use student achievement instead.” Brookings Institution, https://www.brookings.edu/research/never-judge-a-book-by-its-cover-use-student-achievement-instead/
Steiner, D. (2017) “Curriculum research: What we know and where we need to go.” StandardsWork, https://standardswork.org/wp-content/uploads/2017/03/sw-curriculum-research-report-fnl.pdf
Chiefs for Change (2018) “Statement on the need for high-quality curriculum.” http://chiefsforchange.org/statement-on-the-need-for-high-quality-curricula/

8 International Baccalaureate (2014) “Programme standards and practices.” https://www.ibo.org/globalassets/publications/become-an-ib-school/programme-standards-and-practices-en.pdf

9 Louisiana Department of Education (2018) “Louisiana ESSA Innovative Assessment Pilot First to Receive Federal Approval.” https://www.louisianabelieves.com/newsroom/news-releases/2018/07/27/louisiana-essa-innovative-assessment-pilot-first-to-receive-federal-approval


Smorgasbord: NY State Test Results, Incoherency, and Teacher Shortages

NY State test results have been released: trends are positive

This year’s tests can actually be compared directly to last year’s, so inferences are slightly more valid. Statewide, ELA proficiency went up 1.9 points and math 1.1.

It will be interesting to see what narratives spring out of this. Even more interesting will be how anti-charter constituents spin the positive results from charters.

Look for all sides spinning these results in the way that suits them best.

State Education Department Releases Spring 2017 Grades 3-8 ELA and Math Assessment Results, NYSED

Speaking of measurement: How can we measure SEL?

Some interesting suggestions here from a recent design challenge:

  1. How quickly kids answer questions on an online test (answering too quickly suggests less self-control/engagement)
  2. Asking kids questions about a video to assess their perspective-taking abilities

Building a Modern Marshmallow Test: New Ways to Measure Social-Emotional Learning, EdWeek

It should go without saying that laptops alone do not a quality education make

You know, like, how are you actually using the laptops?

Do Laptops Help Learning? A Look At The Only Statewide School Laptop Program, NPR Ed

How we teach history depends on where we teach it

I’ve argued before that one of the biggest problems with what we teach students across our nation is that it’s completely incoherent, and we do little to nurture a collective sense of values, knowledge, and civic engagement.

Here’s that problem in action:

Virginia’s standards of learning for U.S. history to 1865 include “describing the cultural, economic and constitutional issues that divided the nation” and “explaining how the issues of states’ rights and slavery increased sectional tensions.” Alabama fifth-graders “identify causes of the Civil War from the Northern and Southern viewpoints.”

Contrast that with Delaware, where school districts set their own curriculum but a syllabus for the eighth grade suggesting what might be covered during instruction says that abolition meant that the American people could for the first time “seriously claim to be living up to their commitment to the principle of liberty rooted in the American state papers.”

In Michigan, curriculum also is decided locally, though the state’s social studies standards for the Civil War and Reconstruction in eighth grade include the instructions: “Explain the reasons (political, economic, and social) why Southern states seceded and explain the differences in the timing of secession in the Upper and Lower South.”

Civil War lessons often depend on where the classroom is, Associated Press

Teacher shortages in high needs areas, such as SPED and math, with no end in sight

One of the suggestions here for addressing this makes a lot of sense to me:

“Make teacher certification national instead of state by state. Prospective teachers must pass an exam specific to the state they want to work in. But if a teacher wants to move from, say, Pennsylvania to California, they can’t immediately apply for jobs there. By having a national certification exam, teachers would have more mobility to go where they’re needed.”

Schools throughout the country are grappling with teacher shortage, data show, CNN

One way of addressing teacher shortages in SPED: draw from the paraprofessionals

They’re already in the field. Make it easier for them to transition into teaching.

Makes sense to me. But one thing to be aware of: paras have great experience in managing behaviors and working with kids, but may not have a strong background in content.

Which is why having a strong curriculum, and departmental teams that can support adaptation and implementation of that curriculum, is so critical.

With principals in ‘crisis mode,’ new Washington state law taps into thousands of potential teacher recruits, Seattle Times

Success can’t be measured by one or two numbers

“Whenever you make huge decisions about complex situations based on one or two numbers, you’re headed for disaster — especially when those numbers can be gamed.”

—Mark Palko and Andrew Gelman, “How schools that obsess about standardized tests ruin them as measures of success” on Vox

We’ve questioned Success Academy’s “success” on this blog before. These statisticians bring a new lens to that question.

I don’t want to denigrate the good work that Success Academy teachers and students are doing. There are practices and systems well worth replicating and investigating in these schools. But Eva Moskowitz’s political framing and marketing of her schools as the solution to poverty is problematic.

College and Career Ready? Maybe neither

Last week, I wrote about how NY was moving to lower high school diploma expectations for students with disabilities. Since writing that post, the NY Board of Regents has voted the change into law, effective immediately, which has created some confusion for principals.

I’ll admit I know little of the landscape of NY high school exit requirements, since I’ve spent my career at the elementary and middle school levels. What remains unclear to me is what a “local diploma” really means, and whether it connects to a viable career, as some advocates for students with disabilities say it does (as reported in this Chalkbeat piece). I’m open to being further educated on this, if anyone out there wants to school me. But right now it seems to be a mechanism for diminished expectations for some students, while enabling adults to claim higher grad rates.

Chalkbeat reporters Alex Zimmerman and Annie Ma further report that “Todd Kaminsky, a state senator who pushed for the new graduation requirements, said the change isn’t about watering down standards, but paving the way for more appropriate, “project-based” measures for students who struggle to meet graduation requirements.”

It’s also unclear to me how reducing requirements for students with disabilities connects to “project-based” measures, as this is not an explicit component of the law itself, which you can view an overview of in this document provided by NYSED. I’m all for performance-based assessment (which is maybe what Kaminsky meant to refer to—to my knowledge, project-based learning is a pedagogical strategy, not a form of assessment), but utilizing PBA does not require lowering expectations. If these supplanted the traditional Regents exams, I’d be all for it. But I still wouldn’t stand by reducing expectations for students with disabilities.

On Twitter, The74’s Matt Barnum challenged my thinking on high school diploma requirements:

His post provides an overview of research suggesting that stringent high school diploma requirements may deliver little of the expected benefit (increased academic achievement), while they can have many unintended downsides, such as increases in dropout and incarceration rates.

I find this research compelling and a fitting rebuttal to the imposition of high standards without compensatory attention to providing alternative options.

But I still don’t think lowering expectations for an academic diploma for some, or any, students is the answer. A high school diploma should signify that a student is prepared to enter college.

Not all students are prepared to enter college, whether due to ability or interest. However, all students could be better equipped to begin a career.

The greatest underreported story of last year, in my opinion, is that dramatically greater numbers of students are now failing the GED. This is far more problematic than students failing to obtain a HS diploma.

Couple this with the general dearth of well designed and funded vocational programs and opportunities in the US.

Over in Kentucky, however, there is a more sane and equitable approach that does not require diminishing expectations, as Emmanuel Felton reports. In KY, they are building two distinct tracks for what it means to be “college” and/or “career” ready, and this makes a lot of sense to me. Instead of devaluing a high school diploma just to let states claim higher graduation rates, we should be investing in alternative pathways to a career that are both viable and rigorous.


My current views on testing, in answer to my past views on testing

While up in Albany a few weeks ago, I was interviewed by someone from NYSED about what I might say to parents who are considering “opting out” their child from state testing. You can view the video here*.

Someone on Twitter, “WiffleCardenal,” voiced a critique of the video, contrasting it with things I’ve said in the past on testing. In fact, they even tweeted quotes of my own words! I deeply appreciate that someone out there is actually listening, and willing to take the time and effort to hold me accountable to them. I have elected to respond here, since Twitter isn’t the greatest venue for nuanced discussion, especially at the end of a long day, and I also hate typing things on my phone.

This is in reference to a live chat I did back in 2012 on The Nation‘s website with journalist Dana Goldstein and educator Tara Brancato. Have my views shifted since then? I would say they have in some ways.

You know, honestly, they’re not as terrible as I thought back then. I proctor these tests each year and go through the experience of answering the questions along with my students. The questions are often cognitively demanding and require multiple reappraisals of the text in question. A few of them are duds, certainly, but having tried to write many of my own text-dependent questions since then, I’ve come to appreciate a well-written multiple choice question. Check out this post from Joe Kirby (UK educator) on the rationale for using multiple choice questions for assessment.

Unfortunately, this continues to hold true. In reaction to this, the Center for American Progress recently created a “testing bill of rights” to advocate for better aligning tests with a more meaningful purpose.

This doesn’t mean, however, that I’m opposed to having test scores factor into my own evaluation or my school’s evaluation. When scores are considered over multiple years, I think they can be an important and useful measure of teacher effectiveness. But they are extremely variable, so I would only want them to be considered alongside other data that can provide adequate context.

One of the things I’ve become more aware of over time is that while our testing and evaluation schemes are extremely problematic, if we look at the big picture, accountability and testing do bring transparency to serving populations of students that were traditionally ignored. No Child Left Behind was certainly faulty and overzealous policy — but it also brought attention to holding school districts accountable to serving students with disabilities and other underserved populations based on data. This was entirely new, and it has raised awareness.

This is why the NAACP, the National Disability Rights Network, and other national civil rights groups oppose anti-testing movements.

Yes, I continue to believe this. Test measures are only one source of data that need to be coupled with qualitative observational data and other forms of understanding. Fortunately, I do feel like our focus, at least in NYC, has shifted to better match this understanding.

To give further context on my statements on the NYSED video, I was speaking about how I use testing data, which I do every week when developing IEPs for my students with disabilities. I compile all information I have on a student, including multiple years of state test data, in-house assessment data, such as reading, writing, and math scores, GPA, attendance, psychoeducational evaluations, social histories, etc. When viewed all together, in tandem with teacher observations and student and parent interviews, I find aggregate state testing data useful!

So it’s important to understand I’m not advocating now and never have advocated for a state test score as a singular reference point to judge myself or a student. But when viewed with appropriate context, I do find state testing data to be useful. (More on how I use that to develop IEPs here.)

No, unfortunately. While I do think that test scores should factor into an account of an individual teacher’s effectiveness (only in aggregate and when considered in terms of growth, not proficiency), we’re creating incentives for competition, rather than collaboration.

If I could set the rules for how we use test scores for accountability, I would do something kind of radical: I would hold all grade-level teachers accountable for student scores on literacy tests. And I’d stop labeling them “ELA” tests and call them “literacy” tests. Why? Because if we are honest about what we’re really testing, we’d acknowledge that the knowledge required to understand complex texts comes not solely from ELA, but also from science, social studies, music, art, and so forth. (More on my argument on this here.)

Furthermore, I’d try to better level the playing field for all students by requiring test makers to broadcast one year in advance which texts would be tested (not specific passages, just the general title/author). I would allow parents and educators an opportunity to vote on which texts they wanted tested that year as well to make it more reflective of current interests. The reason I would do this is that this would provide an opportunity for all students to build up the requisite vocabulary and background knowledge to access a text. Right now we just give them random texts, as if every child will be bringing equivalent knowledge and vocabulary to them, which is false.

Yes, unfortunately this continues to hold true in too many schools. But this is also why I have been a consistent supporter of Common Core standards, which have become synonymous with testing in some people’s minds. Yet the Common Core standards provided us an opportunity to move away from test prep, because they are fundamentally about building student knowledge and academic vocabulary through engagement with rich and complex texts — this is the exact opposite of test prep!

This speaks to the problem of making state tests so high stakes, and why we need multiple measures, such as direct observation, to hold schools accountable. It is also why I would advocate for the seemingly radical measure, as per above, of communicating which texts will be assessed each year, so that “test prep” would simply mean reading, studying, and discussing the rich texts selected for that year’s assessment.

Yes, it can be inhumane when a student is several years behind in reading ability or struggles in coping with anxiety and stress.

While computerized testing brings a whole new set of problems, I do believe we should move in this direction, because with computerized testing, we can use adaptive testing that can better scale to meet a student where they are. Otherwise we end up punishing students who are struggling, for whatever reason. Unfortunately, the needs of students with disabilities never seem to be factored into test design except as a final consideration, rather than from the ground up.

But there’s another side to this, too. I think we have to ask ourselves, as a teacher, a school, and a system, how do we prepare all of our students to be able to engage with a challenging text independently? And in what ways are we sequentially building their knowledge, skills, and vocabulary to prepare them to do so? It is in failing to do this systematically and adequately that we fail the students who most need those skills and knowledge.

Pearson is out of the picture, in case you didn’t know. I have no idea what Questar tests will be like, though I imagine they will be comparable.

From what I’ve heard, PARCC assessments are far superior to the cheaper assessments NY decided to get from Pearson. I think we get what we pay for, and if we want better test design, we have to be willing to fund them.

Personally, I think that if we’re going to use tests purely for accountability purposes, then we could administer them every two or three years instead of every year to save money, and they would still serve that purpose.

What would be awesome is if we could move more towards performance-based assessment. There’s a great article on them in the most recent American Educator. This seems like the right direction to go in if we are truly interested in assessing the “whole child.”

Well, I don’t know if all of this fully says everything I would like to say about testing, but I’m seriously tired after a long week, so this will have to do.

WiffleCardenal, whoever you are, thank you for holding me accountable, and I welcome continued critical dialogue on these issues.

* This was after a long day of a train ride from NYC and meetings with legislators, so I apologize for my shiny face. Won’t apologize for the winter beard, however. And no, I was not paid for that interview nor given a script. As ever, I speak my own mind (or so I like to think. Certainly let me know if it ever seems like I don’t).