Matching Readers to Text and Instruction

Reading is an experience unique to each individual who embarks on a journey with text. Reducing reading to a series of linear steps, where skill-based instruction becomes the basis for the reading classroom, robs young readers of the experience of discovery and growth in and with text. As educators, we must guard against this happening in our classrooms. As I read over the literature focused on matching readers to text and instruction, four key points come to the forefront of my mind:

  • Interest and readability of text must be considered when selecting instructional materials.
  • The textbook must be considered an additional tool in the content area classroom, not the basis of instruction.
  • Instruction must be student focused, not program focused.
  • Independent reading must be highly valued and treated as a vital component of the reading block.

As we design lessons and gather materials, we must be aware of our students’ reading levels. If content area text is too difficult, students will get hung up on decoding and basic comprehension, never reaching the level of synthesis required to fully understand the text. In an inclusive classroom, where the teacher is responsible for differentiation, a one-size-fits-all text will not work for a diverse group of learners (Allington, 2002). With textbooks typically written at least one or two years above grade level, we need to view the textbook as a resource, not the cornerstone of instruction (Allington, 2002). Supplemental instructional materials that interest students and align with their reading levels set them up for success in the content area classroom. While reading instruction can, and should, occur across all content areas, our focus must be on making students successful with the content itself. That success leads to gains in confidence and reading level, allowing students to move through their zone of proximal development (ZPD) toward higher-level texts. Success breeds more success.

In selecting reading and instructional materials for the classroom, we must consider the students we are teaching, not the program that came in a box (Allington, 2002). Reading programs are full of passages that are of little interest to students and at inappropriate reading levels. In a study of 153 reading programs reviewed by the What Works Clearinghouse, only one program was found to have “strong evidence” that it improved students’ reading skills (Allington, 2013). So why do educators continue to use these basal reading programs, ignoring the students in front of them who are trudging through the text and being taught to hate reading? These programs push the teaching of isolated skills, not the broader act of reading. Lower-level readers are forced to spend more time with worksheets and isolated skill tasks, while higher-level readers are offered more time for independent reading, the one activity that actually does improve overall reading ability (Allington, 2013). This leads to continuous struggles for lower-level readers, while the higher-level readers continue to soar. The reading curriculum needs to contain more meaning-focused lessons for readers at all levels, using interesting and accessible text on each student’s level, not just isolated skill lessons (Allington, 2013). This change will increase the performance and success of all learners.

As we examine the changes needed in the classroom in relation to reading instruction and text selection, one common theme continues to surface: opportunities to read. Allington (2009) argues that through independent reading students can essentially teach themselves all of the major reading components (as cited in Allington, 2013). Why, then, do we limit, or even eliminate, independent reading in the classroom? This element of the instructional day should be given top priority. If we expect struggling readers to improve, we must give them opportunities to practice. We must give readers the opportunity to read without concern for error or “right and wrong” answers. We must teach students that reading is an opportunity for discovery and exploration, a door through which they can learn new things and experience fantastical journeys. Applegate, Quinn, and Applegate (2006) inspire this change in how we approach reading by stating, “to teach literature as a series of questions with right and wrong answers is to treat it as content rather than as a literary work to be thought about and interpreted” (p. 48). Let us teach students how to be thoughtful and use their experiences to interpret what they read, synthesizing it with their schema, to create new thoughts and change the world.

Allington, R. L. (2002). You can’t learn much from books you can’t read. Educational Leadership, 60(3), 16-19.

Allington, R. L. (2013). What really matters when working with struggling readers. The Reading Teacher, 66(7), 520-530.

Applegate, M. D., Quinn, K. B., & Applegate, A. J. (2006). Profiles in comprehension. The Reading Teacher, 60(1), 48-57.

Standard

Informal and Formative Assessments

Teachers are pushing back against the trend of ever-increasing required assessments, but there are options out there that support us in developing targeted lessons based on our students’ needs, options that don’t involve long hours in front of computers answering multiple-choice questions.

Teachers are currently caught in a conundrum. School hallways are filled with the buzz of anti-assessment sentiment, but is it really assessment that’s the issue? Or is the real issue the type of assessment we’re forcing on students? Why use tests that take hours to complete and months to report back, making them stressful for students and useless to teachers in the classroom? All these tests are doing is judging students, and teachers, on a single snapshot of a single day. Informal assessment, the anti-“test,” can be used in the classroom to monitor student progress in a variety of skill and content areas. These assessments provide nearly immediate data, can be tailored to assess specific standards or skills, and take little, if any, time away from instruction. Observation, checklists, rubrics, and self-assessments are all types of informal assessment available to teachers (Fisher & Frey, 2010). Then there’s the argument that we don’t have enough time in our day to do all of these assessments. I argue that we really don’t have time not to assess; it’s just a matter of choosing and using appropriate assessments. Portfolios, another form of informal assessment, provide more thorough and accurate data than standardized assessments (Alvermann, Phelps, & Gillis, 2011). So why are we still spending so much time on standardized assessments? That’s a question for powers above me, but I will continue to use informal assessments in my room as valid and reliable data to guide my instruction.

Along the same lines as informal assessments are formative assessments. These assessments utilize the constant input/output model that is at the heart of effective teaching (Black & Wiliam, 1998). As a longtime believer in the growth mindset and the constant improvement model, I found that Black and Wiliam’s (1998) article made absolute sense to me as a teacher. Why aren’t we using these formative assessments more in the classroom? Why aren’t we involving students in the process, having open discussions, and providing them with a chance to improve before assessing, grading, and knocking them down? If we constantly provide feedback and guidance, students will see their strengths and weaknesses and understand their own unique path toward improvement. We can’t teach by testing; we have to teach by learning. Learning about our students, and learning with our students. This “short-cycle, frequent assessment” (Torgesen & Miller, 2009, p. 31) model provides the basis for a growth mindset classroom: a place where students are invested in their own learning, understanding where they are and where they need to go next. Without formative assessment and open communication with students, all our children will ever see is the end goal looming ahead of them, scary and imposing. Formative assessment breaks their learning into smaller chunks, with us as the cheerleaders along the way saying, “Yes, you’ve got this,” or, “Let’s try this part again.”

Overall, assessment is not the issue in schools today. The issue is the type of assessment being administered and the way the assessment data are being used. Constant testing with unreliable and invalid assessments is a useless waste of classroom time. Informal assessments, informal reading inventories, and formative assessments, however, are valuable, reliable, and justified uses of classroom time. These assessments can become part of instruction in meaningful ways. They increase student ownership of learning and teacher understanding of student achievement and needs.

Alvermann, D. E., Phelps, S. F., & Gillis, V. R. (2011). Content area reading and literacy (6th ed.). Boston: Allyn & Bacon.

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. The Phi Delta Kappan, 80(2), 139-144, 146-148.

Fisher, D., & Frey, N. (2010). Enhancing RTI: How to ensure success with effective classroom instruction and intervention. Alexandria, VA: Association for Supervision and Curriculum Development.

Torgesen, J. K., & Miller, D. H. (2009). Assessments to guide adolescent literacy instruction. Florida Center for Reading Research, Center on Instruction.

Standard

Assessment – Reflection

Assessments are a hot topic in education. Ask any educator about assessment and you’ll need to be prepared for the rant that will follow, or at the very least a massive eye roll. This buzzword, more specifically ‘high-stakes assessment’, seems to have the educational world in an uproar. While the majority of the population doesn’t concern itself with questions of validity and reliability, educators place a heavy weight on these words. Our jobs are on the line because of them. Yet there is no research that supports this situation. If anything, the research supports the opposite: ending our current use of high-stakes assessment. The National Reading Conference policy brief clearly states that “no research has been conducted that demonstrates a cause and effect relationship between increased high stakes testing and improvement in reading achievement scores” (Afflerbach, 2004). So why do we continue?

Even more concerning is the relationship between high-stakes testing and modern curriculum development. Boudett, City, and Murnane (2013) provide evidence of ‘gaming the system’, where schools teach to the test with such fidelity that their scores rise dramatically above grade level. But when the assessment is changed, student scores drop right back down to their actual grade level. How does this information not stop administrators in their tracks? How does this not make state representatives shout from the rooftop of the DOE that something is WRONG?!

Teachers have been saying for ages that expectations are too high when it comes to assessment results. How can a fifth grader be required to reach a grade equivalency level of 6.5 to be considered proficient? It doesn’t make sense, until you look at how schools are using assessments to align curriculum. If it’s not on the test, throw it out. If it’s not part of the report that holds our jobs on the line, stop funding it. This pattern hasn’t led to increased learning; it has led to inflated test scores (National Research Council, 2011). It leads to less teaching and more test prep. The National Reading Conference report points out that “when testing concerns override teacher professionalism, curriculum decisions may be made according to how well reading instructional materials mirror a test format, and not according to accomplished teachers’ knowledge” (Afflerbach, 2004). We’ve become a nation of teaching to the test, because the test rules the nation. Because of these assessments, teachers no longer have the freedom to teach what is best for our students.

The evidence presented on FAIR, DIBELS, and other assessments reveals several shortcomings. There is no single assessment that accurately and fairly assesses students against developmentally appropriate expectations and standards. So, instead of losing large chunks of our vital teaching time to these numerous assessments, policy makers should design or locate a single valid and reliable assessment that accurately measures our students’ mastery of reading skills from a global perspective (Invernizzi et al., 2005).

My favorite piece of writing from the readings came not in a formal piece, but in a sticky note I found attached to the National Reading Conference policy brief. The edit suggestion read, “I think we should include, as an example, that none of these assessments measure the reading that takes place in online environments. Yet, increasingly, this type of reading is important to our lives and will be central to our students’ future. Because of this, classroom reading curriculum has been slow to recognize instruction in online reading skills such as search engine use, critical evaluation of information, or navigation between web resources, reading skills essential to life in a global economy in an age of information” (Afflerbach, 2004). Bingo, my friend, bingo.

Afflerbach, P. (2004). National Reading Conference policy brief: High-stakes testing and reading assessment. Retrieved from https://usflearn.instructure.com/courses/1062264/files/46874487?module_item_id=8549293.

Boudett, K. P., City, E. A., & Murnane, R. J. (2013). Data wise: A step-by-step guide to using assessment results to improve teaching and learning. Cambridge, MA: Harvard Education Press.

Invernizzi, M., Landrum, T., Howell, J., & Warley, H. (2005). Toward the peaceful coexistence of test developers, policymakers, and teachers in an era of accountability. The Reading Teacher, 58(7), 610-618.

National Research Council. (2011). Incentives and test-based accountability in education. Washington, DC: National Academies Press.

Standard