
Neena’s Top Reading Research Picks for May


Welcome to the Reading Research Recap!

I am Dr. Neena Saha, Research Advisor at MetaMetrics and founder and CEO of Elemeno, now a part of MetaMetrics. My focus is bridging the research-practice gap so that educators can access real-time tools to support reading success. To help research inform teaching and learning strategies, I put together this monthly compendium of relevant, must-read research that impacts the reading and learning landscape, offering research highlights in digestible summary slices. I hope the data and findings you see here are useful to you as researchers, educators, and district and edtech leaders.


Assessing an Instructional Level During Reading Fluency Interventions: A Meta-Analysis of the Effects on Reading

Hi everyone! This month I chose a new meta-analysis on instructional level in the context of reading fluency because I hear this term used SO MUCH, but there is still a lot of confusion around it! This paper has implications for matching readers to texts, and it does a great job of providing historical background and highlighting where research does and does not exist. And the results suggest a guideline for teachers when choosing texts to improve fluency.

TL;DR

  • Instructional level is perhaps best thought of as an interaction between a specific student and a specific text.
  • WCPM (a fluency criterion) is still best for screening and progress monitoring purposes.
  • But an accuracy criterion of 93%-97% words read correctly is optimal for fluency growth, instruction, and intervention for struggling/dysfluent readers.

Introduction

  • Fluency is an important skill, and we know reading fluency interventions work (i.e. have small to moderate effect sizes).
  • But there is considerable variability: we don’t really know what makes one fluency intervention more effective than another.

The Importance of Passage Difficulty

  • One factor to investigate is how passage difficulty is determined, as prior research showed that passage difficulty explains 50% of the variability in fluency intervention effectiveness.

Now, you may not think that there are different ways of assessing passage difficulty, but there are!

Aside: Background on Instructional Level & Passage Difficulty

In 1946, Betts coined the term “instructional level” and defined it as a passage where a student could read 95% of the words correctly. Betts later rethought the idea of a specific numerical criterion (95%) and instead pivoted to a qualitative (vs. numerical) criterion. This gave rise to the first informal reading inventories (IRIs): sets of passages representing different grade levels.

  • Informal Reading Inventories (IRIs)
    • IRIs are assessments designed to find the instructional level of a student. Common IRIs include the Basic Reading Inventory [BRI; Johns, 2017], Benchmark Assessment System [BAS; Fountas & Pinnell, 2007] and the Developmental Reading Assessment [DRA; Beaver & Carter, 2006].
    • The author notes, “The term instructional level has become somewhat synonymous with using an IRI to assess students and to place them into a series of books organized according to a purported gradient of difficulty (Schwanenflugel & Knapp, 2017; Shanahan, 2011).”
    • While 43% of teachers recently reported using IRIs, there are a lot of criticisms regarding their reliability, validity, and effectiveness.
  • Accuracy Criteria
    • Soon after Betts created the first IRI, other researchers expanded the accuracy criterion from 95% to 93-97% correctly read words. Below 93% would be a student’s frustration level, and above 97% would be the independent level.
    • Rather than a universally accepted reading level standard, the accuracy-criterion definition of instructional level hinges on an interaction between the student’s skill level and the specific text.
    • The accuracy-criterion definition of instructional level came out of intervention research, as researchers were trying to determine which set of materials could provide students with the appropriate amount of challenge and thus optimize learning.
    • There is ample evidence for the reliability and validity of accuracy criteria (see research on curriculum-based assessment for instructional design).
  • Fluency Criteria
    • Around the same time that researchers were examining accuracy criteria for determining instructional level (see the bullet point above), other researchers published criteria based on the number of words read correctly in 1 minute.
    • If a student read 70-119 words correctly in a minute, they were at their instructional level; below that range was their frustration level, and above it, their independent level. (Both the accuracy and fluency criteria are illustrated in the sketch after this list.)
  • Current Grade Level
    • This one is self-explanatory, though I do think that Dr. Matt Burns mentioned that not all the papers in the sample (remember that it was a meta-analysis) discussed how they determined “grade level.”
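
To make these decision rules concrete, here is a minimal sketch in Python (my own illustration, not from the paper) of how the accuracy and fluency criteria would classify a passage for a given student:

```python
def accuracy_level(words_correct: int, total_words: int) -> str:
    """Accuracy criterion: 93%-97% words read correctly = instructional level."""
    pct = 100 * words_correct / total_words
    if pct < 93:
        return "frustration"
    if pct <= 97:
        return "instructional"
    return "independent"

def fluency_level(wcpm: int) -> str:
    """Fluency criterion: 70-119 words read correctly per minute = instructional level."""
    if wcpm < 70:
        return "frustration"
    if wcpm <= 119:
        return "instructional"
    return "independent"

# Hypothetical student: 188 of 200 words correct (94%), reading at 85 WCPM
print(accuracy_level(188, 200))  # -> "instructional"
print(fluency_level(85))         # -> "instructional"
```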

Rationale

There is little research addressing the validity or reliability of instructional decisions based on the fluency criterion described above (i.e., the 70-119 words read correctly per minute).

Therefore, it remains unknown which method (IRIs, Accuracy or Fluency) is best for students when it comes to determining instructional level and selecting passages to optimize student growth.

Research Question

The main question was: which approach to determining passage difficulty (IRIs, accuracy, fluency, or current grade level) is most effective for optimizing student reading fluency growth?

Sample (of papers)

A literature search yielded 21 studies that conducted a reading fluency intervention and reported the difficulty level of the text (among other inclusion criteria). Seventeen of the studies were with children in grades K-5.

Methods

The effects were aggregated via a random effects meta-analytic model.
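
For readers who want to see what that means mechanically, here is a minimal sketch of standard DerSimonian-Laird random-effects pooling with hypothetical effect sizes and variances; the paper’s actual analysis (e.g., how single-case and group-design effects were combined) is more involved:

```python
import numpy as np

def random_effects_pool(g, v):
    """DerSimonian-Laird random-effects pooling.
    g: per-study effect sizes (e.g., Hedges' g); v: their sampling variances."""
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                              # fixed-effect weights
    g_fixed = (w * g).sum() / w.sum()
    q = (w * (g - g_fixed) ** 2).sum()       # heterogeneity statistic Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(g) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)                  # random-effects weights
    pooled = (w_re * g).sum() / w_re.sum()
    se = (1.0 / w_re.sum()) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical inputs, for illustration only
pooled, ci = random_effects_pool([0.8, 1.2, 0.9], [0.04, 0.09, 0.05])
print(f"g = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```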

Results

This study found that the accuracy criterion (not the IRIs, fluency criterion, or grade level) was best:

“…using the instructional level criterion of 93% to 97% correctly read words led to a large weighted effect g = 1.03 (95% CI = 0.65 to 1.40) that was significantly different (p < .01) from 0 and was the largest effect noted among the four approaches for estimating difficulty.”

Interestingly, prior research has shown that the fluency criterion is best for screening and progress monitoring, but this paper suggests that the accuracy criterion is best for intervention.

Other Results

  • Leveling systems like IRIs had small effects. This aligns with prior research showing a lack of validity of IRIs for instruction or intervention.
  • Fluency interventions like repeated reading and continuous reading work! Repeated reading is effective whether conducted 1 on 1, in dyads, or even classwide.
  • Analyses showed that the fluency interventions were most effective for students with reading disabilities; effects were small for students not at risk and for students at risk.

Limitations

  • The studies mostly involved elementary school students, so results cannot be generalized to other grades.
  • Researchers were not specific in their reporting of criteria for a reading disability.
  • More research is needed for students with average or above-average reading skills.
  • Small sample size (only 21 studies).
  • Single-case study effect sizes were aggregated with group-design studies (and researchers are still figuring out the best way to do this).

Practical Implications

The take-home message is that when selecting passages for fluency interventions with struggling readers, teachers can use the accuracy criterion of 93%-97% words read correctly:

“…practitioners could identify reading passages for reading fluency interventions by recording the percentage of words read correctly and comparing it to the accuracy criterion of 93% to 97%.”
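
To make the arithmetic concrete, here is a quick worked example with hypothetical numbers (mine, not the paper’s):

```python
# A student reads a 150-word passage and makes 8 errors
total_words, errors = 150, 8
pct_correct = 100 * (total_words - errors) / total_words  # about 94.7%
suitable = 93 <= pct_correct <= 97  # True: within the instructional range
print(f"{pct_correct:.1f}% correct; use for fluency intervention: {suitable}")
```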

I also found this important to relay:

“The instructional level continues to be a confusing construct that is often conflated with a measurement system that has been consistently refuted by research (i.e., IRIs). The current data suggest that the instructional level is a construct worth additional research within the context of reading fluency interventions, and that perhaps the instructional level is an interface between individual student skill and text complexity that could be assessed with a sampling of percentage of words read correctly.”

Note About Feasibility of Implementation 

I reached out to the author, Dr. Matt Burns, because I had concerns about the feasibility of implementing this in a classroom. (In an app or digital program, it is easier to build in learning pathways that incorporate specific decision-making criteria, but what are teachers supposed to do when choosing texts in a classroom setting?)

He said that for more typical readers, just let them go and keep an eye on them. The criterion discussed in this paper is really for struggling/dysfluent readers, so you would only have to apply it to about 5-10 (hopefully!) students in a class.


Bonus: Coverage of The Reading League Summit 2024 in San Diego

I was lucky enough to be among a thousand reading and education enthusiasts who convened at the Town & Country Resort in sunny San Diego on April 27th.

Photos from the Reading League Summit 2024 in San Diego

Mission of the Summit: 

The goal of the summit was to share knowledge and expertise on how to help English Learners and Emergent Bilinguals as laid out in this joint statement (developed with respected organizations such as TESOL, WIDA, and the Center for Applied Linguistics). I know very little about this area, so I welcomed the chance to geek out among fellow word nerds.

Keynote: Stanislas Dehaene

The keynote talk was by Stanislas Dehaene, a French neuroscientist and author of the wonderful books Reading in the Brain and How We Learn. He shared so much fascinating data (and beautiful fMRI brain images) from his experiments over the years. Here are some cool things I learned:

  • This blew my mind: every 1st and 2nd grader in France takes a battery of assessments called EvalAide, which tells teachers which students are struggling!
  • He said Whole Language has basically disappeared in France, but balanced or mixed instruction is still widespread.
  • A survey found that the curricula in use still had vestiges of balanced literacy (he used the term “manual”), and that only 4% of teachers used manuals that were truly based on the science and did not contain aspects of balanced literacy.
  • He and a colleague did a study and found that students did better when teachers used the 4% of manuals (aka curricula) that did not have aspects of balanced literacy!! (But…this might not hold for languages like English that have a different orthography!)
  • He doesn’t stay in the ivory tower: his lab developed the Kalulu program to help children learn to read.
  • There is a highly dedicated brain system or network attuned to spoken language that can be seen in infants as young as 2 months old!!
  • This exact same system for spoken language is recycled and repurposed later for reading.
  • There is evidence for multiple dyslexiaS or subtypes of dyslexia:
    • Those who struggle with phonology.
    • Those who struggle with the pairing of graphemes and phonemes.
    • Those who struggle with the visual code.
    • I wish he would have said more about how to intervene for these subtypes (would we do something different or is structured literacy best for all subtypes?).
  • He said something beautiful and brilliant about time being (becoming?) space as we read written language, but I can’t quite do it justice and I forgot to write it down immediately…my bad!

Panel #1: Neuroscience and Research: The Knowledge Base

  • Claude Goldenberg, Nomellini & Olivier Professor of Education, emeritus, in the Graduate School of Education at Stanford University
  • Ioulia Kovelman, Professor of Psychology, University of Michigan
  • Young-Suk Kim, Professor and Senior Associate Dean at the School of Education, University of California, Irvine
  • Magaly Lavadenz, Leavey Presidential Endowed Chair in Moral and Ethical Leadership & Executive Director, Center for Equity for English Learners (CEEL)

My biggest takeaway(s):

  • It is ok to focus on teaching words in isolation (vs. in context) when working on decoding skills.
  • Explicitly point out similarities and differences in letter-sounds between a native language and English in order to help emergent bilinguals learn English.
  • Provide vocabulary (meaning) support when teaching decoding to English language learners.

Panel #2: Assessment: Equitable Assessment Data to Understand EL/EB Student Needs

  • Lillian Durán, Professor, Department of Special Education and Clinical Sciences, University of Oregon
  • Jeannette Mancilla-Martinez, Associate Dean for Academic Affairs and Graduate Education, Associate Professor, Vanderbilt University
  • Deni Basaraba, Instructor, College of Education, Education Policy and Leadership, University of Oregon
  • Linda Siegel, Professor Emeritus, The University of British Columbia
  • Margo Gottlieb, Co-Founder and Lead Developer, WIDA at the Wisconsin Center for Education Research

My biggest takeaways:

  • To the extent possible, assess emergent bilinguals and English learners in both languages!!
  • Assessments should be accurate, reliable (check those technical manuals!), and include content that is culturally relevant and familiar.
  • Assessments should also be fair (free of bias) — there are empirical ways of assessing this!
  • One newish development in testing is the idea of conceptual scoring, an assessment feature that allows EL/EBs to provide a response in either language and have it count (I was surprised to learn that this is not the norm!). A minimal sketch of the idea follows this list.
  • There’s also work being done to create assessments that incorporate conceptual scoring of responses on a single unitary scale (including English + Spanish).
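
As I understand it, the core logic of conceptual scoring is simple; here is a purely illustrative sketch (my own, not any specific assessment’s implementation):

```python
def conceptual_score(response: str, answer_keys: dict) -> bool:
    """A response counts as correct if it matches the answer key
    in ANY of the student's languages."""
    normalized = response.strip().lower()
    return any(normalized in answers for answers in answer_keys.values())

# Hypothetical item with English and Spanish keys
item_key = {"en": {"cow"}, "es": {"vaca"}}
print(conceptual_score("Vaca", item_key))  # True: the Spanish response counts
```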

Panel #3: Connecting Research to Classroom Practice: Using the Knowledge Base and Assessments to Inform Classroom Practice

  • Elsa Cárdenas-Hagan, President of Valley Speech Language and Learning Center
  • Magdalena Zavalia, Co-Founder, Intelexia
  • Laurie Olsen, National Committee for Effective Literacy
  • Jeremy Miciak, Center for the Success of English Learners
  • Francesca Smith, Dual Language and Literacy Instructional Coach

My biggest takeaways:

  • This site is super helpful for MTSS for EL/EBs and has a rubric to help schools judge where they are at in terms of best practices.
  • There was a lot of talk about how hard it is to translate the research and actually implement it…what does an ideal literacy block look like?
  • I loved how Dr. Francesca Smith gave concrete teaching practices like using cross-linguistic references in an explicit and efficient way, allowing for many repeated practice opportunities for phonology and oracy.
  • She talked about ways she adapted and chunked texts, and pre-taught words so that all learners can access grade-level text.

I’m bummed I was not able to attend the 4th panel but I heard it was amazing!

Afterparty

The conference afterparty was hosted by Express Readers ~ creators of fun, engaging decodable books. Express Readers was an inaugural partner in MetaMetrics’ tool, Lexile® Find a Decodable Book, which launched in the spring of 2023.

The party got pretty out of control…just kidding! We all sat around the firepits quietly reading decodables 😉

Photos from the after party at the Reading League Summit 2024

Overall, it was such a great summit, I learned a lot from the panels, met so many new people, and connected with old friends. It was really invigorating to meet so many appreciative readers of the Reading Research Recap!


Additional Research of Interest

Teacher Knowledge, Professional Development, Policy, Etc. 

Dyslexia & Struggling Readers

Alphabetics, Phonics & Phonological Awareness

Fluency

Comprehension

Writing

Other


Want to start receiving monthly notifications for this series?

Please register or sign in to your Lexile® & Quantile® Hub account and join our Reading Research mailing list.