The education field as a whole has acknowledged the importance of understanding and attending to text complexity for all students. At the same time, scholars have articulated the unique features of early-reading texts. Early-reading texts come in many varieties, and authors pay special attention to factors that facilitate comprehension, such as the use of high-frequency words, tightly controlled decoding demands, and repetition and patterning. These unique and varied text types help young readers cultivate a robust set of word recognition strategies and reading skills.
The problem we sought to fix.
Unfortunately, quantitative leveling systems did not adequately account for these unique factors. And while quantitative, algorithm-based systems had limitations, subjective leveling systems based solely on human ratings had drawbacks as well. The authors of the Common Core recognized the need for better quantitative text complexity tools and encouraged organizations like ours to make improvements. And that’s what we did.
The solution.
We sought to address this need for quantitative, early-reading text complexity tools through a research and development project spanning five years. In addressing the need, we incorporated the best of both automated text complexity systems (i.e., text complexity algorithms) and the wisdom of teachers and reading specialists. We collaborated with several prominent early-literacy researchers, and our studies involved both judgments of text complexity by nearly 100 early-reading educators and student performance on assessment tasks designed around authentic early-reading texts. Starting from a strong theoretical frame of what’s important for young readers, we combined our rich experimental results with the latest techniques in computational linguistics, psychometrics, and machine learning.
The result.
The result is that we have advanced the science behind measuring text complexity. Now we are evaluating and measuring text complexity across more dimensions, including syntactic and semantic factors as well as the decoding challenges and the degree of repetition and patterning common in many early-reading texts. In addition to a Lexile measure, new information called “early-reading indicators” is also offered for early-reading books. Early-reading indicators can help identify which aspects of a text may be more or less of a challenge to a reader. For example, a text with low decoding demands (i.e., many easy-to-decode words) could be selected for a student who is ready to apply their knowledge of basic sound-letter relationships and patterns to practice reading books on their own. We are happy to say that over 105,000 books now have early-reading indicators, and more than 81,500 books have updated and more precise Lexile measures.
Lexile measures can now be used to better encourage and accelerate reading growth in the early grades. This innovation prompted a company-wide effort to help our partners implement these enhancements and bring them to the millions of customers they serve.
The question we’ve been getting a lot: How do Lexile measures now compare with Fountas & Pinnell, the leveling system used in many K-5 classrooms?
The two leveling systems are complementary. Lexile measures can supplement and enhance the text selection process for Fountas & Pinnell users. A high correlation exists between the two leveling systems, meaning they tend to rank books similarly in complexity. There is also some overlap in the text characteristics evaluated, and both take into account teacher judgments and text features, though in different ways.
There are also a few differences to note that help make Lexile measures a great supplemental metric. Lexile measures provide more information about a text’s complexity than Fountas & Pinnell levels. In addition to offering greater precision, or resolution, than Fountas & Pinnell levels, Lexile measures are now accompanied by early-reading indicators that can identify which aspects of a text may present more or less of a challenge to a reader. In the K–2 space, there can be a lot of variability in which text characteristics contribute to a book’s complexity. With more precise measures and these early-reading indicators, teachers and reading specialists can gain insight into the different challenges presented by different texts, even at the same Lexile or Fountas & Pinnell level. To use a sports analogy, both leveling systems will get you to the same section of an arena, but Lexile measures will get you to your seat.
Another way Lexile measures can provide additional information is in monitoring student growth. Unlike the Lexile Framework, the Fountas & Pinnell leveling system does not use an equal-interval scale. That is to say, the amount of growth needed to go from level A to level B is not necessarily the same as the amount needed to go from M to N. A Lexile difference of 10L, however, represents the same amount regardless of where it falls on the scale (e.g., 50L to 60L or 650L to 660L). An equal-interval scale, like the Lexile scale, permits meaningful monitoring of growth in reading ability throughout a student’s academic career.
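To make that difference concrete, here is a minimal sketch in Python. The scores and letter levels below are made-up examples for illustration only; they are not real student data, and the snippet is not part of any Lexile or Fountas & Pinnell tool.

```python
# Hypothetical illustration: growth on an equal-interval scale vs. an ordinal letter scale.
# All values below are made-up examples, not real student data.

# Lexile-style scores sit on an equal-interval scale, so a difference of 80L
# means the same amount of growth wherever it occurs on the scale.
fall_lexile = 150     # e.g., 150L measured in the fall (hypothetical)
spring_lexile = 230   # e.g., 230L measured in the spring (hypothetical)
lexile_growth = spring_lexile - fall_lexile
print(f"Lexile growth: {lexile_growth}L")  # 80L, comparable across the whole scale

# Ordinal letter levels (A, B, C, ...) only convey rank order.
# We can count how many levels a student moved, but a two-level jump from
# A to C is not guaranteed to represent the same growth as a jump from M to O.
letter_levels = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
fall_level, spring_level = "D", "F"
levels_moved = letter_levels.index(spring_level) - letter_levels.index(fall_level)
print(f"Moved {levels_moved} letter levels (rank order only, not a fixed amount of growth)")
```

The point of the sketch is simply that subtraction is meaningful on an equal-interval scale, whereas on an ordinal scale it only tells you how many ranks were crossed.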
Want to learn more?
For education companies and publishers who want to learn more about this latest innovation, visit metametricsinc.com/beginning-readers.
For educators who want to learn more about better matching readers to text, visit lexile.com/beginning-readers.