Choosing Assessments of Student Learning that Matter and Match

State boards of education today mandate that administrators use “multiple measures” of student progress to generate an overall rating of a teacher’s effectiveness but often leave the specific measures up to educators. How we select and match those measures is important. We need to choose measures that:

  • Matter—our goals determine our focus
  • Match our values—assessments drive what counts as improvement
  • Contrast—each measure captures something unique about student learning

The subtle difference between student achievement and student growth makes a difference in what counts as success. Growth is relative, achievement is absolute, and we need to consider both. For example, one year I worked with a ninth grader newly arrived in the U.S., an emerging bilingual speaker who was just learning to read in English (and had not learned to read in her first language). Though she progressed steadily through the pertinent guided reading levels and demonstrated progress on formative assessments, her end-of-year score on the multiple-choice screening was lower than her pretest. Perhaps she understood more of the questions and made fewer random guesses than she had in the fall, or maybe she rushed through. In any case, without the growth I had documented throughout the year, this student’s year of study would have been pronounced a loss and I would have been rated ineffective.

This student’s case is an outlier. But outliers are real, and so are their consequences for teachers’ overall ratings. Therefore the various measures used in a teacher’s evaluation should contrast one another—balancing extremes and filling the gaps that any single assessment can so easily leave. Selecting contrasting assessments also allows us to focus on some of the harder-to-measure but much more important areas of literacy growth that are ignored when we measure only relative growth or absolute achievement, not both.

A goal like increasing comprehension may resonate with our vision of literacy learning, but if the measures used to mark progress toward that goal limit what counts as comprehension, our focus narrows, as does what counts as success. Still, teachers and administrators are often under enormous pressure to select measures that are standardized, shared, or otherwise normed in order to limit subjectivity in the judgment of how much students are learning.

The good news is that after the 2009–11 media frenzy about value-added measurement, specifically its public display in places like Los Angeles and New York, there has been a general acceptance of the idea that no single measure is appropriate as evidence of learning when evaluating teacher effectiveness. Based in part on the results of the multimillion-dollar Measures of Effective Teaching (MET) Project, states and districts across the country are combining imperfect measures in hopes of arriving at a more representative and reliable composite score.

Some measures will likely be decided for you, but you should be able to choose at least one measure for tracking your goals and your students’ achievement. Don’t be afraid to measure exactly what you care about, even if the assessment isn’t commonly used. If your goal is to increase your students’ reading volume, use reading logs and conferences to track how much they’re reading. Reading volume is a research-based measure with which to rate literacy growth. For example, you might set a goal that students in a reading support group will double the time they spend reading self-selected texts at or near their independent reading level. This could be measured by the number of minutes they spend reading independently or by the number of pages they read. The time option is more flexible; the pages option is easier to confirm and track. Given the average number of words they read per minute and the average number of words on a page of texts at their level, you can even calculate the number of words they are exposed to weekly or monthly, making their cumulative growth ever more tangible.
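The word-exposure arithmetic above can be sketched as a quick calculation. The rates used here (words per minute, words per page) are illustrative assumptions, not published norms; you would substitute averages appropriate to your own students and texts:

```python
# Estimate weekly word exposure from reading time or page counts.
# Both rates below are illustrative assumptions, not norms.

WORDS_PER_MINUTE = 120   # assumed average reading rate at the student's level
WORDS_PER_PAGE = 250     # assumed average words per page for texts at that level

def words_from_minutes(minutes_per_week: int) -> int:
    """Weekly word exposure estimated from independent reading time."""
    return minutes_per_week * WORDS_PER_MINUTE

def words_from_pages(pages_per_week: int) -> int:
    """Weekly word exposure estimated from pages read."""
    return pages_per_week * WORDS_PER_PAGE

# A student who doubles weekly reading time from 50 to 100 minutes:
print(words_from_minutes(50))   # 6000 words per week before
print(words_from_minutes(100))  # 12000 words per week after
```

Either input works; the point is that doubling time (or pages) doubles estimated word exposure, which makes the growth goal concrete for students and evaluators alike.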

Similarly, if your goal for students is to increase the motivation and engagement that will prompt them to read more, measure motivation and engagement! We tend to think of motivation and engagement as fuzzy social-psychological constructs instead of concrete skills that are taught and learned, but there are standardized measures for both (see the search terms below). In whatever way you conceptualize motivation and engagement, they are massively powerful predictors of reading success and just as (if not more) important to assess than any individual skill. Choose the measures that allow and encourage you to focus your instructional attention on what you think matters most for your readers and writers.

If You’re Being Evaluated On / Make the Results More Meaningful by Including

  • Your students’ scores on a test of fluency or their ability to read nonsense words / Evidence of their growth in reading engagement. For great resources, do an online search with the following terms: “Reader Self-Perception Scale,” “Motivation to Read Questionnaire,” or “Motivation to Read Profile” (see source citations in the references).
  • Your students’ scores on a standardized test of reading comprehension / An evaluation of what students know about themselves as readers and how to use comprehension strategies. You can find one great resource through an online search for “Curriculum-Embedded Reading Assessment.”
  • Your students’ scores on a district benchmark test that measures their ability to summarize short, contrived passages / A measure of the volume and variety of texts your students engage with.

It isn’t enough to control the goals and the direction of post-observation conversations with your evaluators. You must also be ready with suggestions for measures that support rather than limit your visions for effective literacy teaching and learning. Help them choose measures that matter and match!

References for Tools for Meaningful Assessment

Henk, W., and S. Melnick. 1995. “The Reader Self-Perception Scale (RSPS): A New Tool for Measuring How Children Feel About Themselves as Readers.” The Reading Teacher 48 (6): 470.

Schoenbach, R., C. Greenleaf, and L. Murphy. 2012. “Curriculum-Embedded Reading Assessment.” In Reading for Understanding: How Reading Apprenticeship Improves Disciplinary Learning in Secondary and College Classrooms. San Francisco: Jossey-Bass.

Gambrell, L., B. Palmer, R. Codling, and S. Mazzoni. 1996. “Assessing Motivation to Read.” The Reading Teacher 49 (7): 518–533.

Pitcher, S., L. Albright, C. DeLaney, N. Walker, K. Seunarinsingh, S. Mogge, K. Headley, V. Ridgeway, S. Peck, R. Hunt, and P. Dunston. 2007. “Assessing Adolescents’ Motivation to Read.” Journal of Adolescent & Adult Literacy 50 (5): 378–396.