AERA 2011

Procedural Flexibility Matters for Student Achievement: The Relationship between Procedural Flexibility and Standardized Tests

Kelley Durkin

Vanderbilt University

Bethany Rittle-Johnson

Vanderbilt University

Jon R. Star

Harvard University

While conceptual and procedural knowledge have been the focus of much debate and discussion for years in the mathematics education community, flexibility in problem solving has only recently begun to be examined and measured more rigorously (Star, 2005). With this increased focus on procedural flexibility, we can begin to investigate how flexibility may relate to other constructs, including standardized measures of student achievement. The current paper provides empirical support for the idea that procedural flexibility is an important construct related to standardized test performance, even though it is seldom measured on such tests.

Importance of Flexibility

Flexibility is acknowledged as an important component of student learning in mathematics: "A fundamental characteristic needed throughout the problem-solving process is flexibility. Flexibility develops through the broadening of knowledge required for solving nonroutine problems rather than just routine problems" (Kilpatrick, Swafford, & Findell, 2001, p. 126). In addition, the National Mathematics Advisory Panel (2008) and the Common Core State Standards for Mathematical Practice (2010) encourage students to become flexible problem solvers because flexibility is considered a key component of proficiency in mathematics. For example, empirical studies have shown that people who develop procedural flexibility are more likely to use or adapt existing procedures when facing unfamiliar problems and to have a greater understanding of domain concepts (e.g., Blöte, Van der Burg, & Klein, 2001; Carpenter, Franke, Jacobs, Fennema, & Empson, 1998; Hiebert et al., 1996). Flexibility also matters for students and teachers because it is a crucial component of expertise in problem solving (Dowker, 1992; Star & Newton, 2009).

While flexibility is acknowledged as an important component of student learning, it has rarely been measured or examined as an instructional outcome (Star, 2005). In addition, standardized tests of student achievement often include sections on concepts, procedures, and problem solving, but they rarely, if ever, include measures of flexibility. On standardized tests, students are not asked to solve problems in more than one way or to evaluate the benefits of one strategy over another. However, in recent years, researchers have begun to assess procedural flexibility as a separate instructional outcome that is related to conceptual and procedural knowledge (Star, 2007; Verschaffel, Luwel, Torbeyns, & Van Dooren, 2007).

Defining and Measuring Flexibility

There are some differences in how researchers define flexibility. The simplest definition is that flexibility involves knowing more than one strategy for solving a particular type of problem (e.g., Heirdsfield & Cooper, 2002). The most complex definition ties flexibility to strategy adaptivity: being able to use a variety of procedures and to draw on information from the problem context, the learner's environment, and the sociocultural context to select the most appropriate problem-solving procedure (e.g., Verschaffel et al., 2007). Lying between these two definitions is what may be the most common definition, and the one we use here: flexibility involves knowing multiple strategies and their relative efficiencies and adapting strategy choice to specific problem features (e.g., Blöte et al., 2001; Kilpatrick et al., 2001; Rittle-Johnson & Star, 2007).

Even with this common definition, however, the flexibility measures used in studies often do not closely align with the definition, and flexibility is measured in a variety of ways. For the purposes of this paper, we separate the construct of procedural flexibility into flexibility knowledge and flexible use of procedures (Rittle-Johnson & Star, 2007; Rittle-Johnson, Star, & Durkin, 2009; Star & Rittle-Johnson, 2008, 2009). Flexibility knowledge is defined as knowing multiple procedures and the relative efficiency of the procedures. This can include the ability to implement multiple strategies when prompted, to recognize multiple strategies, and to evaluate the value of different strategies for specific problem types (see Table 1). Flexible use of procedures is defined as students picking the most efficient strategy depending on problem features (see Table 1). It is important to examine both flexibility knowledge and flexible use because students can sometimes identify a more appropriate procedure for solving a problem before they actually choose to use it (Blöte et al., 2001; Siegler & Crowley, 1994).

Current Paper

While flexibility is acknowledged as an important component of mathematical competency, little empirical work has investigated the relationship between flexibility and standardized tests. The current paper investigated these relationships by addressing the question of how flexibility relates to standardized test measures of student achievement, and how this relationship might differ between norm-referenced national tests and criterion-referenced state tests. To answer this question, we report on data from four studies that measured flexibility, conceptual knowledge, and procedural knowledge independently and included standardized test data from the participating schools. The relations between these measures have not been previously reported. We use meta-analytic techniques to estimate effect sizes across studies.

In particular, the studies all focused on flexibility in the domain of equation solving with students in Grades 7 and 8. The included studies involved the domain of algebraic linear equation solving because many researchers in mathematics education consider it a "basic skill" and it is recommended as a Curriculum Focal Point for Grade 7 (National Mathematics Advisory Panel, 2008; NCTM, 2006). However, even though equation solving has been identified as an important skill, students often do not learn flexible ways to solve equations and instead simply memorize rules (Kieran, 1992). Each study used a pretest-intervention-posttest design, and all of the interventions involved learning conditions meant to promote flexibility, such as having students compare multiple solution methods.

Method

Studies

To examine the relationship between flexibility and other measures, we searched for studies that included measures of flexibility, conceptual knowledge, and procedural knowledge, along with standardized test data, in the domain of linear equation solving. This resulted in the inclusion of four studies (Rittle-Johnson & Star, 2009; Rittle-Johnson et al., 2009, in press; Schneider, Rittle-Johnson & Star, in press) with 162, 236, 198, and 293 participants, respectively. In all studies, we measured flexibility knowledge at pretest and both flexibility knowledge and flexible use after a 2- to 3-day intervention focused on supporting flexibility. Individual student standardized test scores were also collected from school records, and these tests varied in whether they were norm-referenced national tests or criterion-referenced state tests.

Measures

The included studies measured flexibility use and knowledge, conceptual knowledge, and procedural knowledge, along with standardized test scores (see Table 1). As mentioned above, we defined flexibility knowledge as knowing multiple strategies and their relative efficiencies and defined flexible use as adapting strategy choice to specific problem features. In the included studies, conceptual knowledge was defined as “an integrated and functional grasp of mathematical ideas” (Kilpatrick et al., 2001, p. 118), and the conceptual knowledge measures assessed the ability to recognize and explain key domain concepts (Carpenter et al., 1998; Hiebert & Wearne, 1996). Procedural knowledge was defined as the ability to execute action sequences to solve problems (Hiebert & Wearne, 1996; Rittle-Johnson, Siegler, & Alibali, 2001). The procedural knowledge measures included familiar and unfamiliar problems.

Different mathematics standardized tests were used in different studies depending on what was standard practice for each school. The standardized test measures included the mathematics sections of the following standardized tests: the Comprehensive Testing Program (CTP) and Measures of Academic Progress (MAP) norm-referenced national tests, and the Michigan Educational Assessment Program (MEAP) and Tennessee Comprehensive Assessment Program (TCAP) criterion-referenced state tests.

Coding of Studies and Analysis Strategies

First, we calculated, for each study, the correlation between flexible use and flexibility knowledge, as well as the correlations of both flexibility measures, conceptual knowledge, and procedural knowledge with standardized mathematics test scores. Because these were correlations between two continuous variables, we used Fisher's Z transformation to convert each correlation into an effect size (Lipsey & Wilson, 2001). Effect sizes were calculated using data from posttest measures. The mean effect size for each type of correlation (e.g., the correlation between flexibility knowledge and flexible use) was calculated using meta-analytic techniques and a random effects model. A random effects model was chosen because we wanted the results to generalize beyond the included studies and because it seemed unlikely that a single true population effect size underlay all of them.
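The analysis steps above can be sketched in a few lines of code. The correlations below are illustrative placeholders, not the actual study data (only the four sample sizes come from the Method section), and the sketch uses the standard DerSimonian-Laird estimator of between-study variance as one common way to implement the random effects approach described in Lipsey and Wilson (2001):

```python
import math

# Hypothetical posttest correlations for one type of correlation
# (e.g., flexibility knowledge with flexible use) in the four studies.
# Illustrative values only; sample sizes are from the Method section.
correlations = [0.55, 0.60, 0.58, 0.62]
sample_sizes = [162, 236, 198, 293]

# Fisher's Z transformation: z = 0.5 * ln((1 + r) / (1 - r)),
# with sampling variance 1 / (n - 3) (Lipsey & Wilson, 2001).
zs = [0.5 * math.log((1 + r) / (1 - r)) for r in correlations]
variances = [1.0 / (n - 3) for n in sample_sizes]

# Fixed-effect (inverse-variance) weighted mean, needed as a first
# step to estimate between-study heterogeneity.
w = [1.0 / v for v in variances]
z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2,
# truncated at zero when observed heterogeneity Q is below its df.
q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
df = len(zs) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's sampling variance.
w_re = [1.0 / (v + tau2) for v in variances]
z_random = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)

# Back-transform the mean z to the correlation metric for interpretation.
r_mean = math.tanh(z_random)
print(round(z_random, 3), round(r_mean, 3))
```

When tau^2 is estimated as zero, the random-effects mean coincides with the fixed-effect mean; otherwise the random-effects weights pull the mean toward an unweighted average of the study effects, which is what makes the estimate generalize beyond the included studies.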

Results and Discussion

Relationship between Flexibility Measures

Flexibility knowledge and flexible use measures were strongly related to each other (ES = 0.660, p < .001). Thus, as expected, flexibility knowledge and flexible use have a strong relationship, although they are not perfectly correlated with one another. This indicates that it is important to measure flexible use and flexibility knowledge separately. While both measures assess students' procedural flexibility, they do so in different ways and appear to tap somewhat different skills. To get a complete picture of students' overall flexibility, both their knowledge of multiple procedures and their ability to flexibly choose and implement the most efficient procedure should be evaluated.

Relationship between Standardized Tests and Other Outcomes

Students' standardized test scores were significantly related to flexibility knowledge and flexible use (see Table 2). However, flexibility knowledge had a slightly stronger relationship to standardized test scores than flexible use did. In addition, standardized test scores had moderate correlations with conceptual and procedural knowledge. Consequently, it appears that standardized test scores relate to flexibility knowledge about as well as they relate to conceptual and procedural knowledge.

We also wanted to examine whether these relationships with standardized tests would differ between students' performance on norm-referenced national tests and their performance on criterion-referenced state tests (see Table 2). After we ran models with standardized tests categorized as norm- or criterion-referenced, a difference between test types did appear. Norm-referenced national tests had stronger relationships with all other outcome measures, including flexibility, than the criterion-referenced state tests, although all relationships were statistically significant. The norm-referenced national assessments seem to have a stronger relationship to flexibility knowledge and flexible use measures, although neither type of test typically includes any measures of flexibility. This may be because state standardized tests are based on specific state curriculum focal points for the state population, while national standardized tests may measure a broader array of skills for a wider population. The creators of standardized tests, particularly of criterion-referenced state tests, may want to consider adding items that directly assess flexibility to better understand students' knowledge of mathematics.

Overall, these results indicate that it is important to measure both students' knowledge of multiple procedures and their ability to flexibly choose efficient procedures, and that such items should be considered for inclusion on standardized tests. With teachers often feeling pressured to teach to the test, the lack of flexibility items on current assessments could lead teachers to spend less instructional time on helping their students become flexible problem solvers. As a consequence, their students may not develop the important skill of procedural flexibility. Creating valid standardized measures of flexibility may increase the emphasis on flexibility in classrooms and help teachers better assess their students' flexibility.


References

Blöte, A. W., Van der Burg, E., & Klein, A. S. (2001). Students' flexibility in solving two-digit addition and subtraction problems: Instruction effects. Journal of Educational Psychology, 93(3), 627-638.

Carpenter, T. P., Franke, M. L., Jacobs, V. R., Fennema, E., & Empson, S. B. (1998). A longitudinal study of invention and understanding in children's multidigit addition and subtraction. Journal for Research in Mathematics Education, 29(1), 3-20.

Common Core State Standards for Mathematical Practice. (2010). Retrieved June 5, 2010, from http://www.corestandards.org/the-standards/mathematics/introduction/standards-for-mathematical-practice/

Dowker, A. (1992). Computational estimation strategies of professional mathematicians. Journal for Research in Mathematics Education, 23(1), 45-55.

Heirdsfield, A. M., & Cooper, T. J. (2002). Flexibility and inflexibility in accurate mental addition and subtraction: Two case studies. The Journal of Mathematical Behavior, 21, 57-74.

Hiebert, J., Carpenter, T. P., Fennema, E., Fuson, K. C., Human, P., Murray, H., et al. (1996). Problem solving as a basis for reform in curriculum and instruction: The case of mathematics. Educational Researcher, 25(4), 12-21.

Hiebert, J., & Wearne, D. (1996). Instruction, understanding, and skill in multidigit addition and subtraction. Cognition and Instruction, 14(3), 251-283.

Kieran, C. (1992). The learning and teaching of school algebra. In D. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 390-419). New York: Simon & Schuster.

Kilpatrick, J., Swafford, J. O., & Findell, B. (Eds.). (2001). Adding it up: Helping children learn mathematics. Washington DC: National Academy Press.

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis (Vol. 49). Thousand Oaks, CA: Sage Publications.

National Mathematics Advisory Panel. (2008). Foundations of Success: The Final Report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department of Education.

NCTM. (2006). Curriculum focal points for prekindergarten through grade 8 mathematics. Reston, VA: National Council of Teachers of Mathematics.

Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346-362.

Rittle-Johnson, B., & Star, J. R. (2007). Does comparing solution methods facilitate conceptual and procedural knowledge? An experimental study on learning to solve equations. Journal of Educational Psychology, 99, 561-574.

Rittle-Johnson, B., & Star, J. R. (2009). Compared with what? The effects of different comparisons on conceptual knowledge and procedural flexibility for equation solving. Journal of Educational Psychology, 101, 529-544.

Rittle-Johnson, B., Star, J. R., & Durkin, K. (2009). The importance of prior knowledge when comparing examples: Influences on conceptual and procedural knowledge of equation solving. Journal of Educational Psychology, 101, 836-852.