Historical Background of Curriculum-Based Measurement

Curriculum-based measurement was originally created to evaluate the effectiveness of a special education model referred to as data-based program modification (DBPM; Deno & Mirkin, 1977). The theory behind data-based program modification was that special education teachers could repeatedly administer probes to students, collect data to evaluate the effectiveness of their instruction, and modify their instruction based on the results of their analysis of those data. These probes eventually evolved into what is now called curriculum-based measurement.

Deno and Fuchs (1987) reported that the probes were a generic set of progress monitoring procedures in reading, spelling, and written expression. Those procedures included specifications of (1) the core outcome tasks on which performance should be measured; (2) the stimulus items, measurement activities, and scoring procedures needed to produce technically adequate data; and (3) the decision rules used to improve educational programs. Eventually, a set of administration and scoring rules was identified in an effort to establish validity and reliability.

Validity and Reliability of Curriculum-Based Measurement

As with any instrument, it is important to ensure that validity and reliability have been established. The findings of this study would be greatly weakened if the validity and reliability of curriculum-based measurement in reading, especially oral reading fluency, had not been established. Fortunately, CBM has been studied since the late 1970s and saw an influx of research in the 1980s. This section discusses the established validity and reliability of curriculum-based measurement in reading.

Messick describes validity as “an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” (1989, p. 13). Curriculum-based measurement has been an area of research since the late 1970s (Deno & Mirkin, 1977); however, Deno, Mirkin, and Chiang conducted the original CBM validity study in 1982. Deno et al. (1982) suggested that listening to a student read aloud from a basal reader for 1 minute was a valid measure of reading skills.

Curriculum-based measures, including oral reading fluency, were correlated with generally accepted published norm-referenced criterion tests of reading. CBM was first correlated with the Reading Comprehension subtest of the Peabody Individual Achievement Test (Dunn & Markwardt, 1970), the Woodcock Reading Mastery Test (Woodcock, 1973), and the Stanford Diagnostic Reading Test (Karlsen, Madden, & Gardner, 1975). Correlation coefficients ranged from .73 to .91, with most coefficients above .80.
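To illustrate how such criterion validity coefficients are obtained, the sketch below computes a Pearson product-moment correlation between CBM oral reading fluency scores and criterion test scores; the student scores are hypothetical and are shown only to demonstrate the calculation, not to reproduce any study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for six students: words read correctly per
# minute on a CBM probe, and raw scores on a criterion reading test.
cbm_wcpm = [42, 55, 61, 78, 90, 104]
criterion = [18, 22, 25, 30, 33, 38]

print(round(pearson_r(cbm_wcpm, criterion), 2))
```

A coefficient near 1.0 indicates that students who read more words correctly per minute also tend to score higher on the criterion measure, which is the pattern the validity studies cited above reported.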

Other studies have confirmed these initial findings. Of the twenty correlations computed between students' oral reading fluency and various published measures of reading, coefficients ranged from .63 to .90, and ten of the twenty fell above .80 (Fuchs & Deno, 1981; Deno, Mirkin, & Chiang, 1982; Fuchs, Fuchs, & Maxwell, 1988; Marston, 1982; Marston & Deno, 1982; Fuchs, Tindal, Shinn, et al., 1983; Fuchs, Tindal, Fuchs, et al., 1983; Tindal, Shinn, Fuchs, Fuchs, Deno, & Germann, 1983; Fuchs, Tindal, & Deno, 1984; Marston, 1989).

The reliability of CBM, including oral reading fluency, has also been studied, and CBM has been shown to be highly reliable. Reliability refers to the consistency of the scores an instrument produces across repeated administrations, alternate forms, or different scorers. Shinn (1981), Marston (1982), and Tindal et al. (1983) concluded that test-retest reliability coefficients of CBM reading ranged from .82 to .97, with most estimates above .90. Tindal et al. (1983) studied parallel-form reliability of CBM reading and reported that reliability coefficients ranged from .84 to .96, with most correlations above .90. Tindal et al. (1983) also reported interrater agreement coefficients of .99.

Theoretical Base for the Reading Skills Assessed by Curriculum Based Measurement

The importance of word reading fluency has been researched over the past 30 years (Deno, 1985; Deno, Fuchs, Marston, & Jongho, 2001; Shinn, 1989). The National Reading Panel (2001) reported that there are five main areas associated with reading: phonological awareness, the alphabetic principle (phonics), fluency, vocabulary, and comprehension. The National Reading Panel (2001) refers to these five main areas as the 5 Big Ideas in Reading and describes them as a hierarchical progression from beginning to interpret differences in sounds to comprehending what one reads. Good, Simmons, and Kame'enui (2001) used CBM measures, called the Dynamic Indicators of Basic Early Literacy Skills (DIBELS), to measure and benchmark students' skills in the areas of the 5 Big Ideas. Good, Kaminski, Simmons, and Kame'enui (2001) also conducted research that established decision rules for intensive, strategic, and benchmark instructional recommendations for kindergarten through third grade students.

Because it develops gradually, oral reading fluency can be indexed as the number of words read correctly per minute, which satisfies the assessment design of a curriculum-based measure. This oral reading fluency curriculum-based measure is often referred to as ORF or R-CBM (Fuchs & Deno, 1992; Good, Simmons, & Kame'enui, 2001). For the purpose of this study, ORF and R-CBM mean the same thing and are used interchangeably. An oral reading fluency curriculum-based measurement can also produce a normative framework that allows practitioners to compare performance levels between students and to track gains, or performance slopes, within a student (Fuchs et al., 2001). Research has also indicated that oral reading fluency predicts future reading performance better than other reading tasks, such as the number of word reading errors in context.
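To make the indexing and progress-monitoring logic above concrete, the sketch below computes words read correctly per minute (WCPM) from a single timed probe and estimates a student's performance slope from repeated administrations; the probe lengths, error counts, and weekly scores are hypothetical illustrations, not prescribed values:

```python
def wcpm(words_attempted, errors, seconds=60):
    """Words read correctly per minute from one timed oral reading probe."""
    return (words_attempted - errors) * 60 / seconds

def growth_slope(scores):
    """Least-squares slope (WCPM gained per administration) across probes."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# One hypothetical probe: 67 words attempted with 5 errors in 60 seconds.
print(wcpm(67, 5))  # → 62.0

# Hypothetical weekly WCPM scores across six administrations.
print(round(growth_slope([48, 51, 55, 54, 60, 63]), 2))  # → 2.89
```

The slope provides the within-student growth index described above, while the WCPM scores themselves support between-student normative comparisons.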

Although curriculum-based measurement was originally developed to make it possible for teachers to evaluate their instruction, other common uses for curriculum-based measurement have also been developed. Curriculum-based measurement has also been used for (1) improving individual instructional programming; (2) predicting performance on important criteria; (3) enhancing teacher instructional planning; (4) developing norms; (5) increasing ease of communication; (6) screening to identify students academically at risk; (7) evaluating classroom prereferral interventions; (8) reducing bias in assessment; (9) offering alternative special education procedures; (10) recommending and evaluating inclusion; (11) measuring growth in secondary school programs; (12) assessing English language learning students; (13) predicting success in early childhood education; and (14) predicting performance on high-stakes state assessments (Deno, 2003).

References

Amit, M. & Fried, M. N. (2002). High-stakes assessment as a tool for promoting mathematical literacy and the democratization of mathematics education. The Journal of Mathematical Behavior, 21, 499-514.

Anita, L. A. (2003). Decoding and fluency: Foundation skills for struggling older readers. Learning Disability Quarterly, 26, 89-101.

Baker, S. K. & Good, R. (1995). Curriculum-based measurement of English reading with bilingual Hispanic students: A validation study with second-grade students. School Psychology Review, 24, 561.

Baker, S. K., Plasencia-Peinado, J., & Lezcano-Lytle, V. (1998). The use of curriculum-based measurement with language-minority students. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 175-213). New York, NY: Guilford Press.

Barbara, M. T. (2003). Reading growth in high-poverty classrooms: the influence of teacher practices that encourage cognitive engagement in literacy learning. The Elementary School Journal, 104, 3-28.

Bishop, A. G. (2003). Prediction of first-grade reading achievement: A comparison of fall winter kindergarten screenings. Learning Disability Quarterly, 26, 189-200.

Blachman, B. A. (1988). The futile search for a theory of learning disabilities. Journal of Learning Disabilities, 21, 286-288.

Bush, G. W. (2002). Executive order 13255--amendment to executive order 13227, President's Commission of Excellence in Special Education. Weekly Compilation of Presidential Documents 38[6], 191. Superintendent of Documents.

Cecilia, A. S. a. C. (2002). A comparison of multiple methods for the identification of children with reading disabilities. Journal of Learning Disabilities, 35, 234.

Christ, T. J. & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130.

Cortiella, C., & National Center on Educational Outcomes. (2007). Learning opportunities for your child through alternate assessments: Alternate assessments based on modified academic achievement standards. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.

Crawford, L., Tindal, G., & Stieber, S. (2001). Using oral reading rate to predict student performance on statewide achievement tests. Educational Assessment, 7, 303-323.

Daryl, F. M. (2004). Foundations and research on identifying model responsiveness-to-intervention sites. Learning Disability Quarterly, 27, 243-256.

Daryl, F. M. (2004). LD Identification: It’s not simply a matter of building a better mousetrap. Learning Disability Quarterly, 27, 229-242.

Debi, G. (2005). NJCLD Position Paper: Responsiveness to intervention and learning disabilities. Learning Disability Quarterly, 28, 249-260.

Defur, S. H. (2002). Education Reform, High-Stakes Assessment, and Students with Disabilities. Remedial & Special Education, 23, 203.

Deno, S. (1992). The nature and development of curriculum-based measurement. Preventing School Failure, 36, 5.

Deno, S. L. & Fuchs, L. S. (1987). Developing curriculum-based measurement systems for data-based special education problem solving. Focus on Exceptional Children, 19, 1.

Deno, S. L. (2003). Developments in curriculum-based measurement. Journal of Special Education, 37, 184-192.

Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Minneapolis: Leadership Training Institute/Special Education, University of Minnesota.

Deno, S. L., Fuchs, L. S., Marston, D., & Jongho, S. (2001). Using curriculum-based measurements to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507.

Deno, S. L., Marston, D., & Tindal, G. (1985). Direct and frequent curriculum-based measurement: An alternative for educational decision making. Special Services in the Schools, 2, 5-27.

Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36-59.

Douglas, F. (2004). National research center on learning disabilities: Multimethod studies of identification and classification issues. Learning Disability Quarterly, 27, 189-195.

Dunn, L., & Markwardt, F. (1970). Peabody individual achievement test. Circle Pines, MN: American Guidance Services.

Ervin, R. A. (2007). Primary and secondary prevention of behavior difficulties: Developing a data-informed problem-solving model to guide decision making at a school-wide level. Psychology in the Schools, 44, 7-18.

Espin, C. A., & Tindal, G. (1998). Curriculum-based measurement for secondary students. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 214-253). New York, NY: Guilford Press.

Espin, C. A., Scierka, B. J., Skare, S., & Halverson, N. (1999). Criterion-related validity of curriculum-based measures in writing for secondary school students. Reading & Writing Quarterly, 15, 5-27.

Fletcher, J. M. (1994). Cognitive profiles of reading disability: Comparisons of discrepancy and low achievement profiles. Journal of Educational Psychology, 86, 6.

Foegen, A., Jiban, C. & Deno, S. (2007). Progress monitoring measures in mathematics: a review of the literature. Journal of Special Education, 41, 121.

Foorman, B. R. & Nixon, S. M. (2006). The influence of public policy on reading research and practice. Topics in Language Disorders, 26, 157.

Foorman, B. R., Francis, D. J., Fletcher, J. M., & Lynn, A. (1996). Relation of phonological and orthographic processing to early reading: Comparing two approaches to regression-based, reading-level-match designs. Journal of Educational Psychology, 88, 639-652.

Fuchs, D. & Fuchs, L. S. (2004). Identifying reading disabilities by responsiveness-to-instruction: specifying measures and criteria. Learning Disability Quarterly, 27, 216-227.

Fuchs, D. & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41, 93-99.

Fuchs, D., Fuchs, L. S., Bahr, M. W., Fernstrom, P., & Stecker, P. M. (1990). Prereferral intervention: A prescriptive approach. Exceptional Children, 56, 493-513.

Fuchs, D., Fuchs, L. S., McMaster, K. L., Yen, L., & Svenson, E. (2004). Nonresponders: How to find them? How to help them? What do they mean for special education? Teaching Exceptional Children, 37, 72-77.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness-to-intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice (Blackwell Publishing Limited), 18, 157-171.

Fuchs, D., Roberts, P. H., Fuchs, L. S., & Bowers, J. (1996). Reintegrating students with learning disabilities into the mainstream: A two-year study. Learning Disabilities Research & Practice, 11, 214-229.

Fuchs, L. S. & Deno, S. L. (1992). Effects of curriculum within curriculum-based measurement. Exceptional Children, 58, 232.

Fuchs, L. S. & Deno, S. L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15.

Fuchs, L. S. & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28, 659.

Fuchs, L. S. & Fuchs, D. (2001). Computer applications to curriculum-based measurement. Special Services in the Schools, 17, 1.

Fuchs, L. S. (1988). Effects of computer-managed instruction on teachers' implementation of systematic monitoring programs and student achievement. Journal of Educational Research, 81.

Fuchs, L. S. (1992). Computer applications to facilitate curriculum-based measurement. Teaching Exceptional Children, 24, 58.

Fuchs, L. S. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice (Blackwell Publishing Limited), 18, 172-186.

Fuchs, L. S., Deno, S. L., & Mirkin, P. K. (1984). The effects of frequent curriculum-based measurement and evaluation on pedagogy, student achievement, and student awareness of learning. American Educational Research Journal, 21, 449-460.

Fuchs, L. S., Fuchs, D., & Deno, S. L. (1985). Importance of goal ambitiousness and goal mastery to student achievement. Exceptional Children, 52, 63-71.

Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. RASE: Remedial & Special Education, 9, 20-28.

Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Stecker, P. M. (1991). Effects of curriculum-based measurement and consultation on teacher planning and student achievement in mathematics operations. American Educational Research Journal, 28, 617-641.

Fuchs, L. S., Fuchs, D., Hosp, M. K., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239-256.

Fuchs, L. S., Tindal, G., Fuchs, D., Shinn, M. R., Deno, S. L., & Germann, G. (1983). Technical adequacy of basal readers’ mastery test: Holt basic series (Research Report No. 130). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities.

Fuchs, L. S., Tindal, G., Shinn, M. R., Fuchs, D., Deno, S. L., & Germann, G. (1983). Technical adequacy of basal readers’ mastery test: Ginn 720 series (Research Report No. 122). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities.

Fuchs, L., & Deno, S. (1981). The relationship between curriculum-based mastery measures and standardized achievement tests in reading. Minneapolis: University of Minnesota. (ERIC Document Reproduction Service No. ED212662) Retrieved March 25, 2008, from ERIC database.

Good, R. H. I., & Jefferson, G. (1998). Contemporary perspectives on curriculum-based measurement validity. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 61-88). New York, NY: Guilford Press.

Good, R. H. I., Simmons, D. C., & Kame'enui, E. J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257-288.

Good, R. H., III, Kaminski, R. A., Simmons, D., & Kame'enui, E. J. (2001). Using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model: Steps to reading outcomes (Rep. No. 44). OSSC Bulletin, Oregon School Study Council.

Goodman, G. & Webb, M. A. (2006). Reading disability referrals: Teacher Bias and other factors that impact response to intervention. Learning Disabilities -- A Contemporary Journal, 4, 59-70.

Gregory, K. & Clarke, M. (2003). High-stakes assessment in England and Singapore. Theory Into Practice, 42, 66.

Grimes, J., Kurns, S., & Tilly, W. D., III (2006). Sustainability: An enduring commitment to success. School Psychology Review, 35, 224-244.

Haager, D. & Windmueller, M. P. (2001). Early reading intervention for English language learners at-risk for learning disabilities: student and teacher outcomes in an urban school. Learning Disability Quarterly, 24, 235.

Hasbrouck, J. E., Woldbeck, T., Ihnot, C., & Parker, R. I. (1999). One teacher's use of curriculum-based measurement: A changed opinion. Learning Disabilities Research & Practice (Lawrence Erlbaum), 14, 118.

Hintze, J. M. & Silberglitt, B. (2005). A longitudinal examination of the diagnostic accuracy and predictive validity of R-CBM and high-stakes testing. School Psychology Review, 34, 372-386.

Hosp, J. L. & Reschly, D. J. (2003). Referral rates for intervention or assessment: A meta-analysis of racial differences. Journal of Special Education, 37, 67.

Howell, K. W., & Nolet, V. (1999). Curriculum-based evaluation: Teaching and decision making. Belmont, CA: Wadsworth Publishing.

Huynh, H., Meyer, J. P., & Barton, K. E. (2000). Technical documentation for the 1999 Palmetto Achievement Challenge Test of English language arts and mathematics, grades three through eight. Retrieved May 5, 2008 from

Huynh, H., Barton, K. E., Meyer, J. P., Porchea, S., & Gallant, D. (2005). Consistency and predictive nature of vertically moderated standards for South Carolina’s 1999 Palmetto Achievement Challenge Tests of language arts and mathematics. Applied Measurement in Education, 18, 115.

Isaac, S., & Michael, W. B. (1997). Handbook in research and evaluation: A collection of principles, methods, and strategies useful in the planning, design, and evaluation of studies in education and the behavioral sciences (3rd ed.). San Diego, CA: EdITS Publishers.

Jenkins, J. R., Vadasy, P. F., Firebaugh, M., & Profilet, C. (2000). Tutoring first-grade struggling readers in phonological reading skills. Learning Disabilities Research & Practice (Lawrence Erlbaum), 15, 75-84.

Jim, Y. (2005). Assessment and decision making for students with learning disabilities: What if this is as good as it gets? Learning Disability Quarterly, 28, 125-128.

John, W. L. (2005). Going forward: How the field of learning disabilities has and will contribute to education. Learning Disability Quarterly, 28, 133-136.

Johnson, E., Kimball, K., Brown, S. O., & Anderson, D. (2001). A statewide review of the use of accommodations in large-scale, high stakes assessments. Exceptional Children, 67, 251-264.

Karlsen, B., Madden, R. & Gardner, E. (1975). Stanford Diagnostic Reading Test (2nd ed.). San Antonio, TX: The Psychological Corporation.

Kim, J., & Sunderman, G. L. (2004). Large mandates and limited resources: State response to the "No Child Left Behind Act" and implications for accountability. Cambridge, MA: The Civil Rights Project at Harvard University.

Marston, D. B. (1982). The technical adequacy of direct, repeated measurement of academic skills in low achieving elementary students. Ph.D. dissertation, University of Minnesota. Retrieved May 4, 2008, from Dissertations & Theses: Full Text database. (Publication No. AAT 8301966).

Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). New York, NY: Guilford Press.

Marston, D., & Deno, S. (1982). Implementation of direct and repeated measurement in the school setting. Minneapolis: University of Minnesota. (ERIC Document Reproduction Service No. ED226048) Retrieved March 25, 2008, from ERIC database.

Marston, D., Pickart, M., Reschly, A., Heistad, D., Muyskens, P., & Tindal, G. (2007). Early literacy measures for improving student reading achievement: translating research into practice. Exceptionality, 15, 97-117.

Mathes, P. G., Denton, C. A., Fletcher, J. M., Anthony, J. L., Francis, D. J., & Schatschneider, C. (2005). The effects of theoretically different instruction and student characteristics on the skills of struggling readers. Reading Research Quarterly, 40, 148-182.