
The Development of a Passage Reading Test for the Frequent Monitoring of Performance of Low-Progress Readers

Published online by Cambridge University Press:  26 February 2016

Kevin Wheldall*
Affiliation:
Macquarie University
Alison Madelaine
Affiliation:
Macquarie University
*Professor Kevin Wheldall, Macquarie University Special Education Centre, Macquarie University, Sydney, NSW 2109, Australia. Email: kevin.wheldall@mq.edu.au

Abstract

The aim of this study was to develop a means of tracking the reading performance of low-progress readers on a weekly basis, so as to inform instructional decision-making. A representative sample of 261 primary school children from Years 1 to 5 was tested on 21 different text passages taken from a developing passage reading test, the Wheldall Assessment of Reading Passages (WARP). The results from this study were used to identify a series of ten ‘progress passages’ of very similar difficulty to a set of three parallel ‘basal’ WARP passages. High parallel-form reliability was demonstrated among all the passages. It is argued that the ten parallel progress passages may be employed with the original three basal passages, first to establish the current reading performance level of low-progress readers and then to track their progress by administering each of the ten progress passages weekly in turn over a ten-week school term.

Type
Research Article
Copyright
Copyright © The Australian Association of Special Education 2006


References

Arthaud, T.J., Vasa, S.F., & Steckelberg, A.L. (2000). Reading assessment and instructional practices in special education. Diagnostique, 25, 205–227.
Bain, S.K., & Garlock, J.W. (1992). Cross-validation of criterion-related validity for CBM reading passages. Diagnostique, 17, 202–208.
Deno, S.L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.
Deno, S.L. (2003a). Curriculum-based measures: Development and perspectives. Assessment for Effective Intervention, 28, 3–11.
Deno, S.L. (2003b). Developments in curriculum-based measurement. Journal of Special Education, 37, 184–192.
Deno, S.L., Fuchs, L.S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507–524.
Deno, S.L., Marston, D., & Mirkin, P. (1982). Valid measurement procedures for continuous evaluation of written expression. Exceptional Children, 368–371.
Deno, S.L., Mirkin, P.K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36–45.
Eckert, T.L., Shapiro, E.S., & Lutz, J.Q. (1995). Teachers’ ratings of the acceptability of curriculum-based assessment methods. School Psychology Review, 24, 497–511.
Fuchs, L.S. (1989). Evaluating solutions: Monitoring progress and revising intervention plans. In Shinn, M.R. (Ed.), Curriculum-based measurement: Assessing special children (pp. 153–181). New York: The Guilford Press.
Fuchs, L.S., & Deno, S.L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57, 488–500.
Fuchs, L.S., & Deno, S.L. (1994). Must instructionally useful performance assessment be based in the curriculum? Exceptional Children, 61, 15–24.
Fuchs, L.S., Fuchs, D., Hamlett, C.L., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27–48.
Fuchs, L.S., Fuchs, D., Hosp, M.K., & Jenkins, J.R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical and historical analysis. Scientific Studies of Reading, 5, 239–256.
Fuchs, L.S., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education, 9, 20–28.
Glor-Scheib, S., & Zigmond, N. (1993). Exploring the potential motivational properties of curriculum-based measurement in reading among middle school students with learning disabilities. Learning Disabilities: A Multidisciplinary Journal, 4, 35–43.
Good, R.H., Simmons, D.C., & Kame’enui, E.J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257–288.
Goyen, J.D. (1977). Adult illiteracy in Sydney. Canberra: Australian Association of Adult Education.
Graff, H. (1995). The labyrinths of literacy: Reflections on literacy past and present. Pittsburgh: University of Pittsburgh.
Hasbrouck, J.E., & Tindal, G. (1992). Curriculum-based oral reading fluency norms for students in grades 2 through 5. Teaching Exceptional Children, Spring, 41–44.
Jenkins, J.R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and maze. Exceptional Children, 59, 421–432.
Kame’enui, E.J., & Simmons, D.C. (2001). Introduction to this special issue: The DNA of reading fluency. Scientific Studies of Reading, 5, 203–210.
Luke, A., & Gilbert, P. (1993). Literacy in contexts: Australian perspectives and issues. Australia: Allen & Unwin.
Madelaine, A., & Wheldall, K. (1998). Towards a curriculum-based passage reading test for monitoring the performance of low-progress readers using standardised passages: A validity study. Educational Psychology, 18, 471–478.
Madelaine, A., & Wheldall, K. (1999). Curriculum-based measurement of reading: A critical review. International Journal of Disability, Development and Education, 46, 71–85.
Madelaine, A., & Wheldall, K. (2002a). Establishing tentative norms and identifying gender differences in performance for a new passage reading test. Australian Journal of Learning Disabilities, 7, 40–45.
Madelaine, A., & Wheldall, K. (2002b). Further progress towards a standardised curriculum-based measure of reading: Calibrating a new passage reading test against the New South Wales Basic Skills Test. Educational Psychology, 22, 461–471.
Madelaine, A., & Wheldall, K. (2004). Curriculum-based measurement of reading: Recent advances. International Journal of Disability, Development and Education, 51, 57–82.
Marston, D., Diment, K., Allen, D., & Allen, L. (1992). Monitoring pupil progress in reading. Preventing School Failure, 36(2), 21–25.
Marston, D.B. (1989). A curriculum-based measurement approach to assessing academic performance: What is it and why do it? In Shinn, M.R. (Ed.), Curriculum-based measurement: Assessing special children (pp. 18–78). New York: The Guilford Press.
Mehrens, W.A., & Clarizio, H.F. (1993). Curriculum-based measurement: Conceptual and psychometric considerations. Psychology in the Schools, 30, 241–254.
Nation, K., & Snowling, M. (1997). Assessing reading difficulties: The validity and utility of current measures of reading skills. British Journal of Educational Psychology, 67, 359–370.
Neale, M.D. (1988). Neale analysis of reading ability – revised. Melbourne: Australian Council for Educational Research.
New South Wales Department of Education and Training. (2000). Data on disk: Basic Skills Tests 2000. Sydney: New South Wales Department of Education and Training.
New Zealand Council for Educational Research. (1981). Burt word reading test: New Zealand revision. Wellington, NZ: Lithoprint (NZ) Ltd.
Nolet, V., & McLaughlin, M. (1997). Using CBM to explore a consequential basis for the validity of a state-wide performance assessment. Diagnostique, 22, 147–163.
Powell-Smith, K.A., & Bradley-Klug, K.L. (2001). Another look at the “C” in CBM: Does it really matter if curriculum-based measurement reading probes are curriculum-based? Psychology in the Schools, 38, 299–312.
Stage, S.A. (2001). Program evaluation using hierarchical linear modelling with curriculum-based measurement reading probes. School Psychology Quarterly, 16, 91–112.
Wheldall, K. (1994). Why do contemporary special educators favour a non-categorical approach to teaching? Special Education Perspectives, 3, 45–47.
Wheldall, K. (1996). The Wheldall Assessment of Reading Passages (WARP): Experimental edition. Unpublished manuscript, Macquarie University Special Education Centre.
Wheldall, K., & Beaman, R. (2000). An evaluation of MULTILIT: Making up lost time in literacy. Canberra: Commonwealth Department of Education, Training and Youth Affairs.
Wheldall, K., & Carter, M. (1996). Reconstructing behaviour analysis in education: A revised behavioural interactionist perspective for special education. Educational Psychology, 16, 121–140.
Wheldall, K., & Madelaine, A. (1997). Should we measure reading progress and if so how? Extrapolating the curriculum-based measurement model for monitoring low-progress readers. Special Education Perspectives, 6, 29–35.
Wheldall, K., & Madelaine, A. (2000). A curriculum-based passage reading test for monitoring the performance of low-progress readers: The development of the WARP. International Journal of Disability, Development and Education, 47, 371–382.