Appendix B

NAEP 2002 Oral Reading Study

Background of the Study

The Oral Reading Study (ORS), conducted as part of the 2002 NAEP reading assessment, replicated procedures used in the 1992 Integrated Reading Performance Record (IRPR) to evaluate fourth graders' oral reading. Although the 1992 IRPR was broader in scope, including an extensive one-on-one interview with fourth graders regarding their instructional and recreational reading experiences, the 2002 ORS focused solely on the oral reading measure that was part of the 1992 IRPR.

Analysis of oral reading is used frequently in classrooms to supplement other measures of reading development. Observing the oral reading of students can reveal elements of their reading ability that are not necessarily captured in paper-and-pencil assessments of reading comprehension. Teachers who observe students reading grade-appropriate material fluently, accurately, and with adequate speed are reassured of their students' developing skills. The insights gained through these observations, in combination with students' performance on reading comprehension tests, create a more complete profile of students' overall reading development.

The oral reading component of the 1992 IRPR was the first such assessment to be administered on a large-scale basis to a nationally representative sample of students. It supplemented the results of the main reading assessment, providing additional information about fourth graders' oral reading ability and its relation to reading comprehension. Among the major findings from the oral reading component of the 1992 IRPR were the following:
Overview of the 2002 Oral Reading Study

The ORS conducted in 2002 was administered to a subsample of fourth graders who participated in the main assessment. As such, results from the ORS can be linked to student performance on the main assessment and to student, teacher, and principal responses to background questionnaires. As in 1992, the special study was conducted only in fourth grade because this is the grade among the three assessed by NAEP that focuses most on reading development and instruction. Students selected for the ORS were asked to work with a trained administrator at a time separate from the administration of the assessment. During this one-on-one session, the student was asked to read aloud a portion of one of the two passages the student had read for the main assessment. The recorded oral reading was analyzed subsequently by trained raters. The analysis and reporting of results from this special study will focus on the unique aspects of reading development that can be learned from observing students reading aloud and will capitalize on the links with the main assessment—including students' performance on the paper-and-pencil comprehension assessment and the contextual information gathered through the student, teacher, and school questionnaires. More detailed information about the study is provided below.

Sample

A subsample of approximately 2,000 fourth graders who participated in the main assessment was drawn. This sample was selected to be representative of the national population of fourth graders and included sufficient subgroup representation to allow the reporting of ORS results by gender and by the three largest racial/ethnic subgroups (White, Black, and Hispanic).

Materials

A single passage included in the 2002 fourth-grade reading assessment was selected for use in the ORS. The passage was selected based on its comparability to the passages used for the oral reading component of the 1992 IRPR.
Although reporting trends (1992 to 2002) in oral reading ability was not technically feasible because of changes in administration procedures, choosing the most comparable passage from among the 2002 passages may make it possible to compare results from the two years.

Administration

Students selected for the study worked with a trained Westat administrator during a one-on-one session scheduled within a month of the student's participation in the main assessment. The entire session was recorded on laptop computers provided by the administrator. The laptop computers guided administration procedures by displaying scripted directions for administrators to read and by playing prerecorded instructions for students. Students listened to these prerecorded instructions through headsets that included a microphone for recording purposes. The scripted and prerecorded directions resulted in highly standardized data-collection procedures. An oral reading passage at approximately the second-grade reading level was used to screen out students for whom reading the passage from the assessment would have been very difficult. (In 1992, only 2 percent of students were excluded based on the screening passage.) After the initial screening passage, students were asked to reread the passage from the assessment silently. Students were then asked to discuss the passage by answering three constructed-response comprehension questions that had been used in the main assessment. Immediately following the comprehension questions, students were asked to read aloud a section of the passage (approximately 200 words). The oral reading was timed using software on the laptop.

Scoring

Two teams of raters were formed: (1) fluency raters and (2) accuracy coders. Each recorded session was evaluated by both teams. The fluency rating involved listening to a student's oral reading twice and assigning a fluency rating from 1 to 4 based on the fluency scale developed for the 1992 IRPR.
This scale considers several elements of oral reading fluency, including phrasing, adherence to the author's syntax, and expressiveness. The accuracy coding involved carefully noting every error made by students. The types of errors documented were omissions, substitutions, and insertions. In addition, each error was judged for whether it resulted in a change of meaning to the text and whether the student accurately self-corrected the error. For both the fluency rating and the accuracy coding, at least 25 percent of student recordings were scored twice to evaluate the reliability of the scoring procedures.

Analysis and Reporting

Percentages of students within each fluency rating category will be reported. The relationships between fluency and comprehension (NAEP scale scores) and between fluency and relevant instructional experiences (background questionnaire responses) will be analyzed and reported. Accuracy will be analyzed in terms of the overall percentage of oral reading errors as well as the type and nature of those errors. The relationships between types of oral reading errors and comprehension, and between overall accuracy and instructional experiences, will be analyzed and reported. The average reading rate and the distribution of reading rates will also be reported. Finally, relationships among fluency, accuracy, rate, and comprehension will be analyzed and reported. The 2002 Oral Reading Study report is scheduled for release in late 2004.

Additional Coding of Types of Errors

The ORS was enhanced to further address issues related to phonemic awareness by adding a word-level decoding dimension to the oral reading accuracy coding process. The accuracy coding considers only the following three elements in recording and analyzing errors made by students:
One important aspect of oral reading errors that is not accounted for in this coding scheme is the degree to which an error made by a student approximates the structure of the word in the text. Referred to in the reading literature as "grapho-phonemic" strategies, approximating the graphic or phonemic structure of a word is an important type of decoding strategy that was not captured in the 1992 coding scheme. By including this type of analysis in the 2002 coding procedures, NAEP can examine students' reliance on word-level cues in decoding unfamiliar words, adding a dimension to the error coding that more directly examines students' word-reading strategies (that is, reliance on phonetic cues). One issue that has engaged the reading community in recent years is whether reliance on context clues (the text surrounding a word) or reliance on word-level cues (individual word construction) is more indicative of skilled reading. The original accuracy coding scheme used in the 1992 IRPR addressed only reliance on context clues: noting whether an error (specifically, a word substitution) changed the meaning of the text gave some indication of students' reliance on context clues, but the coding scheme did not include any analysis of students' reliance on word-level cues. An example of how adding this dimension to the coding process may enhance findings from the ORS follows.
By adding this dimension to the coding scheme, the 2002 ORS provides a richer database and analysis of students' reading development. For example, it can examine how context clues and word-level decoding strategies are related to overall reading proficiency.
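The enhanced coding scheme described above can be sketched as a small data record plus a summary routine. This is an illustrative sketch only: the field names, the summary statistics, and the treatment of the grapho-phonemic judgment as a simple yes/no code are assumptions for the sake of the example, not NAEP's actual coding instrument.

```python
# Hypothetical sketch of one coded oral reading error and a per-student
# summary. Names and statistics are illustrative, not NAEP's instrument.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class OralReadingError:
    kind: str                  # "omission", "substitution", or "insertion"
    changes_meaning: bool      # did the error alter the meaning of the text?
    self_corrected: bool       # did the student accurately self-correct?
    graphophonemic: Optional[bool] = None  # new 2002 dimension: does the
                                           # substituted word approximate the
                                           # target word's structure?

def summarize(errors, passage_words, seconds):
    """Summarize one student's coded reading of a timed passage."""
    subs = [e for e in errors if e.kind == "substitution"]
    n = max(len(errors), 1)  # avoid division by zero for error-free readings
    return {
        "error_rate_pct": 100 * len(errors) / passage_words,
        "counts_by_type": dict(Counter(e.kind for e in errors)),
        "meaning_change_pct": 100 * sum(e.changes_meaning for e in errors) / n,
        "self_corrected_pct": 100 * sum(e.self_corrected for e in errors) / n,
        "graphophonemic_sub_pct": (
            100 * sum(bool(e.graphophonemic) for e in subs) / len(subs)
            if subs else None
        ),
        "words_per_minute": passage_words / (seconds / 60),
    }

# Example: two errors on a 200-word section read in two minutes.
errors = [
    OralReadingError("substitution", changes_meaning=False,
                     self_corrected=False, graphophonemic=True),
    OralReadingError("omission", changes_meaning=True, self_corrected=True),
]
stats = summarize(errors, passage_words=200, seconds=120)
print(stats["error_rate_pct"], stats["words_per_minute"])  # 1.0 100.0
```

In this sketch, the 1992-style context-clue analysis survives as `changes_meaning`, while the 2002 word-level dimension appears as `graphophonemic`, coded only for substitutions; crossing the two fields is what lets an analysis ask whether skilled readers lean on context or on word structure.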