Maine Reading First
DIBELS Questions and Answers
The following questions were generated by the 2004-05 Reading First schools and answered by Janet Spector, professor of Education and Human Development at the University of Maine at Orono and assessment consultant to Maine Reading First.
For further information, see the Frequently Asked Questions on the Florida Center for Reading Research website: www.fcrr.org
Are special education students who are not tested with DIBELS entered in the database?
If the PET (Pupil Evaluation Team) decides not to test a student, then the student would not be included in the database. Teachers could, however, keep track of the number of children who are excluded and the reason for each exclusion, for their own information and in case any questions arise later at the state or federal level.
How do teachers handle students whose IEPs call for extended time for testing?
Extended time is not an approved DIBELS accommodation. All tests should be administered using the time limits prescribed in the manual.
How do teachers handle non-verbal students?
Teachers may try to test students who are non-verbal, but most DIBELS tests will not be useful if the students are non-verbal. Teachers of students who are non-verbal may need to identify alternate assessments that use the students' preferred mode of communication. For example, for vocabulary, teachers could use the Peabody Picture Vocabulary Test (PPVT) rather than WUF because no verbal response is required. The PPVT score would not be entered into the DIBELS database. Similarly, Letter Naming Fluency could be turned into a recognition task (point to the "a," etc.). Note, though, that if the procedures for LNF are changed, the student's score should not be entered into the DIBELS database. It could, however, be used for purposes of planning instruction. Teachers could also consult with the special education teacher, or have the special education teacher do the testing, if the lack of verbal response is due to noncompliance or shyness.
How do teachers handle students who are not performing at grade level? Should these students do their grade level tasks, or the level at which they are performing? Should the same level be given for benchmarks and progress monitoring? For example, how do teachers handle a second-grader performing at a first-grade level?
Teachers should test the student first using the second-grade benchmark (unless the student's IEP specifies the need for an alternate assessment). After testing at grade level, teachers could use progress monitoring material at a lower grade level.
NOTE: Teachers may not enter students' scores for below-grade-level benchmarks, but may enter them in the progress monitoring section.
How do teachers handle testing special education students who are performing below grade level?
For students with IEPs, the PET members should decide whether testing using DIBELS is appropriate.
Individual DIBELS Measures
Letter Naming Fluency
The "fancy g" as the first letter really confuses some students. How do teachers handle this?
If this is an issue for students, then teachers can replace the page from the benchmark booklet with one from the progress monitoring booklet. However, teachers should use the standard materials whenever possible. A difference of one or two points does not usually change the judgment a teacher would make about a student's proficiency. If students get stuck on an item (for whatever reason), be sure to prompt them after three seconds to move on to the next item.
Initial Sound Fluency
Why are blends used (e.g., /fl/ as in flowers)?
Originally, this test was called "Onset Recognition Fluency," which is a more accurate name. The term "onset" refers to the segment of the word that precedes the vowel. Younger students may have difficulty segmenting individual phonemes (sounds) within the onset. Since this is a kindergarten test, students get credit for giving either /f/ or /fl/ as the initial sound.
Why does this task ask for the initial sounds of "flower" but the initial sound of "plate," even though "plate" begins with a blend just as "flower" does?
It would be best if teachers ask for the first sound, not sounds, because the word "sounds" occurs only with "flower." And even with "flower," the manual is inconsistent: on p. 24 of the test record booklet, the teacher is prompted to ask "which one begins with the sounds /fl/," but on the Assessment Integrity Checklist, the wording is "which one begins with the sound /fl/?" So it is best to ask for the sound, and to accept either the first letter-sound or the blend as a correct answer.
Phoneme Segmentation Fluency
Why is "Sam" used as an example, since it is a "glued or welded" sound, and so not a clear short "a" sound?
Keep in mind that this is an aural test, not a reading or writing test. There is no expectation that students even know the letters that go with the sounds. The vowel in "Sam" can be segmented just like any other vowel sound. The notion of "glued or welded" is an issue when teaching students to decode the vowel sound in the written word because the consonant that follows the vowel changes its sound. On the test, students do not need to give the short vowel sound of "a" as in "cat." Teachers should pronounce the medial vowel as it is pronounced in "Sam," not as it is in "cat."
Nonsense Word Fluency
If the directions state that the words on this task will be nonsense words, why does it include the words "kis" and "mum," which sound like real words when the sounds are blended?
The NWF measure was created using nonsense words because the test developers knew that beginning readers can memorize words (e.g., sight words). By using nonsense words, teachers can get a better idea of whether students are making the link between sound and print.
Students might blend the sounds for "kis" and remark that it is a real word. If that happens, the administrator of the test should say, "Yes, next word" and not engage in any further discussion. The idea is to keep the student focused on the task since it is timed. The developers of DIBELS have looked at this issue and found that it is not a problem.
Why do the first two items on NWF start with "y" and "w," which are two very difficult and easily confused sounds for kindergartners?
This is an unfortunate choice and we have no inside information regarding the rationale. That said, getting one sound wrong will not have a major effect on a student's score. Teachers should be sure not to let students struggle with this item and should supply the correct sound if students hesitate for three seconds. If a teacher feels that a student's performance is not representative of his/her skills, one of the progress monitoring probes can be administered at another time to check on the reliability of the score.
Does the inclusion of this assessment task mean that teachers are expected to provide instruction in nonsense word decoding?
No. This task looks at how well students know their sounds and can blend them. The test makes no assumptions about what teachers have or have not taught. It's plausible that most teachers challenge students to decode words that they have never seen before, some using real words that occur less frequently in print and some using nonsense words. But the decision to do so is an instructional issue more than an assessment issue. There certainly are differences of opinion among teachers about the value of practicing decoding nonsense words, with some approving of the practice and others not.
If students have never been asked to read unfamiliar words out of context, then they may not do well on the test. Teachers will then need to decide whether the students have the skills underlying the task but are just not familiar with the format, or whether the students have not mastered the skills. At that point, teachers may consider providing some practice. Those who have the skill will improve dramatically with just a few minutes of practice; those without the skill will not.
Word Use Fluency
How does a teacher score if a child gives the sentence "I don't know what ____ is?"
If the child responds this way during one of the example items, then the teacher should provide feedback using a model of an appropriate response. If this happens during the test and the teacher believes that the student is responding that way because he/she doesn't know what the word means (i.e., a response equivalent to "I don't know"), then it should not be scored as correct. If the child appears to have created this sentence as a genuine example of the word's use, and the same sentence frame is not used repeatedly, then it is acceptable to count it as a legitimate sentence. The response to one item will not have a big effect on the score. However, if the child repeats this sentence frame over and over, then the teacher should treat this as a red flag and retest the child at another time. Before retesting, the teacher can make sure that the student understands the task. If the child continues to give the same sentence over and over ("I don't know what _____ is"), the response should not be scored as correct.
How does a teacher handle it when a child doesn't understand the task?
Teachers may not lead, model, or prompt a response during the actual test, but they can during the example items. After the first item, say, "You tell me a sentence." Some children may need multiple examples to get the idea.
Sometimes students provide answers using incorrect grammar. How should they be scored? Here are two examples:
1. I didn't get nothing at the store.
2. My sister hidden in the bushes.
Both should be counted as correct because grammar isn't considered in scoring and the utterance does not need to be a complete sentence. Based on the response, it is clear that the student knows what the target words mean. The advice in the manual is to score liberally if a teacher gets the impression that a student knows what the word means (from the manual: "Correct utterances are scored liberally. If the utterance conveys the accurate meaning of the word and could be correct, score it as correct.").
On WUF, the sentence models that teachers give students are very simple ("I like to jump rope." "The grass is green."). Some teachers have done a lot of classroom instruction on using more complex sentence structure, both orally and in writing. Can teachers use more sophisticated models to elicit more complex responses? Is this appropriate if the teachers develop the models for these grade levels and make sure that all teachers use the new models consistently?
It is an approved accommodation to DIBELS to provide another example, so it is not a problem to add a third example when presenting WUF to a student. The examples that are listed, though, should never be excluded. At the same time, teachers should keep in mind that this is a timed test. It may not be in the students' best interest to make them think that they need to provide complex utterances. Complex sentences take more time to formulate and so may end up reducing students' total scores. The purpose of the test is not to assess the complexity of oral language (or knowledge of more advanced syntax), but to find out how quickly students can gain access to word meaning.
Some of the words used in WUF have multiple meanings: they sound the same as other words in an oral context but are spelled differently (for example, "weight" and "per"). How should responses to these words be scored?
For words that have multiple meanings, responses that reflect understanding of any of the meanings are counted as correct. The student does not need to use the meaning that is reflected in the spelling.
Our students are unfamiliar with some of the WUF words (e.g., fisher). Since many of the students are not completing the entire list of words within the one-minute limit, how would the validity and reliability of the task be affected if teachers were to skip these less familiar words since there tend to be only one or two within each list?
The words on WUF were randomly selected from children's literature (few details are available about exact selection procedures). Teachers should not skip any words, but should be quick to prompt students with the next word if they are stuck on a particular item. Another point to remember is that WUF scores are evaluated using local percentiles. If a word is unfamiliar to most students, it will not affect their standing relative to others in their school (or in Maine).
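As a rough illustration of how local percentiles work (this is not part of DIBELS itself), the Python sketch below computes a student's percentile rank within a school's WUF scores; the function name and the score data are hypothetical. A word that is unfamiliar to everyone lowers all raw scores roughly equally, so each student's rank relative to classmates is largely unchanged.

def percentile_rank(score, local_scores):
    """Percent of local scores at or below this score (one common definition)."""
    at_or_below = sum(1 for s in local_scores if s <= score)
    return 100.0 * at_or_below / len(local_scores)

# Hypothetical WUF scores for one school's kindergarten
school_scores = [12, 18, 22, 25, 25, 28, 31, 34, 40, 45]
print(percentile_rank(25, school_scores))  # 50.0 -> 50th percentile locally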
Oral Reading Fluency
May students who are accustomed to using a tracking aid use one for ORF? The manual addresses this under "Acceptable Accommodations" but the answer is unclear.
This is a judgment call. Tracking can be helpful, but it can also be time-consuming as students move from line to line. It appears that the test developers are not eager to encourage blanket use of tracking aids. Teachers do not need to use tracking aids routinely just because students use them in the classroom. A good practice is to test first without the aid and then, for students who seem to lose their place, retest the next week with an aid. It would be interesting to see how much difference the use of a tracking aid makes in the scores.
Why does the format of ORF involve straight lines of text printed on white paper, which is so different from the type of text students are used to reading?
Most timed tests of reading fluency do not include pictures, and the passages are presented in a format similar to ORF's. It is actually more unusual than usual for an oral reading fluency test to mimic a picture book. The goal of ORF is to determine how quickly students can use word recognition and understanding of sentence/passage context to read the words. Not including pictures provides a "cleaner" measure of that skill. There has been considerable research supporting the validity of ORF. In these studies, ORF has been compared to other measures of skill (including informal reading inventories and running records), and ORF has been shown to be a better predictor of subsequent growth in reading.
Clearly, students who are still highly dependent on pictures to read text will not do well on ORF, but that is what teachers need to know. During their day-to-day instruction, teachers can get information on how well students function with print materials that include pictures and/or when reading is guided/scaffolded. ORF provides teachers with information about student progress in transferring what they are learning about print during guided/scaffolded reading.
On ORF, why is the median or middle score selected rather than calculating the mean or average of the three scores?
First, the middle score is quicker and easier to determine than the mean. Second, in a small set of scores, the mean (or average) is overly affected by an extreme score (one that is much higher or lower than the other two). For example, if a student scores 25, 100, and 125, the mean would be 83, but the median (or middle) score would be 100. Most teachers would agree that 100 seems more representative of the student's skill than 83. The goal is to find what is apt to be a typical score for a student. The middle score captures that notion better than the mean by ruling out the lowest and highest scores that the student obtained.
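For readers who want to check the arithmetic, here is a minimal Python sketch of the example above; the numbers are the ones from the answer, nothing more.

from statistics import mean, median

scores = [25, 100, 125]     # the three ORF passage scores from the example
print(round(mean(scores)))  # 83  -> pulled down by the extreme low score
print(median(scores))       # 100 -> the middle score, more typical of the student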
Should students in special education be progress monitored the same as regular classroom students?
Yes, this is recommended if the students receive reading instruction, but not for students with severe/profound disabilities who do not receive reading instruction.
How many passages should be used for progress monitoring ORF? Is it one, or the full three passages as in the benchmark assessment?
The progress monitoring procedures are more flexible. It is recommended to use one passage only, unless the teacher feels that a particular passage was not a valid representation of the student's skills. For benchmark assessments, three passages are used to get a more reliable estimate. This is important because decisions about the student's risk status are made from these data. It is not wrong to use three passages for progress monitoring, but that is quite time-consuming and may not yield enough of a benefit to justify the extra time. Of course, three passages will give a more reliable estimate, but if teachers assess the small number of students they are monitoring once a week with one passage, they should be able to detect a trend of increasing fluency over time, even if some passages show what looks like a decrease.
The most straightforward approach is for teachers to use just one passage per occasion for progress monitoring, but to monitor frequently (weekly is good) so that they get multiple samples over time. All progress monitoring data should be entered into the database in the progress monitoring section. When teachers look at their graphs, they should look for the "big picture" trend, ignoring occasional peaks and valleys. The question is, "Is the student generally getting more fluent over time?" (One rough way to quantify that trend is sketched below.)
NOTE: Profiles that are characterized by frequent deep valleys may indicate student reliance on familiarity with the passage content to facilitate word recognition. That's good information for diagnostic purposes.
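To make the "big picture" idea concrete, here is an illustrative Python sketch (again, not part of DIBELS) that fits a least-squares slope to weekly ORF scores; the function name and the data are hypothetical. A positive slope indicates that the student is generally gaining fluency even when individual weeks dip.

def weekly_slope(scores):
    """Least-squares slope: average words correct gained per week."""
    n = len(scores)
    mean_x = (n - 1) / 2   # weeks are numbered 0, 1, ..., n-1
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(scores))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical weekly ORF scores with a valley in the fifth week
weekly_orf = [42, 45, 44, 49, 41, 52, 55, 54]
print(round(weekly_slope(weekly_orf), 1))  # 1.8 -> gaining ~1.8 words per week overall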
What is the sequence for selecting and administering the passages?
All the passages are supposed to be approximately the same in terms of difficulty, so it does not matter which one is selected. It would be best to use the passages in order since that is easier for the teachers.
What is the procedure for selecting materials for a student who is not progress monitored at the beginning of the year, but then needs to be progress monitored later in the year?
DIBELS is based on a curriculum-based measurement model, which is a very different approach from assessments such as the DRA or the Observation Survey. It is not intended to identify a student's progress over increasingly difficult material. It is intended to identify a student's progress toward reaching a benchmark that has been designated as a learning goal for a particular point in time.
Therefore, for students who enter into progress monitoring at a later point in the year, teachers can use the same passages that they are using with everyone else for the sake of easier management. Because the passages are not intended to increase in difficulty across the benchmarks or within the progress monitoring booklets, it should not matter that much which passage is used. This is not a situation in which the teacher makes a judgment as to the level at which to begin testing.
Does the order of the passages reflect the progression and growth of the student through the months of each grade level?
No, because DIBELS is an assessment based on the curriculum-based measurement model. (See above.)
Is it acceptable to make passage choices based on teacher judgment?
Teachers will find some passages that they do not like or that they consider exceptionally easy or hard for students. Since there are so many passages to choose from, skipping a passage is acceptable.
Why are there no progress monitoring forms for Letter Naming Fluency?
The reason for this is that LNF is not designated as one of the five essential elements of reading. LNF correlates with subsequent progress in reading, but the skill of naming letters is not in itself considered critical to reading success. That is, students could read without knowing the letter names (i.e., if they knew the letter sounds). At the same time, rapid recognition of letters is important. Teachers can certainly make their own materials quite easily to monitor progress informally, but the results will not be entered in the database. The following is an excerpt from the manual:
Letter Naming Fluency (LNF) is intended for most children from fall of kindergarten through fall of first grade. A benchmark goal is not provided for LNF because it does not correspond to a big idea of early literacy skills (phonological awareness, alphabetic principle, and accuracy and fluency with connected text) and does not appear to be essential to achieve reading outcomes. However, students in the lowest 20 percent of a school district using local norms should be considered at risk for poor reading outcomes, and those between the 20th percentile and 40th percentile should be considered at some risk. For students at risk, the primary instructional goals should be in phonological awareness, alphabetic principle, and accuracy and fluency with connected text (manual, p. 6).
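The cutoffs in that excerpt translate directly into a simple decision rule. Below is a minimal Python sketch, assuming "local norms" are simply a list of fall LNF scores from the district; the function name and data are hypothetical, and the "low risk" label for students above the 40th percentile is assumed (the manual names only the two risk bands).

def lnf_risk(score, district_scores):
    """Classify a fall LNF score against local norms using the manual's cutoffs."""
    pct = 100.0 * sum(1 for s in district_scores if s <= score) / len(district_scores)
    if pct <= 20:
        return "at risk"    # lowest 20 percent of the district
    if pct <= 40:
        return "some risk"  # between the 20th and 40th percentiles
    return "low risk"       # label assumed; the manual names only the risk bands

# Hypothetical fall kindergarten LNF scores for a small district
district = [2, 5, 8, 11, 14, 18, 21, 25, 30, 37]
print(lnf_risk(5, district))   # at risk
print(lnf_risk(11, district))  # some risk
print(lnf_risk(25, district))  # low risk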