Reliability and validity in assessment
The primary author of this module is Dr. Rosemary Sutton.

Reliability

Reliability refers to the consistency of the measurement (Linn & Miller, 2005). Suppose Mr. Garcia is teaching a unit on food chemistry in his tenth-grade class and gives an assessment at the end of the unit using test items from the teachers' guide. Reliability is related to questions such as: How similar would the scores of the students be if they had taken the assessment on a Friday or a Monday? Would the scores have varied if Mr. Garcia had selected different test items, or if a different teacher had graded the test? An assessment provides information about students by using a specific measure of performance at one particular time. Unless the results of the assessment are reasonably consistent over different occasions, different raters, or different tasks (in the same content domain), confidence in the results will be low, and the assessment cannot be useful in improving student learning.

There are three ways to assess the reliability of an assessment itself: test-retest, equivalent forms, and internal consistency. Test-retest reliability evaluates a test's consistency over time; to evaluate it, a teacher compares students' performance on the same set of questions given at two points in time (e.g., two weeks apart). The equivalent forms method (also called parallel forms or alternate forms) compares students' performance on two versions, or forms, of the same test. The internal consistency method is the only one that can be used with a single administration of an assessment: it evaluates the consistency of students' responses within that single administration. One of the simplest ways to evaluate the internal consistency of a test is the split-half method, in which a teacher compares students' scores on two halves of the test (usually the odd-numbered items versus the even-numbered items) (Linn & Miller, 2005).
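
To make the split-half idea concrete, here is a short Python sketch that correlates students' totals on the odd-numbered items with their totals on the even-numbered items. The item scores are invented for illustration, and the Spearman-Brown step at the end is a standard adjustment that is not described above.

# Illustrative split-half reliability check on hypothetical item scores.
# Each row is one student's item scores (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 0, 1, 1],
]

def pearson(x, y):
    # Pearson correlation between two equal-length lists of scores.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Each student's total on the odd-numbered items and on the even-numbered items.
odd_totals = [sum(row[0::2]) for row in scores]
even_totals = [sum(row[1::2]) for row in scores]

half_r = pearson(odd_totals, even_totals)

# Spearman-Brown adjustment: estimates the reliability of the full-length
# test from the correlation between its two halves.
full_r = (2 * half_r) / (1 + half_r)

print(f"Split-half correlation: {half_r:.2f}")
print(f"Spearman-Brown estimate for the full test: {full_r:.2f}")

A correlation near 1 means the two halves rank students in much the same order, which is the kind of consistency the internal consistency method is looking for.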

The test-retest, equivalent forms, and internal consistency methods of evaluating reliability address the test itself. Interrater reliability addresses the grading of assessments. Specifically, it addresses the question: Would scores have been different if a different teacher had graded the test? To evaluate interrater reliability, a teacher compares the scores that two different graders give to the same answers to a question. Interrater reliability is only a concern for subjectively graded items, since these items require graders to make interpretations (Linn & Miller, 2005).
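
As a minimal illustration of an interrater check, suppose two teachers have independently scored the same ten essay answers on a 0-4 rubric (the scores below are hypothetical). The simplest comparison is how often the two graders agree exactly:

# Hypothetical scores two graders assigned to the same ten essay answers
# on a 0-4 rubric.
grader_a = [4, 3, 2, 4, 1, 3, 2, 0, 4, 3]
grader_b = [4, 2, 2, 3, 1, 3, 2, 1, 4, 3]

# Percent exact agreement: the share of answers given the same score.
agreements = sum(a == b for a, b in zip(grader_a, grader_b))
percent_agreement = agreements / len(grader_a)

print(f"Exact agreement: {percent_agreement:.0%}")

Exact agreement is only one simple index; the correlation between the two graders' score lists (computed as in the split-half sketch above) would serve the same purpose.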

Obviously we cannot expect perfect consistency. Students' memory, attention, fatigue, effort, and anxiety fluctuate and so influence performance. Even trained raters vary somewhat when grading assessments such as essays, science projects, or oral presentations. The wording and design of specific items also influence students' performance. However, some assessments are more reliable than others, and there are several strategies teachers can use to increase reliability.





Source:  OpenStax, Oneonta epsy 275. OpenStax CNX. Jun 11, 2013 Download for free at http://legacy.cnx.org/content/col11446/1.6
