Reliability Coefficient: Formula & Definition

Lesson Transcript
Instructor: Ninger Zhou

Ninger has taught in teacher education programs and has received her Ph.D. in Educational Psychology.

The reliability coefficient is a user-friendly way to show the consistency of a measure. In this lesson, we will become familiar with four methods for calculating the reliability coefficient.

Reliability and the Reliability Coefficient

What is your experience with reliable scales or tests? For example, have you ever used a scale to keep track of your weight? If your weight is generally stable, huge fluctuations in the readings might mean that something is going very wrong in your body, or that your scale is no longer reliable. If you subjected your scale to a reliability test, you might find that it has a very low reliability coefficient. A new scale, one that gives you consistent readings, would most likely have a high reliability coefficient.

In testing situations, scales should provide us with reliable measurements that do not fluctuate dramatically when the things being measured remain the same. If someone's ability has not changed significantly, their test scores should not vary much, no matter how many times they take the test.

In social science, reliability describes the consistency of a measure. The reliability coefficient quantifies the degree of that consistency. There may be many reasons why a test is not consistent, such as errors in assessment that occur when the testing environment influences how the participants perform, or other issues related to how the tests are designed. Calculating the reliability coefficient can help us understand such errors in testing.

There are different ways to calculate the coefficient, including the four types of reliability coefficients we'll discuss here. Don't worry too much about how to do these calculations by hand. Statistical software, such as SAS and SPSS, can help you compute all four types of coefficients conveniently.

Test-Retest Reliability

Consider the following hypothetical scenario: You give your students a vocabulary test on February 26 and a retest on March 5. If there are no significant changes in your students' abilities, a reliable test given at these two different times should yield similar results. To find the test-retest reliability coefficient, we need to find the correlation between the test and the retest scores. In this case, we can use a correlation coefficient, such as Pearson's correlation coefficient:

r = [N(Σxy) - (Σx)(Σy)] / √{[N(Σx²) - (Σx)²][N(Σy²) - (Σy)²]}

N is the total number of pairs of test and retest scores.

For example, if 50 students took the test and retest, then N would be 50. Following the N is the Greek letter sigma (Σ), which means 'the sum of.' Σxy means that, for each student, we multiply the test score (x) by the retest score (y) and then add up all 50 of those products. Likewise, Σx is the sum of all the test scores and Σy is the sum of all the retest scores.
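To make the arithmetic concrete, here is a minimal Python sketch of the same calculation. The five pairs of scores are hypothetical, and in practice statistical software (or a library function such as scipy.stats.pearsonr) would do this for you:

```python
import math

# Hypothetical scores for five students: test (February 26) and retest (March 5)
test = [72, 85, 60, 90, 78]      # x values
retest = [75, 83, 58, 92, 80]    # y values

n = len(test)                                      # N: number of test/retest pairs
sum_x = sum(test)                                  # Σx
sum_y = sum(retest)                                # Σy
sum_xy = sum(x * y for x, y in zip(test, retest))  # Σxy: sum of the products
sum_x2 = sum(x * x for x in test)                  # Σx²
sum_y2 = sum(y * y for y in retest)                # Σy²

# Pearson's correlation coefficient, used here as the test-retest reliability coefficient
r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
)
print(round(r, 3))  # values close to 1 suggest a highly reliable test
```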

Inter-Rater Reliability

Let's take a look at another hypothetical situation: You and a colleague are grading some student essay assignments together and want to see how consistent you both are when it comes to scoring. Here, you can use the inter-rater reliability formula to calculate how consistent the two of you have been when rating the assignments. The inter-rater reliability coefficient is often calculated as a Kappa statistic. The formula for inter-rater reliability Kappa is this:
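Kappa = (P observed - P expected) / (1 - P expected)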

In this formula, P observed is the observed percentage of agreement, and P expected is the percentage of agreement you would expect from chance alone.

For example, if you and your colleague rate the same students exactly the same 18 out of 20 times, then your observed agreement, P observed, is 90%, or 0.90.
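Here is a minimal Python sketch of the kappa calculation for two raters. The pass/fail essay ratings are hypothetical, and a library function such as sklearn.metrics.cohen_kappa_score would give the same result:

```python
from collections import Counter

# Hypothetical pass/fail ratings from two raters for the same ten essays
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]

n = len(rater_a)

# P observed: the proportion of essays the two raters scored the same
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# P expected: the agreement expected by chance, based on each rater's category frequencies
count_a = Counter(rater_a)
count_b = Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_expected = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)

kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 3))  # 1 means perfect agreement beyond what chance would produce
```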
