More Information

Internal Consistency Reliability

Certain quantities of interest in medicine, psychology, and related fields cannot be measured directly. Instead, they are assessed by asking a series of questions and combining the answers into a single numerical value, or by a scale built from dichotomous items such as pass/fail or yes/no.

To form a scale in this manner requires internal consistency, i.e., the items should all measure the same construct. Aabel provides the following methods for internal consistency reliability estimates:

Cronbach's Alpha

  • For estimating the α coefficient, two or more items are required (k ≥ 2), and the scores from each item should be stored in a separate data column.
  • The left-hand image below shows part of a data set from a questionnaire with k = 10 items and n = 24 respondents. The right-hand image shows Cronbach's α estimated for the full data set.
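As a sketch of the underlying computation (not Aabel's implementation), Cronbach's α can be estimated from the item-score columns as α = k/(k − 1) · (1 − Σ sᵢ² / s_T²), where sᵢ² is the variance of item i and s_T² is the variance of the respondents' total scores:

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Estimate Cronbach's alpha.

    rows: list of equal-length score lists, one list per respondent,
    one score per item (k >= 2 items required).
    """
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # transpose: one column per item
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(row) for row in rows]   # total score per respondent
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)
```

The ratio of variances is unaffected by the choice between sample and population variance, provided the same convention is used for both the item and total-score variances.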

The value of the coefficient α will lie between 0 and 1, with the following implications:

Kuder-Richardson's ρ Formula 20 and Formula 21

This procedure was initially devised for estimating the reliability of test items. Today, it is used as a reliability coefficient for scales with dichotomous items (e.g., yes/no, pass/fail, correct/incorrect). There are two slightly different versions of Kuder-Richardson's ρ:

  • Kuder-Richardson Formula 20 (K-R 20)
  • Kuder-Richardson Formula 21 (K-R 21)
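The two formulas can be sketched as follows (an illustrative computation, not Aabel's implementation): K-R 20 uses the per-item proportions of correct answers, ρ = k/(k − 1) · (1 − Σ pᵢqᵢ / s_T²), while K-R 21 assumes all items are equally difficult and needs only the mean and variance of the total scores:

```python
from statistics import mean, pvariance

def kr20(rows):
    """Kuder-Richardson Formula 20.

    rows: list of 0/1 score lists, one list per respondent.
    """
    k = len(rows[0])
    items = list(zip(*rows))
    # p*q for each item, where p is the proportion answering correctly
    pq = sum(mean(col) * (1 - mean(col)) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - pq / total_var)

def kr21(rows):
    """Kuder-Richardson Formula 21 (assumes items of equal difficulty)."""
    k = len(rows[0])
    totals = [sum(r) for r in rows]
    m = mean(totals)
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - m * (k - m) / (k * total_var))
```

On dichotomous data, K-R 20 coincides with Cronbach's α, since pᵢqᵢ is the variance of a 0/1 item; K-R 21 is a simplification that gives the same value only when all items share the same difficulty.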

Data Requirements

A high reliability coefficient indicates only that all items on the test measure variations of the same skill or knowledge base. A low reliability coefficient may suggest that the items measure diverse knowledge or skills.