Using item test-retest stability (ITRS) as a criterion for item selection: An empirical study

Research output: Contribution to journal › Article › peer-review

Abstract

This paper proposes an additional criterion for empirical item selection, namely item test-retest stability (ITRS). Four tests were used in this research: two ability power tests and two objective personality scales. The tests were administered twice to 100 students, with an interval of eight months between administrations. For each item in each test, a phi correlation between the first and second administrations was computed and used as the ITRS index. An abbreviated version of each test was then created by selecting the items with the highest ITRS values. The retest stability coefficients of the abbreviated and original tests were assessed with a new sample of another 100 students; the coefficients of the abbreviated tests were found to be no lower than those of the original, longer tests. Finally, the ITRS criterion and the more widely used IIC (item internal consistency) criterion were compared in terms of their effects on retest stability and internal consistency, and conclusions were drawn.
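As a minimal sketch of the ITRS computation described above, assuming dichotomously (0/1) scored items and using the fact that the phi coefficient between two binary variables equals the Pearson correlation of their 0/1 vectors: the function and variable names below (phi_itrs, select_most_stable_items, scores_t1, scores_t2) are illustrative and not taken from the original study.

```python
import numpy as np

def phi_itrs(first_admin, second_admin):
    """Phi coefficient between 0/1 scores on an item at two administrations.

    For dichotomous variables, phi equals the Pearson correlation of the
    0/1 vectors, so np.corrcoef suffices.
    """
    x = np.asarray(first_admin, dtype=float)
    y = np.asarray(second_admin, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

def select_most_stable_items(scores_t1, scores_t2, k):
    """Return the indices of the k items with the highest ITRS (phi) values.

    scores_t1, scores_t2: (n_persons, n_items) arrays of 0/1 item scores
    from the first and second administrations of the same test.
    """
    n_items = scores_t1.shape[1]
    itrs = np.array([phi_itrs(scores_t1[:, j], scores_t2[:, j])
                     for j in range(n_items)])
    top_items = np.argsort(itrs)[::-1][:k]  # highest ITRS first
    return top_items, itrs
```

An abbreviated test, in the sense used in the abstract, would then consist of the items indexed by top_items; its retest stability would be evaluated on a separate sample, not the one used for selection.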

Original language: English
Pages (from-to): 847-852
Number of pages: 6
Journal: Educational and Psychological Measurement
Volume: 37
Issue number: 4
DOIs
State: Published - Dec 1977

ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology
  • Applied Psychology
  • Applied Mathematics
