TY - JOUR
T1 - A Mixture Model for Random Responding Behavior in Forced-Choice Noncognitive Assessment
T2 - Implication and Application in Organizational Research
AU - Peng, Siwei
AU - Man, Kaiwen
AU - Veldkamp, Bernard P.
AU - Cai, Yan
AU - Tu, Dongbo
N1 - Publisher Copyright:
© The Author(s) 2023.
PY - 2024/7
Y1 - 2024/7
AB - For various reasons, respondents to forced-choice assessments (typically used for noncognitive psychological constructs) may respond randomly to individual items due to indecision or globally due to disengagement. Thus, random responding is a complex source of measurement bias and threatens the reliability of forced-choice assessments, which are essential in high-stakes organizational testing scenarios, such as hiring decisions. Traditional measurement models rely heavily on nonrandom, construct-relevant responses to yield accurate parameter estimates. When survey data contain many random responses, fitting traditional models may deliver biased results, which could attenuate measurement reliability. This study presents a new forced-choice measure-based mixture item response theory model (called M-TCIR) for simultaneously modeling normal and random responses (distinguishing completely and incompletely random responding). The feasibility of the M-TCIR was investigated via two Monte Carlo simulation studies. In addition, one empirical dataset was analyzed to illustrate the applicability of the M-TCIR in practice. The results revealed that most model parameters were adequately recovered and that the M-TCIR is a viable alternative for modeling both aberrant and normal responses with high efficiency.
KW - item response theory
KW - mixture model
KW - random responses
KW - forced-choice measures
UR - http://www.scopus.com/inward/record.url?scp=85164168180&partnerID=8YFLogxK
U2 - 10.1177/10944281231181642
DO - 10.1177/10944281231181642
M3 - Article
AN - SCOPUS:85164168180
SN - 1094-4281
VL - 27
SP - 414
EP - 442
JO - Organizational Research Methods
JF - Organizational Research Methods
IS - 3
ER -