Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

Gitta H. Lubke, Ian Campbell, Dan McArtor, Patrick Miller, Justin Luningham, Stéphanie Martine van den Berg

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the selected best-fitting model. This practice does not account for the possibility that due to sampling variability, a different model might be selected as the preferred model in a new sample from the same population. A previous study illustrated a bootstrap approach to gauge this model selection uncertainty using 2 empirical examples. This study consists of a series of simulations to assess the utility of the proposed bootstrap approach in multigroup and mixture model comparisons. These simulations show that bootstrap selection rates can provide additional information over and above simply relying on the size of AIC and BIC differences in a given sample.
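The bootstrap approach described in the abstract can be illustrated with a minimal sketch: refit all candidate models on nonparametric bootstrap resamples of the data and record how often each model is preferred by AIC and by BIC. The example below uses two ordinary least-squares regression models as stand-in candidates (the paper's simulations use multigroup and mixture models); all function names and the toy data are illustrative, not taken from the article.

```python
import numpy as np

def aic_bic(y, X):
    """Gaussian AIC and BIC for an OLS fit (up to an additive constant)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    dev = n * np.log(rss / n)  # -2 * max log-likelihood, up to a constant
    return dev + 2 * k, dev + k * np.log(n)

def bootstrap_selection_rates(y, designs, n_boot=500, seed=0):
    """Refit every candidate model on each bootstrap resample and
    return the fraction of resamples in which each model wins on
    AIC and on BIC (the 'selection rates')."""
    rng = np.random.default_rng(seed)
    n = len(y)
    wins_aic = np.zeros(len(designs))
    wins_bic = np.zeros(len(designs))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        crits = [aic_bic(y[idx], X[idx]) for X in designs]
        wins_aic[int(np.argmin([c[0] for c in crits]))] += 1
        wins_bic[int(np.argmin([c[1] for c in crits]))] += 1
    return wins_aic / n_boot, wins_bic / n_boot

# Toy data: the second (larger) candidate is the data-generating model.
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
X1 = np.column_stack([np.ones(n)])      # intercept-only model
X2 = np.column_stack([np.ones(n), x])   # intercept + slope
rates_aic, rates_bic = bootstrap_selection_rates(y, [X1, X2])
print("AIC selection rates:", rates_aic)
print("BIC selection rates:", rates_bic)
```

A selection rate near 1 for the sample's best-fitting model indicates a stable choice; rates closer to 0.5 signal that a different model could easily win in a new sample, which is the extra information the selection rates provide beyond the size of the AIC/BIC differences themselves.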
Original language: English
Pages (from-to): 230-245
Journal: Structural Equation Modeling
Volume: 24
Issue number: 2
DOIs
Publication status: Published - 5 Dec 2017

Keywords

  • METIS-319154
  • IR-102349

