Capitalization on item calibration error in computer adaptive testing

Research output: Book/Report › Report › Other research output


Abstract

In test assembly, a fundamental difference exists between algorithms that select the items in a test sequentially and those that select them simultaneously. Sequential assembly allows us to optimize an objective function at the examinee's current ability estimate, such as the test information function in computerized adaptive testing, but it leads to the nontrivial problem of how to realize a set of content constraints on the test, a problem more naturally solved by a simultaneous item-selection method. Three main item-selection methods in adaptive testing offer solutions to this dilemma. The spiraling method moves item selection across the categories of items in the pool in proportion to the numbers of items needed from each. Item selection by the weighted-deviations method (WDM) and the shadow test approach (STA) is based on projections of the future consequences of each selection; the two methods differ in that the former calculates a projection of a weighted sum of the attributes of the eventual test, whereas the latter calculates a projection of the test itself. The pros and cons of these methods were analyzed. An empirical comparison between the WDM and the STA was also conducted for an adaptive version of the Law School Admission Test (LSAT); it showed equally good item-exposure rates for both methods, but violations of some of the constraints and larger bias and inaccuracy of the ability estimator for the WDM.
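
As context for the sequential selection the abstract refers to, the following is a minimal sketch of maximum-information item selection at the current ability estimate under the three-parameter logistic (3PL) model. The function names and item parameters are illustrative assumptions, not taken from the report; the report is concerned with how methods such as the WDM and the STA add content constraints to this basic selection rule.

```python
import math

def prob_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = prob_3pl(theta, a, b, c)
    return a**2 * ((p - c) / (1.0 - c))**2 * (1.0 - p) / p

def select_next_item(theta_hat, pool, administered):
    """Pick the unadministered item with maximum information at theta_hat.

    pool is a list of (a, b, c) tuples; administered is a set of pool indices.
    """
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *pool[i]))

# Hypothetical mini-pool of (a, b, c) parameters and a current ability estimate.
pool = [(1.2, -0.5, 0.20), (0.8, 0.0, 0.15), (1.5, 0.7, 0.25), (1.0, 1.2, 0.20)]
print(select_next_item(theta_hat=0.4, pool=pool, administered={0}))
```

A constrained method such as the STA would not apply this rule to the full pool directly; instead it would first assemble a full-length shadow test meeting all content constraints and then pick the most informative free item from that shadow test.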
Original language: English
Place of publication: Newton, PA, USA
Publisher: Law School Admission Council
Number of pages: 16
Publication status: Published - 2005

Publication series

Name: LSAC research report series
Publisher: Law School Admission Council
No.: 04-02
