In 1995, two carefully sampled groups of 14-year-old Dutch students participated in a project that included a test of their mathematics achievement. Each group received a different test, but the two tests shared a set of overlapping items so that their outcomes could be compared. These items originated from the Third International Mathematics and Science Study (TIMSS) achievement test for Population 2 (Grades 7 and 8). No significant difference was expected between the performances of the two student groups on the overlapping items. Yet the differences in performance were statistically significant. This could not be ascribed to differences between the two samples, which represented the same population: they did not differ in socioeconomic background, gender ratio, age, or school track, nor was there any difference in the circumstances under which the two tests were administered. It was suggested that the only parameter that could have caused the differences was the sequence of the items (Kuiper, Bos, & Plomp, 1997; 2000).

Further study indicated that the differences in the levels of success on an item correlated with the differences in the levels of success on the immediately preceding item. This led to the research question: Is the performance level on a single item in a cognitive test affected by the level of difficulty of the preceding item? The transition from a “hard” item to the next might disturb performance on the latter: whenever only a few students mastered a preceding item, the percentage of correct scores on the following item would be lower than the performance on the same item in the control group. This evidence is described below in the first section, “TIMSS and the National Option Test in the Netherlands”.
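As a minimal illustration of the kind of comparison involved, the difference in percent-correct on a shared item between two independent groups can be tested with a two-proportion z-test. The figures below (240/400 versus 192/400 correct) are hypothetical and do not come from the TIMSS data; the sketch only shows the mechanics of declaring such a difference statistically significant.

```python
from math import sqrt

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """z-statistic for comparing percent-correct on one shared item
    between two independent groups, using the pooled proportion."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    p_pool = (correct_a + correct_b) / (n_a + n_b)  # pooled under H0: equal proportions
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical example: 60% vs 48% correct in two groups of 400 students.
z = two_proportion_z(240, 400, 192, 400)
# z is well beyond the 1.96 cutoff at the 5% level, so the difference
# would be flagged as statistically significant.
```

In the study itself the comparisons are of course embedded in the full TIMSS sampling design; this fragment only makes the basic significance logic concrete.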
To replicate these findings, a further study was carried out using data from the TIMSS achievement test administered in 1995 (TIMSS-1995) and repeated in 1999 (TIMSS-1999). The backbone of this study is the TIMSS test design, in which the test is assembled into eight different test booklets. Clusters of items are rotated through the booklets, resulting in eight different sequences of item clusters. The study investigates students’ performances on these different sequences. The design of this study, the statistical analyses, and the results constitute the core of this paper. Conclusions and points for further discussion are presented at the end.
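The rotation principle can be sketched as a simple cyclic shift: each booklet starts one cluster further along, so every cluster appears in every serial position across the booklet set. This is only an illustrative scheme, not the actual TIMSS booklet assembly, which is considerably more elaborate; it merely shows how rotation yields eight distinct cluster sequences.

```python
def rotated_booklets(clusters):
    """Assemble one booklet per cluster by cyclic rotation: booklet i
    is the cluster list shifted left by i positions, so each cluster
    occupies each serial position exactly once across the set."""
    n = len(clusters)
    return [clusters[i:] + clusters[:i] for i in range(n)]

# Eight hypothetical clusters A..H yield eight booklets with
# eight different cluster orderings.
booklets = rotated_booklets(["A", "B", "C", "D", "E", "F", "G", "H"])
```

Under such a rotation, any given cluster is preceded by a different cluster in each booklet, which is exactly what allows the effect of the preceding item's difficulty to be studied.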