Are technical writers able to predict the results of a reader-focused text evaluation? In this article we report a study in which fifteen technical writers were asked to identify reader problems in a public information brochure. The brochure was also evaluated by thirty readers from the target audience, using a combination of the plus-minus method, a questionnaire, and user protocols. The results of the two kinds of evaluation showed little overlap: the technical writers predicted only a small proportion of the reader feedback and detected many problems the readers had not reported. In addition, there was little agreement among the technical writers themselves about which problems they detected.