When instructors assess students’ laboratory reports to appraise the underlying scientific reasoning, they disclose their own concerns, epistemological assumptions and beliefs about science. The analysis of such assessments (i.e. rubric-centred scores and corresponding justificatory comments) offers a wealth of insights that can be re-engaged in further improvements of the assessment tool and procedure, and in developments in formative assessment more generally. Such insights include concerns that exceed the rubric’s descriptions (about meaningfulness, exhaustiveness, implicitness, connectivity, true inquiry, relevance), while differences among assessors are exposed (regarding epistemic values, approaches to scoring, sensitivity). This contribution is part of a broader effort to promote students’ conducive scientific thinking and deep learning in science and engineering education. It addresses the question: what does the assessors’ reasoning tell us about the ways in which formative assessment is conducted, and how it could ideally be conducted? The empirical investigation connects to existing knowledge, and discusses issues of representativeness and granularity in formative assessment. The paper elaborates on the design and use of the assessment tool, and presents evidence supporting context-bound recommendations and general conclusions. It is proposed that developments in formative assessment will benefit from a reconceptualisation of assessment criteria, resulting from a co-design activity that engages with the assessors’ epistemological concerns.