Automated Feedback Can Improve Hypothesis Quality

Karel A. Kroeze*, Stéphanie M. van den Berg, Ard W. Lazonder, Bernard P. Veldkamp, Ton de Jong

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

6 Citations (Scopus)
179 Downloads (Pure)

Abstract

Stating a hypothesis is one of the central processes in inquiry learning and often forms the starting point of the inquiry process. We designed, implemented, and evaluated an automated parsing and feedback system that informed students about the quality of hypotheses they had created in an online tool, the hypothesis scratchpad. In two pilot studies in different domains (“supply and demand” from economics and “electrical circuits” from physics) we determined the parser's accuracy by comparing its judgments with those of human experts; satisfactory to high accuracy was reached. In the main study (in the “electrical circuits” domain), students were assigned to one of two conditions: no feedback (control) or automated feedback. We found that the subset of students in the experimental condition who asked for automated feedback on their hypotheses were much more likely to create a syntactically correct hypothesis than students in either condition who did not ask for feedback.
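The abstract does not reproduce the grammar used by the hypothesis scratchpad, but the keyword "context-free grammars" indicates the general approach: a hypothesis is judged syntactically correct if it can be derived from a small grammar over domain terms. As an illustration only, the sketch below checks hypotheses of the form "if <variable> <change> then <variable> <change>"; the rule, the variable list, and the function name are assumptions for this example, not the authors' actual grammar or code.

```python
# Illustrative sketch, NOT the authors' parser: a toy grammar
#   HYP -> 'if' VAR DIR 'then' VAR DIR
# over a hypothetical set of "electrical circuits" terms.

VARIABLES = {"voltage", "resistance", "current"}   # assumed domain variables
DIRECTIONS = {"increases", "decreases"}            # assumed change terms


def is_syntactically_correct(hypothesis: str) -> bool:
    """Return True if the hypothesis matches the toy rule
    'if' VAR DIR 'then' VAR DIR."""
    tokens = hypothesis.lower().replace(",", " ").split()
    if len(tokens) != 6:
        return False
    return (tokens[0] == "if"
            and tokens[1] in VARIABLES
            and tokens[2] in DIRECTIONS
            and tokens[3] == "then"
            and tokens[4] in VARIABLES
            and tokens[5] in DIRECTIONS)


if __name__ == "__main__":
    print(is_syntactically_correct("If voltage increases then current increases"))  # True
    print(is_syntactically_correct("voltage makes the current bigger"))             # False
```

In the study itself, such a judgment would drive the automated feedback shown to students; a rejected parse could, for instance, prompt the student to restate the hypothesis as an if-then relation between variables.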

Original language: English
Article number: 116
Number of pages: 14
Journal: Frontiers in Education
Volume: 3
DOIs
Publication status: Published - 4 Jan 2019

Keywords

  • Automated feedback
  • Context-free grammars
  • Hypotheses
  • Inquiry learning
  • Online learning environment
