A primary activity of most bench researchers in the neurosciences is setting up and executing detailed experiments to understand and measure how the brain contributes to a variety of psychological, behavioral, and biological processes. Such experiments typically involve a set of definable factors: a certain strain of mouse, a certain brain region under study, a certain pharmacological treatment, a certain receptor. Consequently, a widespread and important information-seeking task for such researchers is finding papers that describe experiments similar to the one they are conducting. However, because researchers overwhelmingly prefer to search in PubMed (which indexes only abstracts), because they tend to favor a few high-level keywords related to the topic of study, and because abstracts tend to summarize the results of experiments rather than the methods applied, finding papers that use particular experimental procedures is time-consuming and inefficient; success often requires simply getting lucky. To solve this problem we have developed a novel search and discovery tool that categorizes hundreds of thousands of neuroscience papers by the factors that are most important and distinctive to the experiments they describe; locates papers that describe similar methods based on those factors; and presents the results in a tabular format to facilitate comparison and refinement. Based on insights from studies of researchers' information-seeking and reading behavior, we present this tool in the context of the methods section of individual papers – the point at which researchers actually evaluate whether the paper they are reading suits their needs, and are therefore most likely to be interested in finding additional papers that describe similar experiments.
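The core idea of matching papers by shared experimental factors can be sketched as follows. This is a minimal illustration only: the paper identifiers, factor labels, and Jaccard-overlap ranking are hypothetical assumptions, not the system's actual data or algorithm.

```python
from collections import namedtuple

# Hypothetical sketch: each paper is reduced to the experimental factors
# it describes (strain, brain region, treatment, receptor). These records
# are invented for illustration and do not come from the actual system.
Paper = namedtuple("Paper", ["pid", "factors"])

corpus = [
    Paper("p1", {"strain:C57BL/6", "region:hippocampus", "treatment:ketamine"}),
    Paper("p2", {"strain:C57BL/6", "region:hippocampus", "receptor:NMDA"}),
    Paper("p3", {"strain:Sprague-Dawley", "region:cortex", "treatment:saline"}),
]

def similar_papers(query_factors, papers):
    """Rank papers by Jaccard overlap between their factor set and the query's."""
    scored = []
    for p in papers:
        overlap = query_factors & p.factors
        union = query_factors | p.factors
        if overlap:  # keep only papers sharing at least one factor
            scored.append((len(overlap) / len(union), p.pid))
    return [pid for score, pid in sorted(scored, reverse=True)]

query = {"strain:C57BL/6", "region:hippocampus", "treatment:ketamine"}
print(similar_papers(query, corpus))  # → ['p1', 'p2']
```

A set-overlap measure like this is only one plausible way to rank by shared factors; the abstract does not specify the similarity function the tool uses.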
The development of this tool has drawn on a range of information retrieval techniques, including the construction of new neuroscience methodology ontologies; named-entity extraction and natural language processing over a variety of biomedical and neuroscience concepts; and new methods for scoring the relevance of concepts both to individual papers and across the corpus. In addition, we followed an iterative, agile development process, including extensive research on information-seeking behavior among neuroscientists, live testing of the tool on Elsevier's full-text platform ScienceDirect, and continuous improvements to precision, recall, and relevance. Development is ongoing, and we will be able to report the results in August.
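Scoring a concept's relevance to an individual paper against its prevalence across the corpus can be sketched with a TF-IDF-style weighting. This is an assumption for illustration: the abstract does not state which scoring method the tool actually uses, and all counts below are invented.

```python
import math

def concept_relevance(concept, paper_counts, doc_freq, n_papers):
    """TF-IDF-style sketch of concept relevance (hypothetical, not the
    tool's actual scoring method).

    paper_counts: concept -> mention count within this paper
    doc_freq:     concept -> number of papers in the corpus mentioning it
    n_papers:     total number of papers in the corpus
    """
    tf = paper_counts.get(concept, 0)
    # Smoothed inverse document frequency: rare concepts weigh more.
    idf = math.log((1 + n_papers) / (1 + doc_freq.get(concept, 0)))
    return tf * idf

# Invented example counts: a specific concept mentioned as often as a
# ubiquitous one scores higher because it is rarer across the corpus.
paper_counts = {"NMDA receptor": 4, "mouse": 4}
doc_freq = {"NMDA receptor": 20, "mouse": 900}
specific = concept_relevance("NMDA receptor", paper_counts, doc_freq, 1000)
generic = concept_relevance("mouse", paper_counts, doc_freq, 1000)
print(specific > generic)  # → True
```

The design intuition is the standard one: a concept that appears prominently in one paper but rarely across the corpus is a stronger signal of what makes that paper's experiment distinctive.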
Publication status: Published - 2013