Quick tips for unpacking complicated scientific articles

Body Project Blog ~ Where Thought is The Active Ingredient

Someone commented that the article I referenced in my last blog post looked interesting, but not easy to understand. Here’s the article: “Proactive selective inhibition targeted at the neck muscles: this proximal constraint facilitates learning and regulates global control,” Loram et al., 2016. Decide for yourself.

The good news is that even if you don’t have a background in statistics or knowledge of specific laboratory procedures, you can read selectively, which will give you a feel for the content of the research, and you can ask questions which will give you a sense of the quality of the study.

(Side note: if this post reads like a primer for an undergraduate course, you’re right, it is.)

Start with the Abstract. That may be all you need.

If you want to know a bit more, read the Introduction and Discussion sections. The authors will outline their main research question (hypothesis) and cite supporting background information. At the end of the paper, the authors will interpret the statistical results in plain language, elaborate on the findings, offer alternative explanations, and may even speculate on ways to improve the study, or the field, in the future.

In general, you can skip the Methods and Results sections unless you want to evaluate how good the study actually was. If that’s the case, the first thing to look at is the sub-section labeled “Participants.” A rule of thumb is that any conclusions drawn from studies with fewer than 30 participants in each comparison group can be treated as preliminary. A larger sample makes it much less likely that the results are due to random chance. Researchers often run formal power analyses to check whether the sample size is adequate, but as a casual reader, the question “Is the sample size (n) greater or less than 30?” will get you pretty far.

Ask yourself how the subjects were selected. True random selection is nearly impossible in social science and psychology research; the subjects are likely to come from a specific demographic group, which can bias the findings. Check that the subjects were randomly assigned to test conditions and to the order of the tests. Check whether there was a control group. It’s amazing how many published studies don’t actually have one. (In some study designs the same set of subjects serves as its own control.)
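If you’re curious what “enough participants” actually means, here’s a toy simulation in Python. It’s my own illustration, not anything from the Loram paper: it invents a modest “true” effect and counts how often a standard two-sample t-test detects it at different group sizes. The effect size, group sizes, and cutoff are all just assumptions for the sketch.

```python
# A toy simulation (purely illustrative, not from the study):
# invent a modest "true" effect, then count how often a standard
# two-sample t-test actually detects it at different group sizes.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_effect = 0.5        # assumed "medium" effect, in standard-deviation units
n_simulations = 2000     # number of pretend studies per sample size

for n_per_group in (10, 30, 100):
    detected = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = ttest_ind(control, treated)
        if p_value < 0.05:
            detected += 1
    print(f"n = {n_per_group:>3} per group: real effect detected "
          f"{detected / n_simulations:.0%} of the time")
```

With only 10 people per group, a genuinely real effect of this size gets missed most of the time; with 100 per group it is found almost every time. A formal power analysis is essentially this idea boiled down to a single number.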

Continue on through the Methods section, looking at the study design and measurement techniques. Was double-blinding used? Again, blinding is difficult in research with humans, so if it wasn’t, think about how that may have affected the results; for example, participants will often try to deliver the results they assume the researcher wants.

Glance at the Results section and ask yourself how many tests were done. Were the tests based on the initial hypothesis, or did the researchers run hundreds of tests in an attempt to fish for patterns? The more tests conducted, the more likely it is that a “significant” result is a fluke. Even when all of these criteria are met, results often cannot be repeated. There is currently a replication crisis in psychological research: many foundational studies don’t hold up when they are rerun. This is usually ascribed to problems such as regression to the mean (extreme findings tend to drift back toward average when measurements are repeated) and publication bias (failed research doesn’t get published).
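And if you want to see why “fishing” is a problem, here’s a second toy sketch, again my own and with purely hypothetical numbers: it runs 100 t-tests on pure noise, where there is nothing real to find, and counts how many come out “significant” anyway.

```python
# Another toy sketch (purely illustrative): run many tests on pure noise
# and count how many look "significant" at the usual p < 0.05 cutoff.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_tests = 100
false_positives = 0

for _ in range(n_tests):
    group_a = rng.normal(0.0, 1.0, 30)   # no real difference between groups
    group_b = rng.normal(0.0, 1.0, 30)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} noise-only comparisons "
      f"came out 'significant' by chance")
```

By the definition of the 5% cutoff, you should expect roughly five “discoveries” out of a hundred comparisons even when nothing real is going on, which is why a long list of exploratory tests deserves extra skepticism.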

It gets easier from here on out. Look at the journal the study was published in. Is it peer reviewed? Is there an editorial board? Look for conflicts of interest: who funded the study, and who were the researchers? All of this information should be clearly stated.

If, after all of this, you really want to go deep, go back to the Results. Look at all the diagrams, tables and charts. Do the charts depict what they say they are depicting? Can you make sense of the results even without a background in statistics? If you are still feeling hungry, go back to the Methods section. How were the variables measured and manipulated? What kind of data was collected? Were the instruments used to collect the data reliable? Does the data relate to the original questions being asked?

Finally, my favorite part: comb through the Bibliography. What sources were cited? Sometimes quotes and facts are cherry-picked, and if you are really interested, it’s a good idea to try to read the original studies that were used as background sources.

I’m sure you can come up with many questions that I haven’t thought of! Happy digging…