Analysing the Data

Misleading and false results are kept out of our research reports.

The hallmark of MetaMed’s research is a rigorous vetting process. We eliminate most of the studies reviewed, for reasons including:

  • The number of participants is too small to support well-founded conclusions. An underpowered study may be unable to distinguish a real effect from random chance.
     
  • Failure to control for confounding factors. Apparent relationships revealed by the study may then be attributable to some other cause.
     
  • Biases in sampling, leading to a group that isn’t representative of the larger population. Bias can be introduced by the area from which subjects are recruited, or by the criteria used to recruit them, causing some demographics to be overrepresented and distorting the study’s findings.
     
  • Differences in care between the control and treatment groups, apart from the treatment under study. Any effects found could then be attributable to the additional counseling or treatment rather than to the intervention itself.
     
  • Failure of double-blinding. Even very subtle lapses in blinding can have strong impacts on study results, mediated by the placebo effect.
     
  • Outcome drift, where researchers, with their data already in front of them, change how results are calculated in order to secure a significant finding.
     
  • Overeager subgroup analysis, in which researchers run so many different tests on the data that some are likely to come out significant by chance alone.
     
  • Publication bias, where studies showing a positive result are more likely to be published than studies supporting the null hypothesis. Studies with apparently significant findings may receive widespread attention, while the many replications showing negative results go unreported.
     
  • Errors in statistical procedure, such as one exhibited by roughly 50% of neuroscience studies: assuming that two effects must differ because one result is statistically significant and the other is not.
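The sample-size concern in the first point can be made concrete with a quick power calculation. The sketch below is ours for illustration, not from any library: it uses a normal approximation to a two-sample test with hypothetical numbers (a standardized effect size of 0.3, two-sided alpha of 0.05), and shows how small studies are often unable to detect a modest real effect.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(d, n, z_crit=1.96):
    """Approximate power of a two-sample z-test for a standardized
    effect size d with n subjects per arm (two-sided alpha = 0.05)."""
    return 1 - norm_cdf(z_crit - d * math.sqrt(n / 2))

# With a modest real effect (d = 0.3), small studies usually miss it.
for n in (10, 30, 100, 400):
    print(f"n = {n:3d} per arm -> power = {power(0.3, n):.2f}")
```

With 10 subjects per arm, power is around 10%: nine times out of ten such a study would fail to detect a real effect of this size, and the rare "successes" would tend to overstate it.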
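The subgroup-analysis point is simple arithmetic: if each test has a 5% false-positive rate, the chance that at least one of k tests on pure-noise data comes out "significant" grows quickly with k. A minimal sketch, assuming the tests are independent:

```python
def false_positive_rate(k, alpha=0.05):
    """Chance that at least one of k independent tests on null data
    is 'significant' at level alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> P(at least one false positive) = "
          f"{false_positive_rate(k):.2f}")
```

By 20 subgroup tests, the chance of at least one spurious "finding" is about 64%, which is why unplanned subgroup results need independent replication before they are taken seriously.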
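The final point, comparing significance levels instead of testing the difference, can be illustrated with hypothetical numbers: one estimate is individually significant (z = 2.5) and another is not (z = 1.0), yet the difference between them is not itself significant.

```python
import math

# Hypothetical numbers: two experiments estimate the same effect.
est_a, se_a = 25.0, 10.0   # z = 2.5 -> "significant"
est_b, se_b = 10.0, 10.0   # z = 1.0 -> "not significant"

# The naive reading is that the two effects differ. The correct
# check is to test the difference between the estimates directly.
z_diff = (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)
print(round(z_diff, 2))  # 1.06: the difference is not significant
```

Because the difference itself is well short of significance, concluding that the two effects differ would be exactly the procedural error described above.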