The fact is that most scientists have only a rudimentary understanding of statistics, typically obtained from a few undergraduate courses taken en route to a scientific career, yet statistics underpins the critical determination of the "statistical significance" of scientific data and the validity of scientific conclusions. Most scientists do not consult statisticians to validate and confirm their statistical conclusions, which inevitably leads to false assumptions and conclusions based upon such simplistic analyses. My own field of science suffers from over-reliance on p-values: data with a p-value below 0.05 are arbitrarily considered "statistically significant" or "true," while data with a p-value above 0.05 are deemed "insignificant" or "false," and thus likely unpublishable. A 'skilled' scientist knows well how to play the game: torturing the data, throwing out outliers, and adding assumptions until the p-value drops to a publishable, "statistically significant" 0.05 or less.
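To make the "throwing out outliers" game concrete, here is a minimal sketch in Python using an exact one-sided sign test (a deliberately simple test chosen for illustration; the data and scenario are entirely hypothetical, not drawn from any real study). It shows how quietly discarding two inconvenient data points can move the same experiment from "insignificant" to "significant" at the 0.05 threshold:

```python
from math import comb

def sign_test_p(n_positive: int, n_total: int) -> float:
    """Exact one-sided sign-test p-value: the probability of seeing
    n_positive or more positive differences out of n_total if the
    true effect were zero (each sign a fair coin flip)."""
    return sum(comb(n_total, k) for k in range(n_positive, n_total + 1)) / 2 ** n_total

# Hypothetical experiment: 15 paired measurements, 11 of which moved
# in the "desired" direction and 4 of which did not.
p_all = sign_test_p(11, 15)
print(f"all 15 points kept:     p = {p_all:.4f}")      # ~0.059, just above 0.05

# Quietly discard two of the inconvenient points as "outliers" and
# the same test now crosses the magic threshold.
p_trimmed = sign_test_p(11, 13)
print(f"two 'outliers' dropped: p = {p_trimmed:.4f}")  # ~0.011, well below 0.05
```

Nothing about the underlying phenomenon changed between the two calls; only the analyst's discretion did, which is exactly why an unvalidated p < 0.05 says little on its own.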
A prominent example is Michael Mann's infamous "hockey stick" global temperature reconstruction, arguably the most widely debunked piece of research in the history of science, criticized by both Republican-appointed statistical experts (Wegman et al.) and Democrat-appointed statistical experts (North et al.). Both Congressional expert evaluations of Mann's hockey stick, in addition to identifying numerous gross statistical errors, faulted Mann for not consulting any statisticians prior to publication of his paper.
Sadly, the article admits that arbitrary thresholds for "statistically significant" p-values, which vary widely between scientific fields, are widely misused and misunderstood by scientists and are "out of alignment" with current statistical reasoning, concluding, "let us hope that the next century will see much progress in the inferential methods of science as in its substance."
Related: Is much of climate science useless?