An article in *Science* (June 2, 2016) addresses a long-standing problem pervasive in virtually all areas of science: statistical and scientific reasoning are often not aligned, and the "misunderstanding and misuse of statistical significance [by scientists] impedes science," according to the AAAS.

The fact is that most scientists have only a rudimentary understanding of statistics, typically obtained from a few undergraduate statistics courses taken en route to a scientific career, yet statistics underpins the critical determination of the "statistical significance" of scientific data and the validity of scientific conclusions. Most scientists do not consult statisticians to validate and confirm their statistical conclusions, which inevitably leads to false assumptions and conclusions based upon such simplistic analyses. My own field of science suffers from over-reliance on p-values, arbitrarily considering data with a p-value of less than 0.05 to be "statistically significant" or "true," and data with a p-value greater than 0.05 to be "insignificant" or "false," and thus likely unpublishable. A 'skilled' scientist knows well how to play the game of torturing the data, throwing out outliers, adding assumptions, etc., to lower the p-value to a publishable and "true" "statistically significant" 0.05 or less.
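To see why an arbitrary 0.05 cutoff invites trouble, here is a minimal simulation (my own illustration, not from the *Science* article): comparing two samples drawn from the *same* distribution still clears the p < 0.05 bar about one time in twenty, so a scientist running many comparisons will "find" significance in pure noise.

```python
# Illustration (not from the article): how often pure noise passes p < 0.05.
import random
import math

def two_sample_p(a, b):
    """Approximate two-sample test p-value using a normal approximation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    z = (ma - mb) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
trials = 1000
false_hits = 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]  # same distribution: no real effect
    if two_sample_p(a, b) < 0.05:
        false_hits += 1

print(f"'significant' results from pure noise: {false_hits}/{trials}")
```

Roughly 5% of the comparisons come out "significant" despite there being no effect at all, which is exactly what the 0.05 threshold means; run twenty analyses of the same dataset and one spurious "discovery" is expected for free.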

A prominent example is Michael Mann's infamous "hockey stick" global temperature reconstruction, arguably the most widely debunked piece of research in the history of science, challenged by both the Republicans' statistical experts (Wegman et al.) and the Democrats' statistical experts (North et al.). Both Congressional expert evaluations of Mann's hockey stick, in addition to identifying numerous gross statistical errors, faulted Mann for not consulting any statisticians prior to publication of his paper.

Sadly, the article admits that arbitrary thresholds for "statistically significant" p-values, which vary widely between different scientific fields, are widely misused and misunderstood by scientists and are "out of alignment" with current statistical reasoning, concluding, "let us hope that the next century will see as much progress in the inferential methods of science as in its substance."

Related: Is much of climate science useless?

https://judithcurry.com/2016/07/06/is-much-of-current-climate-research-useless/

Hmmmp.... a lame excuse - 40 years ago, I repeat, 40 yrs ago, I studied science (Agriculture - BSc) and part of the first-year course was STATISTICS.. so what's happened to our education system?

Much of the research in ALL of the earth and life sciences is useless, because the general paradigm within which all these scientists are working is false (indeed, so obviously false as to be wrong-headed, which means the scientists refuse to admit TO THEMSELVES that it even COULD be wrong, and so the underlying problem cannot even be addressed). And the problem with "statistical reasoning" is that statisticians think it can guide physical reasoning, when the exact opposite is generally true. This is precisely where "hard" science degenerates into "soft" science, and the value of objective truth flies out the window, in favor of free (and easy) speculation. "Cause" and "effect" are not terms to be thrown around and too often reversed by "statistical reasoning".

Amen!

A big part of the problem is using stats as a form of confirmation bias.

https://rclutz.wordpress.com/2016/05/22/beliefs-and-uncertainty-a-beyesian-primer/
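A toy Bayesian calculation (my own illustration, not taken from the linked primer) makes the confirmation-bias point concrete: if only a modest fraction of tested hypotheses are actually true, a p < 0.05 "hit" is far less convincing than the 95% figure suggests.

```python
# Toy Bayesian calculation (illustrative assumptions, not from the primer):
# how believable is a p < 0.05 result, given a base rate of true hypotheses?
prior_true = 0.10   # assumed: 1 in 10 tested hypotheses is actually true
power = 0.80        # assumed: chance a real effect reaches p < 0.05
alpha = 0.05        # false-positive rate when there is no effect

# Bayes' rule: P(true | significant)
p_sig = prior_true * power + (1 - prior_true) * alpha
posterior = prior_true * power / p_sig
print(f"P(hypothesis true | p < 0.05) = {posterior:.2f}")
```

Under these assumed numbers the posterior is 0.64, i.e. roughly one in three "significant" findings is still false, and the figure gets worse as the prior drops or as data-torturing inflates the effective alpha.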

A more recent example of Mann's lack of rudimentary statistical understanding is his paper published earlier this year in the journal Nature -- supposedly the paragon of scientific inquiry. A statistician (W. Briggs) debunked the fundamental errors in the paper within a day or two of its online release. And yet it got past peer review anyway.

http://wmbriggs.com/post/17849/

The Four Errors in Mann et al's "The Likelihood of Recent Record Warmth"

Michael E. Mann and four others published the peer-reviewed paper “The Likelihood of Recent Record Warmth” in Nature: Scientific Reports (DOI: 10.1038/srep19831). I shall call the authors of this paper “Mann” for ease. Mann concludes (emphasis original):

"We find that individual record years and the observed runs of record-setting temperatures were extremely unlikely to have occurred in the absence of human-caused climate change, though not nearly as unlikely as press reports have suggested. These same record temperatures were, by contrast, quite likely to have occurred in the presence of anthropogenic climate forcing."

This is confused and, in part, in error, as I show below. I am anxious people understand that Mann’s errors are in no way unique or rare; indeed, they are banal and ubiquitous. I therefore hope this article serves as a primer in how not to analyze time series.
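As a generic illustration of the time-series trap (my own sketch, not Briggs' analysis of the Mann paper): the standard regression test for a trend assumes independent errors, so fitting a trend line to a highly autocorrelated series, here a pure random walk with no trend at all, declares "significance" most of the time.

```python
# Illustration (not Briggs' analysis): spurious "significant" trends in
# random walks, because the usual OLS t-test assumes independent errors.
import math
import random

def trend_t_stat(y):
    """OLS slope t-statistic for y against time, assuming i.i.d. errors."""
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    sxx = sum((ti - mt) ** 2 for ti in t)
    slope = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / sxx
    resid = [yi - my - slope * (ti - mt) for ti, yi in zip(t, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return slope / math.sqrt(s2 / sxx)

random.seed(2)
trials = 200
significant = 0
for _ in range(trials):
    walk, x = [], 0.0
    for _ in range(100):
        x += random.gauss(0, 1)   # pure accumulated noise, no real trend
        walk.append(x)
    if abs(trend_t_stat(walk)) > 1.98:  # ~0.05 two-sided cutoff for n=100
        significant += 1

print(f"random walks with a 'significant' trend: {significant}/{trials}")
```

Well over half of the trend-free walks test "significant", which is why naive significance testing on autocorrelated series like temperature records is a textbook example of how not to analyze time series.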

Starting in biophysics 40 years ago (after a diploma in nuclear physics), I had my first encounter with statistical illiteracy among academic colleagues. A freshly promoted "Dr. rer. nat." (PhD in natural sciences, i.e., life sciences), right AFTER his final oral exam, asked me to explain to him the difference between “standard deviation” and “error of the mean”. Working the rest of my active career in a very multidisciplinary department, this was not the last time I had to explain such basics of statistics to life-science PhD students, post-docs and senior scientists alike. Actually, it was the rule and not the exception, not to speak of more ‘advanced’ concepts - more often than not to no avail.
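For readers in the same position as that young PhD, the distinction can be shown in a few lines (a minimal sketch of the standard definitions): the standard deviation describes the spread of individual measurements and does not shrink with sample size, while the standard error of the mean describes the uncertainty of the average and falls roughly as 1/sqrt(n).

```python
# Sketch of the standard definitions: sample SD vs. standard error of the mean.
import math
import random

def sd_and_sem(xs):
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample SD
    sem = sd / math.sqrt(n)  # standard error of the mean
    return sd, sem

random.seed(0)
small = [random.gauss(10, 2) for _ in range(10)]     # 10 measurements
large = [random.gauss(10, 2) for _ in range(1000)]   # 1000 measurements

sd_s, sem_s = sd_and_sem(small)
sd_l, sem_l = sd_and_sem(large)
# SD stays near the population value (~2) regardless of n;
# SEM shrinks roughly as 1/sqrt(n).
print(f"n=10:   SD={sd_s:.2f}  SEM={sem_s:.2f}")
print(f"n=1000: SD={sd_l:.2f}  SEM={sem_l:.2f}")
```

Quoting the SD as an "error bar" on a mean, or the SEM as the scatter of individual data points, are the two halves of the confusion described above.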

Rainer Facius

Great article with interesting insights, thanks for sharing

There's a typo, I think, in paragraph two: the word 'inenviably' should be 'inevitably.'

ReplyDeleteThank you, typo corrected.
