For those scientists or statistically-inclined among us...

I don't really understand the point of an arbitrary line dividing significant from non-significant to begin with. Wouldn't it make more sense to just report the value and let it speak for itself wherever it falls on the continuum from suggestive to strong? It's not like p-values are a difficult concept for career academics; there's no need for a dumbed-down binary categorization. Leave it up to the researchers (and peer reviewers) whether a result is worth publishing, and just put the values front and center.
 

I think one value of having a significance threshold is for graphical representations of the data - just from glancing at a figure, you can't always tell if something is meaningful. And let's be honest - a huge portion of people looking at scientific articles start by just glancing through the figures for the key details. That's where the neat little asterisks at the top of a bar chart come in handy.
 
So put p=.0xx above the bar instead! Convenient shorthand isn't a good enough reason to make up arbitrary binary boundaries. Treating p = .055 and p = .045 as categorically different just makes no sense to me, and moving the cutoff so it's .0055 vs. .0045 doesn't fix that.
 

There are researchers who feel that way and refuse to dismiss a result just because the p-value is 0.056, or to draw conclusions just because it's 0.043; my supervisor in undergrad was one. Some are already moving away from it, analyzing their data by strength of correlation and avoiding words like significant or non-significant. Things will get a bit messier and there will be more disagreement, but the world is a messy and disagreeable place.
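For what it's worth, reporting the strength of an association directly is straightforward. A minimal sketch with made-up numbers (the dose/response data here are purely illustrative, not from any real study):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical dose-response measurements
dose = [1, 2, 3, 4, 5, 6]
resp = [2.1, 2.0, 3.4, 3.1, 4.8, 5.2]

r = pearson_r(dose, resp)
# Report the strength of the correlation itself, no significant/non-significant label
print(f"r = {r:.2f}")
```

The reader sees a strong positive correlation and can judge for themselves, without anyone having to declare which side of a threshold it falls on.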
 

My PI was the same way. In my first presentation to the group in undergrad, I put asterisks for "significantly different" on a bar graph and got grilled for it. The graph showed the results of simulations that measured the quantity hundreds of thousands of times, so even the tiniest, most minute difference was going to be hugely significant; but unlike in particle physics, that wouldn't tell us anything substantially new about the changes in the simulation. Easy to abuse the p!
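The point above is easy to demonstrate numerically: with a huge number of samples, even a negligible true difference produces a tiny p-value, while the effect size stays trivially small. A minimal sketch using simulated data and a plain two-sample z-test (nothing here is from the actual simulations described above):

```python
import math
import random

def two_sample_z(a, b):
    """Two-sample z-test (fine for large n) plus Cohen's d effect size."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Two-sided p-value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    # Cohen's d: mean difference in units of the pooled standard deviation
    d = (ma - mb) / math.sqrt((va + vb) / 2)
    return z, p, d

random.seed(0)
n = 200_000  # "hundreds of thousands" of measurements per group
a = [random.gauss(0.00, 1) for _ in range(n)]
b = [random.gauss(0.02, 1) for _ in range(n)]  # tiny true difference

z, p, d = two_sample_z(a, b)
print(f"p = {p:.2g}, Cohen's d = {d:.3f}")
```

The p-value comes out far below any conventional threshold, yet Cohen's d is on the order of 0.02, an effect nobody would call substantively meaningful.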
 