That is still conflating an argument about:
1) the prevailing statistical methods used in null hypothesis testing
WITH
2) the general concept of the null hypothesis.
Let me try an analogy. To my understanding, the early vitamin D literature was skewed by some mathematical errors, which were later corrected. That is a statistical error. If I said that vitamin D is basically a non-issue and cited that error as my reason, you would probably be frustrated that I was conflating the importance of a concept with a real but separate statistical-methodology problem. That is the conflation I am trying to point out.
Right. That paper that I posted actually takes issue with number 2 much more directly than with number 1, although it does engage with the latter briefly.
I genuinely don't understand this question, sorry; I'll have to ask for clarification as to how it connects to what I said.
Okay, follow-up:
1) Define which theoretical setting the research question falls within. Is this set theory, model theory, etc.? This changes everything. Closed set? Then Friedman would agree with you, but you'd have a lot of other methodological problems.
2) Is the actual argument that the CONCEPT of the null hypothesis is wrong? If so, then why was Gelman cited?
The concept is not wrong in the formal mathematical sense within frequentist statistics. Rather, given the structure of the world and the nature of the phenomena and measurements typically studied in the biomedical and social sciences, it is an irrelevant, misleading, and unhelpful concept.
Genetics might be an exception: among other things, there might plausibly be genuinely zero effect of a given SNP on a particular protein's expression or some other process of interest.
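To make that concrete, here is a minimal simulation sketch (mine, not from the paper; the effect size, sample size, and use of scipy are all illustrative assumptions). Outside of cases like the genetics one, the true effect is almost never exactly zero, so with enough data a test of the point null rejects essentially every time, which tells you nothing about whether the effect matters.

```python
# Illustrative sketch, not from the thread: a tiny but nonzero true
# effect plus a large sample makes the point null reject essentially
# always, so "significant" carries no information about importance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.02   # trivially small, but not exactly zero (assumed)
n = 100_000          # registry-scale sample size (assumed)

rejections = 0
trials = 200
for _ in range(trials):
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)  # test H0: mean == 0
    rejections += p < 0.05

print(f"rejected the point null in {rejections}/{trials} runs")
```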
3) Is the actual argument that the prevailing statistical methods in null hypothesis significance testing are wrong? If so, then reconcile the stated opinion about the work cited with Gelman's statements that the prevailing methods should be included, but not retain the same importance.
I think you want to look at that paper again. They state in several sections, in several different ways, that much of the time statistical analysis probably shouldn't even produce p-values. It is not a proposal to slightly tweak existing methods. They also say that using any statistical threshold to decide, dichotomously, whether something is or is not significant has the same problems.
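The threshold point can be shown with a toy example (the numbers are invented and scipy is assumed): two studies with nearly identical estimates land on opposite sides of p = 0.05, so a dichotomous reading calls one a "finding" and the other a "null result" even though they barely differ.

```python
# Toy sketch of the dichotomization problem (numbers invented):
# nearly identical estimates straddle the 0.05 threshold.
from scipy import stats

se = 1.0
for name, est in [("study A", 2.00), ("study B", 1.92)]:
    z = est / se
    p = 2 * stats.norm.sf(abs(z))     # two-sided p-value
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: estimate={est:.2f}, p={p:.3f} -> {verdict}")
```

Note that the difference between the two estimates (0.08, against a standard error of about 1.4 for the difference) is nowhere near significant itself, which is exactly the Gelman point that the difference between "significant" and "not significant" is not itself significant.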
It's not null hypothesis significance testing if you are not making a yes/no determination of significance. If you are comparing effect sizes, you do not need NHST. If you are assessing fit with a multilevel model, you do not need NHST. If you are postulating a causal graph à la Judea Pearl, you do not need NHST.
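As one sketch of the first alternative (my example, with invented data and an assumed normal-approximation interval): report the estimated effect and its uncertainty directly, with no significance verdict anywhere in the analysis.

```python
# Sketch of reporting an effect size with uncertainty instead of a
# yes/no NHST verdict (data and group sizes are invented).
import numpy as np

rng = np.random.default_rng(1)
treated = rng.normal(0.30, 1.0, size=400)   # hypothetical outcomes
control = rng.normal(0.00, 1.0, size=400)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # approx. 95% interval

print(f"estimated difference: {diff:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```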
4) Or is this all just a debate style? I'm fine if it is.
No, this is substantive.