Statistically significant, but no difference


ChessMaster3000

Is there such a situation where your P value is less than 0.05, and it is shown that there is no difference between the intervention and control groups? In other words, can you ever find no association and have THAT finding be statistically significant?

UW argues that if p < 0.05, then the confidence interval can't cross 1. But if there were a statistically significant lack of an association, wouldn't the CI cross 1?

 
I'm not a stats expert, but I think the way you have to define the null and alternative hypotheses means this won't happen. That is to say, your alternative (experimental) hypothesis is always that there is an association/difference, while the null hypothesis is the opposite. If you achieve statistical significance and reject the null hypothesis, you are by definition concluding that there is an association/difference. If you don't reach statistical significance, then you have failed to demonstrate an association, which is what you seem to be after here.



inb4 someone who actually has a clue embarrasses me...
 
I think watching a video or reading a bit more about p-values and statistical significance to get the general idea can help. UWorld and qbanks generally don't offer the best value for biostats, IMO.
 
By definition, if you have p < 0.05, there is a statistically significant difference between the control and treatment groups. However, there is still a chance you've made a type I error (a false positive), which is addressed by replicating the study.
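Just to make the false-positive idea concrete with a toy simulation (purely illustrative, assuming Python with numpy/scipy; the numbers have nothing to do with any real study): when the null hypothesis is actually true, roughly 5% of trials will still come out with p < 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 10,000 "trials" in which treatment and control truly have the same mean
false_positives = 0
for _ in range(10_000):
    control = rng.normal(0, 1, size=30)
    treated = rng.normal(0, 1, size=30)  # same distribution: no real effect
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        false_positives += 1

print(f"false-positive rate ~ {false_positives / 10_000:.3f}")  # close to 0.05, i.e. alpha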
 
That's if you set your null to there being a difference?
 
Let's say you define your hypothesis as there being NO difference. Then what? A p value < 0.05 indicates you have no difference, and p > 0.05 means you do? (I'm not saying this would happen on the test, but I think it would help me understand the concepts better.) Bottom line: I am trying to ascertain how one can have a CI that crosses 1 while still being statistically significant in some way.
 
You can have something that is statistically significant (p < 0.05) but not clinically significant, i.e., your results are not likely to have occurred by chance, but the actual difference is minimal.
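To put rough numbers on that (a hypothetical sketch in Python, not data from any real trial): with a huge sample, even a clinically meaningless 0.5 mmHg drop in blood pressure can come out with p < 0.05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trial: the drug lowers systolic BP by only 0.5 mmHg on average
control = rng.normal(loc=140.0, scale=15.0, size=50_000)
treated = rng.normal(loc=139.5, scale=15.0, size=50_000)

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference = {treated.mean() - control.mean():.2f} mmHg, p = {p_value:.2g}")
# p is typically far below 0.05 here, yet a 0.5 mmHg drop is clinically trivial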
 
If p < 0.05 then you can reasonably reject the stated null hypothesis. Your null value is set to 1 (in most cases) because you are hypothesising there is NO difference, and therefore your incidence rate ratio is 1, meaning both groups are the same.

Therefore, if your confidence interval crosses 1, it implies that you cannot reject the null hypothesis, and the data are NOT statistically significant, with p > 0.05.


(P-value < 0.05: can reasonably reject the null hypothesis.
P-value > 0.05: the data are consistent with the null hypothesis, so you cannot reject it.)
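A minimal worked sketch of that equivalence for a ratio measure (made-up 2x2 counts and a Wald normal approximation; not from UWorld or anywhere else):

import math
from scipy.stats import norm

# Hypothetical 2x2 table: rows = exposed/unexposed, columns = cases/non-cases
a, b = 30, 70   # exposed:   30 cases, 70 non-cases
c, d = 15, 85   # unexposed: 15 cases, 85 non-cases

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

ci_low = math.exp(math.log(or_hat) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_hat) + 1.96 * se_log_or)

# Two-sided p-value for H0: OR = 1, using the same normal approximation
z = math.log(or_hat) / se_log_or
p = 2 * norm.sf(abs(z))

print(f"OR = {or_hat:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f}), p = {p:.3f}")
# Because the CI and the p-value come from the same approximation, the CI excludes 1
# exactly when p < 0.05, and crosses 1 exactly when p > 0.05.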
 
All this "can't cross 1" talk sounds more like inferences being made on case-control and cohort studies, not clinical trials. If we're talking about a clinical trial we would be talking in terms of mean differences between treatment arms, and not odds ratios and relative risks. Correct?

That's how I'm interpreting what is written in FA 2014 pg 57 (FA 2013 pg 55).
 
All this "can't cross 1" talk sounds more like inferences being made on case-control and cohort studies, not clinical trials. If we're talking about a clinical trial we would be talking in terms of mean differences between treatment arms, and not odds ratios and relative risks. Correct?

That's how I'm interpreting what is written in FA 2014 pg 57 (FA 2013 pg 55).

That is correct. For a mean difference or percentage-point difference, the null hypothesis value is 0; for ratio measures (odds ratio, relative risk, etc.), the null hypothesis value is 1.
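A small sketch of the two conventions with invented summary numbers (illustration only): the CI for a difference is checked against 0, while the CI for a ratio is checked against 1.

import math

# Hypothetical trial: mean symptom score in each arm and the SE of the difference
mean_tx, mean_ctrl, se_diff = 8.2, 9.0, 0.5

diff = mean_tx - mean_ctrl
diff_lo, diff_hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(f"mean difference {diff:.1f}, 95% CI ({diff_lo:.2f}, {diff_hi:.2f})")  # crosses 0 -> p > 0.05

# Hypothetical relative risk with its standard error on the log scale
rr, se_log_rr = 0.80, 0.12
rr_lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
rr_hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"relative risk {rr:.2f}, 95% CI ({rr_lo:.2f}, {rr_hi:.2f})")  # crosses 1 -> p > 0.05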
 
So if we are indeed talking about a clinical trial with a drug intervention, then why would there be discussion about "CIs can't cross 1"?
 
That was the original question the poster asked. It appears that their interpretation of the p-value is the wrong way round.
 
That's if you set your null to there being a difference?

I didn't even know there was another way to state the null hypothesis lol. I haven't started studying any biostats though so I don't know anything other than a stats course I took 12 years ago.
 
I just think it's easier to interpret these things the way FA has it written. Kaplan also does a fairly good job with Behavioral Science. OP might be making things unnecessarily complicated, and I also think that I misinterpreted what kind of study s/he was talking about. As long as you know what type of study you're dealing with and keep your alternative and null hypotheses straight, you should be able to make accurate statistical inferences with CIs. It doesn't have to be super complicated.
 
I think there's a lot of misinformation in this thread, and a lot of confusion about biostatistics overall.

Whenever you design a study, the hypothesis you test against is the null hypothesis, meaning that the "default" is that there is no difference or no correlation between X and Y.

For instance, if you are testing a new cancer drug in a clinical trial, your null hypothesis must be that there is no difference in giving this drug or a placebo.
If there is a difference, you need to make sure it is not due to chance, which is where the P value comes in.
If I had two groups each with one patient (one got the drug and one didn't) and the patient with the drug lived and the other died, the P value would be terrible because the study is underpowered.
If you had a large trial you could have a good P value even if there is a small difference because when you have a larger population, even smaller changes in outcome are less likely to be due to chance.
If the 95% confidence interval for the odds ratio or relative risk crosses unity (1), then by definition the P value must be over 0.05 and you fail to reject the null hypothesis that there is no difference.

There's really no such thing as a "statistically significant lack of an association" in the way that you're thinking of it. The closest thing you can get is a noninferiority trial.
For instance, patients with DVTs need an anticoagulant medication (such as warfarin). If a new drug company wants to make a new drug, "drug B", you could not ethically do a randomized placebo-controlled trial, because warfarin has been demonstrated to be beneficial in patients with DVTs; a placebo would be unethical because you would be withholding a valuable treatment from the placebo group. However, you could do a superiority trial (where you would look for a statistically significant difference in mortality or rates of PE and so forth, and if you found a difference where the 95% CI doesn't cross 1 you've found a statistically significant change), or you could do a noninferiority trial.

In a noninferiority trial you'd set your noninferiority margin first. For instance, say warfarin causes a bleeding rate per year of, let's say, 2%. I want to make sure drug B is not inferior to warfarin (i.e., does not cause more bleeds), so I will design a study with enough patients (adequately powered) to detect a 0.5% difference in bleeding at a P value of 0.05. In that case, when I do the trial, if there's less than a 0.5% difference in bleeding (e.g., warfarin was 2% and drug B was 2.1%), then the P value between those two values would be over 0.05 and you could say that there's less than a 0.5% difference in bleeding between the two drugs (as long as the study was well designed, etc.).

So to answer the question by the OP, if the 95% CI crosses 1, the P value must by definition be higher than 0.05 and there is no statistically significant difference between "X" and "Y" (e.g., warfarin and drug B). That doesn't mean it's not important though -- because in the noninferiority trial depicted above, it's important because you have shown that drug B doesn't cause more bleeding than warfarin (they have similar rates of bleeding). However, your P value will still be over 0.05, and the power of your study will tell you how small of a difference you can expect to find.
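To make the noninferiority arithmetic concrete, here is a rough sketch using the hypothetical 2% vs 2.1% bleeding rates and the 0.5% margin from above (simple normal approximation and invented arm sizes; a real analysis would be more careful):

import math

# Hypothetical arm sizes and observed yearly bleeding rates
n_warf, n_new = 12_000, 12_000
p_warf, p_new = 0.020, 0.021

diff = p_new - p_warf                        # risk difference (drug B minus warfarin)
se = math.sqrt(p_warf * (1 - p_warf) / n_warf + p_new * (1 - p_new) / n_new)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

margin = 0.005                               # prespecified noninferiority margin (0.5%)
print(f"risk difference {diff:.4f}, 95% CI ({ci_low:.4f}, {ci_high:.4f})")
print("noninferior" if ci_high < margin else "noninferiority not shown")
# With these numbers the CI includes 0 (so no significant difference, p > 0.05),
# but its upper bound stays below the 0.5% margin, which is the noninferiority claim.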
 
This makes sense to me except for the noninferiority part. If you did in fact find that there is a less than 0.5% change in bleeding, why would the P value be >0.05? How could you say there is a less than 0.5% difference if your p value was over 0.05?
 
If there's a less than 0.5% change in bleeding, and the numbers are similar, there is no statistically significant difference between them, and the p value is greater than 0.05. We can only say there's no difference because the p value is higher than 0.05. If it were smaller, it would mean that the numbers were different and the result was statistically significant. In the noninferiority study, because you powered it to detect such a small difference and did not find it, the result is important, but because the values are not different from one another the p value is still going to be over 0.05.

So to answer your core question: "can you ever find no association and have THAT finding be statistically significant" -- the answer, in the context that you are asking it, is no. If you find no association it can be an important finding, but the p value will be over 0.05. If you find no difference between the two numbers, it's either because the study was underpowered to detect a small difference and you missed it, the study design was poor, or, if the design was good and adequately powered for your question, there simply is no statistically significant difference between the two numbers.
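A rough sketch of the underpowering point with made-up event rates (assuming Python/scipy; purely illustrative): the same 5-percentage-point difference is invisible with 50 patients per arm and clearly significant with 5,000 per arm.

import math
from scipy.stats import norm

def two_prop_p(p1, p2, n):
    """Two-sided p-value for a two-proportion z-test with n patients per arm (pooled SE)."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    return 2 * norm.sf(abs((p1 - p2) / se))

# Hypothetical observed event rates: 30% vs 35%
for n in (50, 500, 5_000):
    print(f"n = {n:>5} per arm -> p = {two_prop_p(0.30, 0.35, n):.4f}")
# Same effect size every time; only the sample size (and hence the power) changes.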
 
Gotcha. So the "noninferiority" part just applies to how you power it, and not the alpha error or the p-value per se?
 