P-value from destroyer


torobcheh21

So for number 3 on test 12 in the Math Destroyer (2012), some of the answer choices talk about a 1% significance level. Are there situations that call for a 1% significance level? I always assumed it was 5%, but that's something I memorized and never really understood.

And when they give you two percentages, how do you know which one to use for your p-value? Like in number 2. I saw in problem 3 they added the percentages, but again I don't understand why.

And I just don't understand the thought process in problem 4. Does anyone understand that one well?

If someone could clear this up I'd be grateful!
 
If the p-value is less than .001, then it would be more relevant to cite the 1% significance level than the 5% level, because it lets you reject the null hypothesis of chance with even more confidence. The whole point of a p-value is to show whether your data supports a claim (e.g., that a drug has a real lowering effect) rather than chance. I had a hard time with it too, but I had no questions on it lol.

So, if you flip a coin 14 times (from q vault) and you score 13 heads, the one-tailed p-value is the probability of getting at least 13 heads:

p = [14!/(13!·1!) + 14!/(14!·0!)] / 2^14 = (14 + 1)/16,384 ≈ 0.0009

which would be statistically significant rather than due to chance (maybe the coin is weighted). If you scored 7 heads out of 10, then the p-value is the probability of at least 7 heads:

p = [10!/(7!·3!) + 10!/(8!·2!) + 10!/(9!·1!) + 10!/(10!·0!)] / 2^10 = (120 + 45 + 10 + 1)/1,024 ≈ 0.17

which is not significant. In both cases you add up the probability of the count you observed plus every count more extreme.
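If you want to sanity-check those sums, here is a minimal Python sketch of the same one-tailed calculation (the function name coin_pvalue is just made up for illustration):

from math import comb

def coin_pvalue(n_flips, min_heads):
    """One-tailed p-value for a fair coin: P(at least min_heads in n_flips).

    Adds up the binomial probabilities C(n, i) / 2^n for every count
    at least as extreme as the one observed.
    """
    return sum(comb(n_flips, i) for i in range(min_heads, n_flips + 1)) / 2**n_flips

print(coin_pvalue(14, 13))  # 15/16384  ~ 0.0009 -> significant even at the 1% level
print(coin_pvalue(10, 7))   # 176/1024  ~ 0.17   -> not significant at the 5% level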
 

Significance level, sometimes called alpha, is defined a priori. It's an arbitrary threshold that you decide is appropriate for your experiment before you start. Most of the time it is set at the 5% level, but you can set it higher or lower. When you set it at a lower value (e.g. 1%), you find fewer things to be significant, and by definition you fail to reject the null hypothesis more often. The results that do clear a 1% threshold tend to be really, really different from the null/average.
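To make the decision rule concrete, here's a small Python sketch (verdict is a hypothetical helper; the first and last p-values are from the coin example above, and 0.03 is a made-up borderline result):

# Decision rule: reject the null hypothesis when p < alpha,
# where alpha was fixed before the experiment began.
def verdict(p, alpha):
    return "reject the null (significant)" if p < alpha else "fail to reject the null"

for p in (0.0009, 0.03, 0.17):    # coin examples plus a hypothetical borderline 0.03
    for alpha in (0.05, 0.01):    # the usual 5% level and the stricter 1% level
        print(f"p={p}, alpha={alpha}: {verdict(p, alpha)}")

Note that p = 0.03 is significant at the 5% level but not at the 1% level, which is exactly the sense in which a lower alpha makes fewer things significant.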

Some people think that p-values are interpreted incorrectly because we reduce a quantitative value to a binary decision (i.e., yes/no regarding the null hypothesis). Instead, these people believe that we should interpret the p-value as an index of compatibility between the null hypothesis and the data.
 