NS exam 1, BB question 18


This question required evaluating the two graphs in the passage (listed above) and determining whether the difference between the two variables was statistically significant. How can you tell this? Are the bars along the top of the graph error bars? I'm not familiar with this type of formatting.

I would have posted the images here, but I can't figure out how to do it without my Next Step username showing at the top of them. I tried cropping them, but for some reason they kept uncropping when I saved and uploaded them.

 
@Next Step Tutor Could you explain how to examine statistical significance on the graphs for this problem? I always thought anything with a p-value < 0.05 was considered statistically significant.
 
In general, if the error bars for two groups do not overlap, then the difference between those groups is statistically significant.
 
Thanks
Here are the images I was referencing.
I'm used to the traditional error bars attached to the tops of the bars in a bar graph, but I'm not sure how to make sense of these.


See how the error bars are listed horizontally with asterisks noting the p value? I'm confused, because the reasoning for why it wasn't choice D (which is the one I chose) was that "Degranulation relates to Figure 2, so we must look there to evaluate this statement. The passage implies that the extent of degranulation directly correlates to the amount of ECP found in the supernatant. For choice D to be true, all samples that were co-cultured with NK cells would need to display higher levels of supernatant ECP than samples containing Eos cells alone (those with a NK-to-Eos ratio of 0:1). However, the ECP levels for the 1:1 NK-to-Eos group are not higher to a statistically significant extent than the results for the 0:1 group. Thus, co-culture with NK cells did not always significantly increase degranulation, as measured by ECP level."

Looking at pic 2, they are saying that the difference between the 0:1 ratio and 1:1 ratio was not statistically significant, and I don't see how you can tell.


They said that B was correct based on pic 1

"Figure 1 shows eosinophil (Eos) activation as measured by expression of CD69. Thus, we should be able to use Figure 1 to evaluate the accuracy of choice B. The figure shows no statistically significant difference between samples with a 1:1 NK-to-Eos ratio and those with a 5:1 ratio, so choice B is supported by the experimental results."

I'm not sure how you can tell that the 1:1 and 5:1 groups are not significantly different. I guess it has to do with the error bars, which I'm having difficulty understanding.

[Attachment: pic 2.png]
 

Attachments

  • pic 1.png
Ah, I see your confusion now. Horizontal bars like those mean the authors are making multiple comparisons: each condition is compared against several others, and each comparison generates its own p value because each asks a separate question. For instance, "Is 0:1 different from 5:1?" Yes, with p < 0.05. "Is 0:1 different from 10:1?" Yes, with p < 0.001. Do you see what I mean? The ends of each horizontal bar mark which two conditions are being compared. For conditions that don't have a horizontal bar connecting them, either 1) the comparison was not significant (p > 0.05) or 2) the comparison was not made. In this case, they want you to assume (1). That assumption may have been stated in the passage.
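The multiple-comparisons idea is easy to see in code. Here's a minimal sketch using made-up CD69-style numbers for three hypothetical NK-to-Eos ratios (none of these values come from the actual figure); each pair of conditions gets its own test and its own p-value, just as each horizontal bar on the graph marks one comparison:

```python
from itertools import combinations
from scipy.stats import ttest_ind

# Hypothetical measurements for three NK-to-Eos ratios
# (made-up numbers, purely to illustrate multiple comparisons)
data = {
    "0:1": [10.1, 9.8, 10.3, 9.9, 10.0, 10.2],
    "1:1": [10.4, 10.0, 10.6, 9.9, 10.3, 10.1],
    "5:1": [13.8, 14.2, 14.0, 13.9, 14.3, 14.1],
}

# One t-test per pair of conditions -> one p-value per comparison,
# like one horizontal bar per comparison on the figure
pvals = {}
for a, b in combinations(data, 2):
    _, p = ttest_ind(data[a], data[b])
    pvals[(a, b)] = p
    print(f"{a} vs {b}: p = {p:.4f}")
```

With these made-up numbers, the 0:1 vs 1:1 comparison comes out non-significant while the comparisons against 5:1 do not, mirroring how some condition pairs on the figure get a bar and others don't.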
 

Ohhhhh. Thank you so much! That clears it up. I'm glad I can now understand this type of graph. I'm not sure if you know, but would this type of graph and the traditional bar graph with vertical error bars be the two main types they would present? Are there other ways error bars are presented?
 
In general, if the error bars do not overlap with each other, then there is statistical significance between the two variables.


And from this statement you made, you were referring to traditional vertical error bars, right? By not overlapping, you mean something like this?

[Attachment: graph1.gif]


Since these don't overlap, they are statistically different? And if they overlap partially, does that mean they are less statistically significant than if they didn't overlap at all, but more statistically significant than if they overlapped completely? In other words, more overlap = less statistical significance?
 
Ohhhhh. Thank you so much! That clears it up. I'm glad I can now understand this type of graph. I'm not sure if you know, but would this type of graph, and the traditional bar graph with vertical error bars be the two main types they would present? Are there other types of error bar presentations?

These are the two types of error bars that I am most familiar with. The vertical form is more common and that's what will likely occur on the MCAT with the highest frequency. That and p-values.
 
Since these don't overlap, they are statistically different? If they overlap partially, does that just mean that, even though they are less statistically significant than if they weren't overlapping at all, they are still more statistically significant than if they were completely overlapping? So more overlap===less statistical significance

Usually, error bars are drawn as 95% confidence intervals. If the bars do not overlap, then the difference is statistically significant. If they do overlap, treat the measurements as not significantly different (strictly speaking, overlapping 95% confidence intervals can still correspond to p < 0.05, but non-overlap is the standard rule of thumb). There's no gradient of statistical significance - a result is either statistically significant or it isn't. There are, however, different standards for what counts as statistical significance. Most of the scientific community uses 95% confidence intervals, or p < 0.05, to denote it.
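The non-overlap rule can be written out explicitly. A minimal sketch with made-up measurements, assuming the error bars are 95% confidence intervals computed as mean ± 1.96 × SEM:

```python
import math
from statistics import mean, stdev

def ci95(xs):
    """95% confidence interval: mean +/- 1.96 * standard error of the mean."""
    m = mean(xs)
    half = 1.96 * stdev(xs) / math.sqrt(len(xs))
    return (m - half, m + half)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Made-up measurements for two groups
group_1 = [10.0, 10.2, 9.8, 10.1, 9.9]
group_2 = [12.0, 12.3, 11.8, 12.1, 11.9]

ci1, ci2 = ci95(group_1), ci95(group_2)
# Non-overlapping 95% CIs imply a significant difference; overlap is the
# usual rule of thumb for "not significant" (though strictly, overlap
# alone doesn't guarantee p > 0.05).
significant = not intervals_overlap(ci1, ci2)
print(ci1, ci2, significant)
```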
 
Thank you for the help, I appreciate it
 
@aldol16

If you don't mind, I have another question about error bars. When the error bars are closer to the x axis, does that mean that the data for that point is less statistically significant? I'm reviewing EK Exam 1, and that seems to be their reasoning on one of their questions.
 

Attachments

  • snip 1.PNG
No. Their answer is still correct - their question is just worded poorly and can be misleading. Statistical significance measures whether an effect is due to completely random fluctuations in data. In other words, say you're measuring the chance of skin cancer after being exposed to mutagen X. You measure incidence of skin cancer in two groups of 100 people each - one group that has been exposed to mutagen X and one that has not. Say the incidence of skin cancer is 25 in the group exposed to mutagen X and the incidence is 10 in the other, control group. Statistical significance measures the probability that you would get that difference in measurements simply due to random fluctuations in data. For instance, if you were to measure the incidence of cancer in 100 such control groups, what is the chance that you would get a measurement of 25? If the probability of your measurement giving you that number due to complete chance is less than 5%, we take it to be statistically significant.

As you can see, this is a binary test. Is your effect due to chance or is it not? There are no degrees of significance. Let's say you do the test and see that 25 is significantly different from 10 with p < 0.05. Now you are informed of mutagen Y by another researcher, and you test it in a similar way. A group exposed to mutagen Y has a skin cancer incidence of 50, and you also find that the chance of this measurement arising by pure chance is less than 5%. In other words, this 50 is statistically distinguishable from 10. Does this mean that this measurement is more statistically significant than the one above? No. They're both statistically significant with p < 0.05. Even if p < 0.01 here, they're both still statistically significant because the scientific bar is 0.05. Scientists don't say that one effect is more statistically significant than another - just that each is statistically significant (p < 0.05).

Now, this doesn't mean that mutagens X and Y can't be clinically different. Mutagen Y could be a stronger determinant of skin cancer - but that's not the question we asked.
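The mutagen X numbers above (25/100 vs 10/100) can be run through a standard two-proportion z-test. A minimal sketch using only the standard library - this uses the normal approximation, which is one common choice of test, not the only valid one:

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # P(|Z| >= z) for a standard normal, via the complementary error function
    return math.erfc(z / math.sqrt(2))

# 25/100 incidence with mutagen X vs 10/100 in the control group
p_x = two_proportion_p(25, 100, 10, 100)
print(f"p = {p_x:.4f}")  # well below 0.05, so the difference is significant
```

The p-value here clears the 0.05 bar, so the effect is called statistically significant; whether it also cleared 0.01 wouldn't make it "more" significant by the usual convention.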
 
So I'm a little confused, because there was another question I got wrong because I assumed that, since the error bars did not overlap, there was statistical significance (choice A). However, in their answer explanation, they said that the two bars (the 2nd and 3rd bar) were not statistically different (even though the error bars do not appear to overlap, hence I thought they were statistically different).
 

Attachments

  • snip 2.PNG

I think you're confused about how error bars are typically portrayed in science. Error bars go both ways. Authors often draw them in only one direction, with the implicit understanding that they extend equally in the other direction - a mirror image of the bar shown. So see that really big error bar for condition 3 in your attachment? It actually extends the same length downward and therefore overlaps with condition 2's error bar.
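That mirroring is easy to check numerically. A minimal sketch with made-up means and error-bar half-widths, where the bar for "condition 3" is drawn upward only but really implies a symmetric interval:

```python
def full_interval(mean, half_width):
    """An error bar drawn one way still implies a symmetric interval."""
    return (mean - half_width, mean + half_width)

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Made-up values: condition 2 has a small error bar, condition 3 a large one
cond2 = full_interval(5.0, 0.5)        # (4.5, 5.5)
cond3_drawn = (7.0, 10.0)              # only the upward half is plotted
cond3_full = full_interval(7.0, 3.0)   # (4.0, 10.0) after mirroring

print(intervals_overlap(cond2, cond3_drawn))  # False - looks significant
print(intervals_overlap(cond2, cond3_full))   # True - actually overlaps
```

Judging only by the drawn halves, the bars look separated; once condition 3's bar is mirrored downward, the intervals overlap, which is exactly the trap in the question.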
 


Ah, thanks for the clarification. That will probably be good to know for the exam haha
 