Standard error


Ollie123
This is one of those things I feel like I should know but... apparently don't.

When you are putting error bars on a within-subjects variable, do you use the standard error of the mean? If so, why? Do error bars tell you anything for within-subjects comparisons?

I know this is how it's commonly done for between-subjects designs. The problem is that those same error bars in a within-subjects design make the results look massively insignificant. I think repeated measures uses the standard error of the difference scores (based on my limited understanding of the math 😉), not the standard error of the overall means.
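
To make that concrete, here's a minimal sketch (Python/NumPy, with made-up pre/post data for 6 subjects) contrasting the per-condition SEM with the standard error of the difference scores that a paired test actually uses:

```python
import numpy as np

# Made-up pre/post scores for 6 subjects (illustration only).
pre  = np.array([10.0, 12.0,  9.0, 14.0, 11.0, 13.0])
post = np.array([12.0, 13.5, 10.0, 16.0, 13.0, 14.5])
n = len(pre)

# "Between-subjects style" error bars: SEM of each condition mean.
sem_pre  = pre.std(ddof=1) / np.sqrt(n)
sem_post = post.std(ddof=1) / np.sqrt(n)

# What a paired test actually uses: SE of the difference scores.
diffs = post - pre
se_diff = diffs.std(ddof=1) / np.sqrt(n)

print(sem_pre, sem_post)  # ~0.76 and ~0.84: include subject-to-subject spread
print(se_diff)            # ~0.17: stable subject differences cancel out
```

Big condition-level bars, tiny difference-score SE: the same data can look "massively insignificant" or clearly significant depending on which one you draw.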
 
I've been taught to always show the 95% CI of the mean in charts, rather than standard error or SD.
 
See, I tried to do this and was told to stop showing confidence intervals and just present the standard error 😉 Showing confidence intervals makes sense to me... that's useful information. Standard error presented on its own doesn't seem to mean much.

Even so, how would one show the 95% CI for a repeated measure? It's based on difference scores, so wouldn't anything with more than 2 levels need multiple error bars for each mean value, one for each comparison? Unless I'm totally off base in my understanding of this (which is entirely possible 🙂).
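
If it helps to see the "one interval per comparison" idea in code, here's a rough sketch for a hypothetical 3-level within-subject factor (Python/SciPy, invented numbers): a 95% CI on each pairwise difference, so three intervals rather than one per mean:

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Invented data: rows = subjects, columns = 3 within-subject conditions.
data = np.array([
    [10, 12, 13],
    [ 9, 11, 14],
    [11, 12, 12],
    [12, 15, 16],
    [ 8, 10, 11],
], dtype=float)
n, k = data.shape
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95%

for i, j in combinations(range(k), 2):
    d = data[:, j] - data[:, i]                 # difference scores
    half = t_crit * d.std(ddof=1) / np.sqrt(n)  # CI half-width
    print(f"cond {j} - cond {i}: {d.mean():+.2f} +/- {half:.2f}")
```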
 
That's what I learned as well; the CI gives more context. The SD could be useful, but you'd need more room to add everything, and some people aren't wild about having junk all over the place (myself included).
 

I almost never do repeated measures designs, so this is mostly academic to me, but in my mind the chart makes sense as either a bar chart or a line graph, with 95% CI error bars.

You can just do what I do when confused about presentation, and cheat by looking up a study in any area that uses a similar analysis method.
 
Standard practice seems to be presenting standard error, not confidence intervals for this type of design - that's what I see in the majority of papers in the better journals.

I just don't like that standard, since it seems pointless and potentially misleading to me😉
 

I'm 99% sure that those papers have it wrong and you should be presenting CIs.
 

Agreed. You should be presenting CIs; they can actually be visually meaningful. However, many (perhaps even most) studies still use SEs. I'm guilty of this myself. :S

I'm changing my methods for future papers, though. (That, and using tables whenever possible. A decent table with difference scores is still my favourite way to present within-subject data.)

If you're looking to use CIs on graphs, Masson and Loftus have a couple good papers on the subject that I've been using as resources/primers.

http://web.uvic.ca/psyc/masson/LM.pdf
http://web.uvic.ca/psyc/masson/ML.pdf
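
For anyone skimming those papers: the basic trick is to strip out the stable subject-to-subject differences before computing the interval. Here's a rough sketch of one common recipe (subject-mean centering in the style of Cousineau, with Morey's k/(k-1) correction, which approximates the Loftus-Masson intervals; the data are invented):

```python
import numpy as np
from scipy import stats

# Invented data: rows = subjects, columns = conditions.
data = np.array([
    [10, 12, 13],
    [ 9, 11, 14],
    [11, 12, 12],
    [12, 15, 16],
    [ 8, 10, 11],
], dtype=float)
n, k = data.shape

# Center each subject on the grand mean to remove stable subject effects.
normed = data - data.mean(axis=1, keepdims=True) + data.mean()

# Per-condition SEM of the normalized scores, with the k/(k-1) correction.
sem = normed.std(axis=0, ddof=1) / np.sqrt(n) * np.sqrt(k / (k - 1))
half = stats.t.ppf(0.975, df=n - 1) * sem

for m, h in zip(data.mean(axis=0), half):
    print(f"{m:.2f} +/- {h:.2f}")  # one within-subject 95% CI per mean
```

The nice property is that you get one bar per mean again, but the bars now reflect only the within-subject variability.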
 
Well, this is for a talk, not a paper. While I agree tables are great for articles, tables in talks are the most painful thing ever 😉

I'm glad I'm not alone in thinking presenting standard errors is...weird. It just plain didn't make any sense to me, even though everyone was telling me to do it that way. Quite frustrating.

Thanks for the links - I'll have to go digging through those and see what I can unearth. I'm still not sure how to go about presenting the CIs, since it seems like there is more than one confidence interval per data point for multi-level factors, but maybe the papers explain it. Or maybe I'm just confused about how the intervals are calculated. I'm not sure I'll have time to change it at this point, but hopefully I can figure something out. If not, at least I won't be the only one who does this!

Thanks everyone.
 
General practice for publications comparing two outcomes, say in an experimental paper, is to graph error bars (standard error). The standard error is an estimate of the standard deviation of the sample mean around the population mean... so these bars ARE essentially confidence intervals, just ones derived from an estimate of an unknown quantity.

this is actually a pretty good explanation.

http://en.wikipedia.org/wiki/Standard_error_(statistics)
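
In code, the definitions from that page come down to a couple of lines (toy sample, Python/SciPy):

```python
import numpy as np
from scipy import stats

x = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4])  # toy sample
n = len(x)

se = x.std(ddof=1) / np.sqrt(n)        # estimated SD of the sample mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # ~2.36 here, not 1.96 (small n)
ci = (x.mean() - t_crit * se, x.mean() + t_crit * se)

print(se)   # the usual "error bar" half-width
print(ci)   # the 95% CI built from that same standard error
```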

Good comment-- and you're right, this is the general practice.

One issue, as I understand it, though, is that the error bars aren't all that visually meaningful. (I think this is what Ollie was getting at in his original post.) For example, if you make a graph and include standard error bars for two means in a within-subjects design, you cannot tell whether the difference is significant by looking at whether the error bars overlap; those bars still include the between-subject variability that the paired comparison removes.

So, standard errors are the usual way of graphing these things... I'm just not sure that it's the *best* way. 🙂
 
Yeah, sorry for the confusion. I understand what standard error is (or at least I think so, anyway 🙂). I definitely realize that standard error HAS meaning; I'm just referring to the visual information it provides in a chart. Confidence intervals can tell you at a glance whether the differences were significant, and give you a sense of how significant. Confidence intervals for repeated measures use a different standard error in the calculation of the confidence interval (I think).
 
Thanks to everyone who posted info! I just did a poster figure of a bar graph with SEMs, and I'm going to go back and calculate the CIs to see how they differ. Stats is fun! 😀
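
For what it's worth, that conversion is basically a one-liner: the 95% CI half-width is the t critical value times the SEM (hypothetical numbers below):

```python
from scipy import stats

sem, n = 0.76, 6  # made-up SEM and sample size from the figure
half_ci = stats.t.ppf(0.975, df=n - 1) * sem
print(half_ci)    # ~1.95 here; the multiplier shrinks toward 1.96 as n grows
```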


 