Standard error

Ollie123

Full Member
10+ Year Member
Feb 19, 2007
5,225
2,556
  1. Psychologist
    This is one of those things I feel like I should know, but... apparently don't.

    When you are putting error bars on a within-subjects variable, do you use the standard error of the mean? If so, why? Do error bars tell you anything for within-subjects comparisons?

    I know this is how it's commonly done for between-subjects designs. The problem is that those same error bars make within-subjects results look massively insignificant. I think repeated measures uses the standard error of the difference scores (based off my limited understanding of the math ;) ), not of the overall mean.
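    To make that concrete, here's a rough sketch with made-up numbers (Python, purely illustrative) of how far apart the two standard errors can be when subjects move together across conditions:

```python
import numpy as np

# Hypothetical paired data: the same 8 subjects measured in two conditions.
pre  = np.array([10.0, 12.0,  9.0, 14.0, 11.0, 13.0, 10.0, 12.0])
post = np.array([11.5, 13.0, 10.5, 15.0, 12.5, 14.0, 11.5, 13.5])
n = len(pre)

# Standard error of each condition mean: what between-subjects-style bars show.
sem_pre  = pre.std(ddof=1) / np.sqrt(n)    # roughly 0.60
sem_post = post.std(ddof=1) / np.sqrt(n)   # roughly 0.53

# Standard error of the mean difference: what the paired comparison actually uses.
diff = post - pre
sem_diff = diff.std(ddof=1) / np.sqrt(n)   # roughly 0.09, far smaller

print(sem_pre, sem_post, sem_diff)
```

    The per-condition bars overlap a lot even though the differences are almost perfectly consistent, which is exactly what makes the results look "insignificant" on the chart.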
     

    Ollie123

    Full Member
    10+ Year Member
    Feb 19, 2007
    5,225
    2,556
    1. Psychologist
      See, I tried to do this and was told to stop showing confidence intervals and just present the standard error ;) Showing confidence intervals makes sense to me... that's useful information. Standard error presented on its own doesn't seem to mean much.

      Even so, how would one show the 95% CI for a repeated measure? It's based on difference scores, so wouldn't anything with more than 2 levels need multiple error bars for each mean value, one for each comparison? Unless I'm totally off-base in my understanding of this (which is entirely possible :) ).
       

      Therapist4Chnge

      Neuropsych Ninja
      Moderator Emeritus
      Verified Expert
      15+ Year Member
      Oct 7, 2006
      21,960
      3,310
      The Beach
      1. Psychologist
        I've been taught to always show the 95% CI of the mean in charts, rather than standard error or SD.
        That's what I learned, as the CI gives more context. The SD could be useful, but you'd need more room to add everything, and some people aren't wild about having junk all over the place (myself included).
         

        JockNerd

        Full Member
        10+ Year Member
        5+ Year Member
        Mar 28, 2007
        1,810
        9
        1. Psychology Student
          See, I tried to do this and was told to stop showing confidence intervals and just present the standard error ;) Showing confidence intervals makes sense to me... that's useful information. Standard error presented on its own doesn't seem to mean much.

          Even so, how would one show the 95% CI for a repeated measure? It's based on difference scores, so wouldn't anything with more than 2 levels need multiple error bars for each mean value, one for each comparison? Unless I'm totally off-base in my understanding of this (which is entirely possible :) ).

          I almost never do repeated measures designs, so this is mostly academic to me, but in my mind the charts would make sense as either a bar chart or a line graph with 95% CI error bars.

          You can just do what I do when confused about presentation, and cheat by looking up a study in any area that uses a similar analysis method.
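          One trick I've run across for the "more than 2 levels" problem (I believe it comes out of the within-subject CI literature, Cousineau/Morey-style normalization, so treat this as a sketch rather than gospel) is to take out each subject's overall level first, which leaves you with a single bar per condition mean:

```python
import numpy as np

# Made-up data: rows = 5 subjects, columns = 3 levels of a within-subject factor.
data = np.array([
    [4.0, 5.0, 7.0],
    [6.0, 7.0, 8.5],
    [5.0, 6.5, 8.0],
    [7.0, 7.5, 9.5],
    [5.5, 6.0, 8.0],
])
n, k = data.shape

# Remove between-subject variability: subtract each subject's own mean,
# then add back the grand mean so the condition means stay the same.
normalized = data - data.mean(axis=1, keepdims=True) + data.mean()

# One standard error per condition mean, computed on the normalized scores;
# multiply by the appropriate t critical value if you want 95% CIs instead.
se = normalized.std(axis=0, ddof=1) / np.sqrt(n)

# Optional correction for the number of conditions (Morey-style).
se_corrected = se * np.sqrt(k / (k - 1))
print(se_corrected)
```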
           

          Ollie123

          Full Member
          10+ Year Member
          Feb 19, 2007
          5,225
          2,556
          1. Psychologist
            Standard practice seems to be presenting standard error, not confidence intervals, for this type of design - that's what I see in the majority of papers in the better journals.

            I just don't like that standard, since it seems pointless and potentially misleading to me ;)
             

            JockNerd

            Full Member
            10+ Year Member
            5+ Year Member
            Mar 28, 2007
            1,810
            9
            1. Psychology Student
              Standard practice seems to be presenting standard error, not confidence intervals, for this type of design - that's what I see in the majority of papers in the better journals.

              I just don't like that standard, since it seems pointless and potentially misleading to me ;)

              I'm 99% sure that those papers have it wrong and you should be presenting CIs.
               

              thewesternsky

              Full Member
              10+ Year Member
              Jan 30, 2007
              785
              77
              1. Post Doc
                I'm 99% sure that those papers have it wrong and you should be presenting CIs.

                Agreed. You should be presenting CIs; they can actually be visually meaningful. However, many (perhaps even most) studies still use SEs. I'm guilty of this myself. :S

                I'm changing my methods for future papers, though. (That, and using tables whenever possible. A decent table with difference scores is still my favourite way to present within-subject data.)

                If you're looking to use CIs on graphs, Masson and Loftus have a couple good papers on the subject that I've been using as resources/primers.

                http://web.uvic.ca/psyc/masson/LM.pdf
                http://web.uvic.ca/psyc/masson/ML.pdf
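                If it helps, the basic recipe in the Loftus & Masson paper (as I understand it) is to build one interval from the subject-by-condition error term of the repeated-measures ANOVA, so every condition mean gets the same half-width. A rough sketch with made-up numbers:

```python
import numpy as np
from scipy import stats

# Made-up subjects x conditions matrix, just to show the arithmetic.
data = np.array([
    [4.0, 5.0, 7.0],
    [6.0, 7.0, 8.5],
    [5.0, 6.5, 8.0],
    [7.0, 7.5, 9.5],
    [5.5, 6.0, 8.0],
])
n, k = data.shape

grand = data.mean()
subj_means = data.mean(axis=1, keepdims=True)
cond_means = data.mean(axis=0)

# Subject-by-condition interaction (residual) mean square from the RM ANOVA.
resid = data - subj_means - cond_means[np.newaxis, :] + grand
ms_error = (resid ** 2).sum() / ((n - 1) * (k - 1))

# Same 95% half-width around every condition mean (Loftus & Masson style).
half_width = stats.t.ppf(0.975, (n - 1) * (k - 1)) * np.sqrt(ms_error / n)
print(cond_means, half_width)
```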
                 

                Ollie123

                Full Member
                10+ Year Member
                Feb 19, 2007
                5,225
                2,556
                1. Psychologist
                  Well, this is for a talk, not a paper. While I agree tables are great for articles, tables in talks are the most painful thing ever ;)

                  I'm glad I'm not alone in thinking presenting standard errors is... weird. It just plain didn't make any sense to me, even though everyone was telling me to do it that way. Quite frustrating.

                  Thanks for the links - I'll have to go digging through those and see what I can unearth. I'm still not sure how to go about presenting the CIs, since it seems like there is more than one confidence interval per data point for multi-level factors, but maybe the papers explain it. Or maybe I'm just confused about how the intervals are calculated. I'm not sure I'll have time to change it at this point, but hopefully I can figure something out. If not, at least I won't be the only one who does this!

                  Thanks, everyone.
                   

                  thewesternsky

                  Full Member
                  10+ Year Member
                  Jan 30, 2007
                  785
                  77
                  1. Post Doc
                    General practice for publications comparing two outcomes, say in an experimental paper, is to graph error bars (standard error). This is supposed to be an estimate of the standard deviation of the sample mean around the population mean... these ARE essentially confidence intervals. This is just a confidence interval that is derived from an estimate of an unknown quantity.

                    This is actually a pretty good explanation:

                    http://en.wikipedia.org/wiki/Standard_error_(statistics)

                    Good comment, and you're right, this is the general practice.

                    One issue, as I understand it, though, is that the error bars aren't all that visually meaningful. (I think this is what Ollie was getting at in his original post.) For example, if you make a graph and include standard error bars for two means in a within-subjects design, you cannot tell whether the difference is significant by looking at whether the error bars overlap.

                    So, standard errors are the usual way of graphing these things... I'm just not sure that it's the *best* way. :)
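                    (Quick made-up simulation of what I mean, just to illustrate: the per-condition SE bars can overlap almost completely while the paired test is wildly significant.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated within-subject data: big differences between subjects,
# plus a small but very consistent condition effect (all numbers made up).
baseline = rng.normal(50, 10, size=20)
cond_a = baseline + rng.normal(0, 1, size=20)
cond_b = baseline + 2 + rng.normal(0, 1, size=20)

# Per-condition standard errors: both around 2, so +/- 1 SEM bars overlap heavily.
sem_a = cond_a.std(ddof=1) / np.sqrt(20)
sem_b = cond_b.std(ddof=1) / np.sqrt(20)

# The paired test works on the difference scores and comes out highly significant.
t, p = stats.ttest_rel(cond_a, cond_b)
print(sem_a, sem_b, p)
```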
                     

                    Ollie123

                    Full Member
                    10+ Year Member
                    Feb 19, 2007
                    5,225
                    2,556
                    1. Psychologist
                      Yeah, sorry for the confusion. I understand what standard error is (or at least I think so, anyway :) ). I definitely realize that standard error HAS meaning; I'm just referring to the visual information it provides in a chart. Confidence intervals can tell you at a glance whether the differences were significant, and give you a sense of how significant. Confidence intervals for repeated measures use a different standard error in the calculation of the interval (I think).
                       

                      numbereight

                      Full Member
                      10+ Year Member
                      Jan 15, 2009
                      104
                      0
                      1. Psychology Student
                        Thanks to everyone who posted info! I just did a poster figure of a bar graph with SEMs, and I'm going to go back and calculate the CIs to see how they differ. Stats is fun! :D


                        Yeah, sorry for the confusion. I understand what standard error is (or at least I think so, anyway :) ). I definitely realize that standard error HAS meaning; I'm just referring to the visual information it provides in a chart. Confidence intervals can tell you at a glance whether the differences were significant, and give you a sense of how significant. Confidence intervals for repeated measures use a different standard error in the calculation of the interval (I think).
                         