I am kind of confused about the difference between the standard error of measurement (SEM) and a confidence interval. So I can orient myself: an SEM calculation tells us how sure we are that the true score (i.e., the score with no error variance) lies between two numbers. For example, if a person gets a 50, we take that as our best guess of the true score, and after performing the calculations we can say that we are 68% confident that the true score lies between 46 and 54, etc.
A confidence interval, by contrast, includes both true variance and error variance AND tells us how often we think the person will score in a certain range (i.e., at the 95% CI, we say that he will score between 44 and 55 ninety-five percent of the time).
Is this right?
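
To make my arithmetic concrete, here is a minimal sketch (Python) of how I've been computing these bands. The specific numbers are my own assumptions: an observed score of 50 and an SEM of 4, which is what my 46-54 example implies.

```python
# Band arithmetic: observed score +/- z * SEM.
# Assumed numbers: observed = 50, SEM = 4 (implied by my 46-54 example).
observed = 50
sem = 4

# Approximate two-tailed z multipliers for the usual confidence levels.
bands = {"68%": 1.00, "95%": 1.96}

for label, z in bands.items():
    lo, hi = observed - z * sem, observed + z * sem
    print(f"{label} band: {lo:.1f} to {hi:.1f}")

# Output:
# 68% band: 46.0 to 54.0
# 95% band: 42.2 to 57.8
```

(By that arithmetic the 95% band comes out to roughly 42-58 rather than the 44-55 I wrote above, which may be part of my confusion.)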