So...a rather complex stats question that has me (and everyone else I've run it by) a bit stumped, and I'm hoping someone can help. It has to do with how variance is partitioned out for time-varying covariates included in random effects models within SPSS.
First, the research question. People got drug or placebo. We used a paradigm where people rated two different types of pictures. They did this at 3 time points: one at baseline and two follow-up sessions. Our interest is in the 3-way interaction: we want to see if the picture ratings converged more over time for people on the drug versus those on placebo. Hopefully that makes sense so far? At this point, let's only consider the first and last time points to keep things a little simpler.
Okay, what set this off was two people (myself and a faculty member) getting VERY different results running the same analysis with two different techniques...repeated measures ANOVA and ANCOVA. In theory, ANCOVA should be more powerful. It was...many times over. Our p value of interest went from 0.48 (repeated measures) to 0.02 (ANCOVA). While ANCOVA should be more powerful, no boost in power should be THAT sizable, which made me wonder if there was a fundamental difference in the questions the two analyses were asking.
Now, part of the problem is that there was a time-varying covariate in the model (ratings of one type of picture). In other words, the ANCOVA was run as follows: Drug as the between-subjects factor, ratings for BOTH picture types at Time 1 as covariates, AND the neutral picture rating at Time 3 as a covariate, with the experimental picture rating at Time 3 as the DV. Given the inclusion of the neutral picture rating at Time 3 as a covariate, I was VERY doubtful ANCOVA was the appropriate choice, since including covariates measured after the experimental manipulation is generally not appropriate (though there are exceptions...I'm not certain if this is one). Regardless, I had far more faith in the repeated measures analysis. That is where SPSS MIXED came in - an alternative that operates similarly to ANCOVA, but should properly account for time-varying covariates. The data were restructured for MIXED, with time points stacked, and the model was run with all relevant variables and interactions included (time, drug, neutral picture rating + all their interactions), with the experimental picture rating as the DV. Rough syntax for all three analyses is sketched below.
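In case it helps to see the models written out, here is roughly what each analysis looked like in syntax. All variable names (exp_t1, neut_t3, subj_id, etc.) are placeholders I'm using for this post, and the specs are reconstructions from memory, so treat them as sketches of the designs rather than our exact jobs.

    * Repeated measures ANOVA: drug between, time and picture type within
    * (assuming pictype 1 = experimental, 2 = neutral).
    GLM exp_t1 neut_t1 exp_t3 neut_t3 BY drug
      /WSFACTOR = time 2 Polynomial pictype 2 Polynomial
      /WSDESIGN = time pictype time*pictype
      /DESIGN = drug.

    * ANCOVA: Time 3 experimental rating as DV, both Time 1 ratings
    * plus the Time 3 neutral rating as covariates.
    UNIANOVA exp_t3 BY drug WITH exp_t1 neut_t1 neut_t3
      /DESIGN = drug exp_t1 neut_t1 neut_t3
      /PRINT = PARAMETER.

    * Stack Time 1 and Time 3 into long format (time = 1, 2).
    VARSTOCASES
      /MAKE exp_rating FROM exp_t1 exp_t3
      /MAKE neut_rating FROM neut_t1 neut_t3
      /INDEX = time
      /KEEP = subj_id drug.

    * Mixed model: full factorial of time, drug, and the time-varying
    * neutral rating, with a repeated effect over time within subject.
    MIXED exp_rating BY drug time WITH neut_rating
      /FIXED = drug time neut_rating drug*time drug*neut_rating
               time*neut_rating drug*time*neut_rating
      /REPEATED = time | SUBJECT(subj_id) COVTYPE(UN)
      /PRINT = SOLUTION.

The full factorial with neut_rating is the piece I suspect matters, since it lets the covariate slope differ by drug and by time.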
Oddly enough, the analysis using SPSS MIXED produced results similar to ANCOVA...not repeated measures. I was suspicious, so I mucked around with the data to try to isolate the cause. Here is what I found. When you add a constant to BOTH picture types at Time 3 for ALL participants, you are changing the slopes of ALL lines across time by an equal amount. With repeated measures...this ONLY alters the main effect of time. That seems logical, and makes sense to me. Now the trick...for SPSS MIXED, it alters the main effect of time, but it ALSO alters the drug x time interaction...substantially. I've double- and triple-checked everything and have NO explanation for why that would be occurring. I've made sure the datasets are identical and the same cases are being included. The same variables have been altered in the same way. I believe the results have something to do with the manner in which the neutral picture ratings are partitioned out of the variance over time within the mixed model, because if I run the same analysis on the DIFFERENCE score for the two picture types, THEN we see similar results between mixed models and repeated measures. The exact manipulation is shown below for anyone who wants to try it.
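Reproducing the constant trick on the stacked data is just this (same placeholder names as above; 5 is an arbitrary constant, and time = 2 is the Time 3 session):

    * Add the same constant to BOTH Time 3 ratings for every participant,
    * then rerun the MIXED job above on the shifted data.
    * (For the wide-format GLM, add the same constant to exp_t3 and
    * neut_t3 instead.)
    DO IF (time = 2).
      COMPUTE exp_rating = exp_rating + 5.
      COMPUTE neut_rating = neut_rating + 5.
    END IF.
    EXECUTE.

    * Difference-score version: with this DV, MIXED and repeated
    * measures agree, and the added constant cancels out entirely.
    COMPUTE diff_rating = exp_rating - neut_rating.
    EXECUTE.

    MIXED diff_rating BY drug time
      /FIXED = drug time drug*time
      /REPEATED = time | SUBJECT(subj_id) COVTYPE(UN)
      /PRINT = SOLUTION.

My (possibly wrong) hunch is that the drug x neutral-rating interaction terms are where the constant gets absorbed - with those terms in the model, the drug x time coefficient is the drug x time effect at a neutral rating of 0, and adding a constant to the Time 3 covariate values moves Time 3 relative to that point.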
I'm trying to figure out the real reason for this (both from a mathematical and a conceptual standpoint), but haven't been able to sort it out yet, nor has anyone I've asked. What would cause the addition of a constant to alter the interaction? How are the effects of time-varying covariates removed in SPSS MIXED?
If anyone has any ideas it would be greatly appreciated.