Fixed vs. Random Effects in Meta-Analysis


cmuhooligan

Hi all,

I know the differences and the arguments for and against both of these methods, but I do have a question for y'all. I'm conducting a small meta-analysis (depending on the DV, I have anywhere from 4 to 12 effect sizes). I ran the data with both methods (and a mixed-effects model, too), and there were some important differences. I got interesting results in both, but the fixed-effect model yielded more significant findings. I realize this could be due to Type I error, but it could also reflect the greater power of the fixed-effect model (especially with small samples).

So, given that random- (or mixed-) effects models are generally preferred in the literature, I'm tempted to simply report the random-effects results. However, I was thinking about reporting both and noting the fixed-effect results as "supplemental." What are your thoughts? I'm torn: I don't want to leave out what could be legitimate differences, but I know they could be spurious as well. I also don't think it's very traditional to report results from two different models.
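For concreteness, here is a minimal sketch of what the two estimators are actually doing. The effect sizes and variances below are made-up placeholders, not anyone's real data, and the random-effects model uses the classic DerSimonian-Laird estimator of tau-squared (other estimators such as REML would give slightly different numbers):

```python
import numpy as np

# Hypothetical study-level effect sizes and their sampling variances
# (placeholder values for illustration only).
effects = np.array([0.30, 0.15, 0.45, 0.10, 0.25, 0.50])
variances = np.array([0.04, 0.09, 0.02, 0.06, 0.05, 0.03])

# --- Fixed-effect model: inverse-variance weights ---
w_fe = 1.0 / variances
mu_fe = np.sum(w_fe * effects) / np.sum(w_fe)
se_fe = np.sqrt(1.0 / np.sum(w_fe))

# --- Heterogeneity: Cochran's Q and DerSimonian-Laird tau^2 ---
k = len(effects)
q = np.sum(w_fe * (effects - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - (k - 1)) / c)

# --- Random-effects model: weights absorb the between-study variance ---
w_re = 1.0 / (variances + tau2)
mu_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

print(f"Fixed effect:   {mu_fe:.3f} (SE {se_fe:.3f})")
print(f"Random effects: {mu_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.3f}, Q = {q:.2f}")
```

Whenever tau-squared is greater than zero, the random-effects standard error is larger than the fixed-effect one, which is exactly why the fixed model hands you more significant results.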

If the between-study variance component is significant under the fixed-effect model (i.e., the heterogeneity test rejects homogeneity), you might as well just report the random-effects results. If not, I would report both.
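In the notation of the sketch above, "significant" here usually means Cochran's Q exceeding its chi-square critical value on k - 1 degrees of freedom. A stand-alone check might look like this (the q and k values are hypothetical placeholders standing in for the ones computed earlier):

```python
from scipy.stats import chi2

# Hypothetical Q statistic and study count from the sketch above.
q, k = 9.87, 6

# p-value for Cochran's Q under H0: tau^2 = 0 (homogeneity), df = k - 1.
p_q = chi2.sf(q, df=k - 1)
print(f"Q = {q:.2f}, df = {k - 1}, p = {p_q:.3f}")
```

One caveat: with only 4 to 12 studies, the Q test has low power, so a non-significant Q is weak evidence of homogeneity.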
 