Sounds an awful lot like you're trying to justify half-assing a research question into a poster, instead of doing the work to produce a more fleshed-out manuscript, so you can pad the CV.
Then, surprise, you go for the appeal to authority ("I've done more of this than you have, therefore I'm right"), followed by ad hominem... That isn't a real argument, and it's funny how many physicians duck into the "well, I've published X times, so yeah, I'm great at research" hole. My problem isn't that these investigations occur; it's that people don't employ the appropriate experts to make sure things are done well, and then they diffuse the responsibility for getting the right people involved. For whatever reason, statistics is a discipline that anyone and everyone feels qualified for because they have an Excel sheet or SPSS...
I'm coming back to the primary issue: most physicians (whether MD, DO, MD-PhD, MD-MPH, whatever it may be) do not understand statistics well and often use them incorrectly. The worst part is that, on average, they don't admit how little they know, and have hence contributed a huge amount to the replication and reproducibility crisis. I don't think your prior experience making posters or publishing manuscripts means you're well versed in the appropriate use and interpretation of statistics (you might be, I don't know you, but you might not be). Often those reviewing your work have the same background you do, which may be clinically outstanding but sorely lacking on the statistical side. This circles back to the issue: without an adequate understanding of statistics, how can you be sure you're judging correctly when you need a statistician? Statistics and medicine are alike in that we don't know what we don't know, and the problems like to hide in that region.
Again, two issues: you're clearly trying to justify padding a CV with a "quick turnaround" rather than a meaningful paper, and you may be overconfident in your statistical abilities because "you've done this before." If you just admitted this was CV padding rather than quality added to the field (i.e., doing the full paper), I wouldn't be calling it like it is...
Again, the danger in both statistics and medicine lies in what we don't know; physicians are almost never qualified to speak adequately on the former, yet frequently do so as if they were. A half-answered question isn't the issue either; if you read my post, I said jd71 didn't even flesh out a research question, so it's not a research question that's halfway answered, it's an ill-defined question that doesn't seem worth answering in its current form. And I'd argue the best thing is to sink your time into one valuable question and investigate it thoroughly, rather than a bunch of them half-assedly that you never go back to finish because you wanted to pad your CV. Again, if you reread my post, I haven't said retrospective studies are invalid. I've suggested the OP do the prudent thing and involve someone who knows what to do with the other aspect of the project, because probably over 90% of physicians don't know squat about the stats despite their own publication records (most journal reviewers aren't statistically equipped).
Sadly, most of us won't revolutionize a field, but that doesn't preclude us from doing research the right way and involving experts outside our particular niche. You can still do good, non-groundbreaking research. But I'd hardly call a few superficial posters "productive"; everyone knows how little time those take.
It's bizarre how readily doctors will call pharmacy, cards, or renal, yet almost never dream of calling a statistician, simply because "Bob is good with SPSS."
A brief summary, since there was confusion about whether I think these studies are bad:
1) Retrospective studies aren't invalid, but if you're lazy, they're likely to be
2) Shirking the responsibility to consult experts on research you intend to disseminate isn't congruent with good science
3) Publishing, even in good journals, usually says nothing about the statistical quality or appropriateness of your work, because in peer review the blind often lead the blind (statistically, not clinically); only rarely does a qualified statistician review it
4) Your work may be clinically brilliant, but that's a separate element from the statistics, and both are critical to a good, sound paper