Poor match


Waldeinsamkeit · Full Member · 7+ Year Member · Joined Mar 15, 2014 · Messages: 137 · Reaction score: 110
Just looking for advice. I was really excited for Match, and I ended up matching very low (#8) on my rank list. Programs 1-6 on my list were all good, and I think I would've been happy at most of them. I don't think this is a bad program, but there isn't much research (an interest of mine), and I don't think it places well into fellowships. Even though it isn't a perfect match, I'm glad I included it, because at least I matched somewhere. The people are very nice, and the hours seem good too.
But what do I do now? Should I just forget about research and fellowships?
Any advice is appreciated.
Please don't be dicks.
 
Just make the best of it, because what else can you do? You might be able to transfer to another city, but that is never guaranteed. I don't see why you should forget about research, though. I'm sure you can find case reports or other pubs to publish while you're in residency.
 

You said you don't "think" it places well into fellowships, and that research is an interest but there isn't much of it. To be honest, you can excel at any program and match into a good fellowship even if it's not the best training program ever. Why don't you find out more details about fellowship matches, given that "thinking" is different from knowing? What specialty is this? What fellowships are you interested in? Each of these questions is important for going in with the right expectations.
 
It doesn't place well in fellowships. Hopefully that is more clear.
 
I matched into an awesome fellowship program coming from a community residency program with not a lot of research opportunities. It can be done, you just have to be more proactive in finding good opportunities.

I did have some attendings who did research at my residency, and really, any research will look good. Find out who does research in your program.

Alternatively, you can do research with outside programs while in residency. This is what I did (and a few other co-residents did). It can be done, you just have to look for it.

One idea would be to do away rotations for your electives if you want to try for LORs from more well-known people in your field. Some of my co-residents did this. My fellowship doesn't really require aways, and I was able to get awesome LORs from attendings in my residency program in the specialty I was applying to. They had connections to other fellowships, and even attendings outside my chosen specialty volunteered to call programs on my behalf.

Don’t give up, it can be done.
 

Work with the cards you were dealt
Network, discuss, and work hard with your fellow residents and attendings
 
I feel like I'm in a similar spot man. My top programs on my rank list were amazing places that I would have loved being at but I matched somewhere lower instead without all the prestige and fellowship clout.

Nothing we can do now except move forward, kick butt, and do our best. Maybe they didn't place well in fellowships in the past, but they also didn't have you. You're your own person and you know how hard you're willing to work to achieve the things that you want.

Let's get at it, homie. All the best.
 

Be an excellent resident. That will help you the most getting into a fellowship.

Do your best with the "research". You don't have much time in residency to do too much "research" anyway. Pick a topic every year. Get the IRB to allow you to do a chart review with deidentified data; that's a very easy thing to get past them. Collect data. Crunch stats. Submit an abstract for a poster to a meeting. Present. Add to CV.
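The "collect data, crunch stats" steps really can be this small. As a rough sketch (all group labels, record counts, and LOS values here are invented for illustration, not real chart data), summarizing a deidentified review per group might look like:

```python
from statistics import mean, median

# Hypothetical deidentified chart-review rows: (group, length_of_stay_days).
# In practice these would come from your IRB-approved chart pull.
records = [
    ("exposed", 5), ("exposed", 7), ("exposed", 6),
    ("control", 4), ("control", 5), ("control", 4),
]

def summarize(rows, group):
    """Basic descriptive stats for one group's length of stay."""
    los = [days for g, days in rows if g == group]
    return {"n": len(los), "mean": round(mean(los), 1), "median": median(los)}

for g in ("exposed", "control"):
    print(g, summarize(records, g))
```

From there the numbers drop straight into an abstract's results table; anything fancier than descriptives is where the debate below starts.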

Plan for a chief year.

Good luck.
 
Try to get a statistician involved if you want to publish good research. Physicians are notoriously bad at anything related to statistics, and usually even more so when they're more confident that they're good at understanding and using statistics. Don't contribute to the pile of clinically excellent publications that are nonetheless trash because the conclusions don't hold water due to poor statistical methodology.
 

Dude. I’m talking about doing some retrospective chart review for a poster. Lol. Calm the **** down.
 

 
When I opened up my match letter and realized I wouldn't be going home for another three years after working so hard the past four, I just stood up and walked out. I know how you feel, but in those three years I met my wife and my bff clone brosef @AlmostAnMD. In the end, though, it works out if you make the best of it. Don't give up on a fellowship. Work hard, make connections, be positive, and it can happen. Good luck.
 
Dude. I’m talking about doing some retrospective chart review for a poster. Lol. Calm the **** down.
But you want to publish, right? Retrospective doesn't excuse a PI from doing high-quality work. You're responsible for your work, and retrospective doesn't preclude doing things correctly.

I've literally seen people at meetings collecting information from poster presentations, so it's unreasonable to claim this would just be a CV fluffer; and if it is, why put your time into something that only benefits yourself?
 

You miss the point. Collecting retrospective data on a simple question, say, differences in length of stay at your hospital for community-acquired pneumonia between patients with culture positivity and those with unknown microbiology, doesn't need a "statistician". Simple software and data are all that is needed.
 
I think you miss the point if you don't understand how severely confounding can lead you astray when you only look at culture status. You can very easily conclude the wrong thing if you don't account for other factors that influence LOS, or if you employ the wrong approach. If you don't account for anything else that can influence LOS, what use is that question to clinicians or patients (and who would accept that as a research question as-is)?
You just want to tell people that group A stayed longer than group B, but we don't know how valid that inference is once we include other realistic drivers of LOS.

What would you propose is a good plan to answer the question?

This then comes back to if it isn't a good question and only serves to inflate your CV for the publish or perish culture, why are you doing the work?

You're right that some scenarios aren't very complicated, but the vast majority of situations physicians designate as "simple" aren't actually that simple or straightforward. You didn't really define a full research question above, so your question may end up being simple, but without more clearly defining "differences" (differences in what: the mean, the median, the variability? That choice will change dramatically how you approach the question, as will the "why") and why the question is being asked, it's only halfway clarified.
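A toy example makes the confounding worry concrete. All of these numbers are made up for illustration: suppose disease severity drives both culture positivity and LOS, while culture status itself does nothing. The crude comparison then shows a big gap that disappears the moment you stratify by severity:

```python
from statistics import mean

# Hypothetical, entirely invented records: (severity, culture_positive, los_days).
# Severe patients are both more likely to be culture-positive AND stay longer.
patients = (
    [(1, True, 9)] * 6 + [(1, False, 9)] * 2   # severe: LOS 9 regardless of culture
    + [(0, True, 4)] * 2 + [(0, False, 4)] * 6  # mild:   LOS 4 regardless of culture
)

def mean_los(rows, culture):
    """Mean length of stay for one culture-status group."""
    return mean(los for sev, pos, los in rows if pos == culture)

# Naive (unadjusted) comparison: looks like culture positivity lengthens stay.
naive_gap = mean_los(patients, True) - mean_los(patients, False)

# Stratified by severity: within each stratum the gap vanishes.
severe = [r for r in patients if r[0] == 1]
mild = [r for r in patients if r[0] == 0]
severe_gap = mean_los(severe, True) - mean_los(severe, False)
mild_gap = mean_los(mild, True) - mean_los(mild, False)

print(naive_gap, severe_gap, mild_gap)
```

Here the crude gap is 2.5 days, yet within each severity stratum culture status makes no difference at all; reporting only the crude number would mislead readers about what drives LOS.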
 

You aren’t trying to answer all questions or address all confounders with a simple retrospective review and statistical analysis on a simple question in a poster. It’s simply a snapshot of interesting data. A contribution to the conversation. A potential jumping off point. You draw your best conclusions and make a short discussion.

I would wager I’ve done one or two more of these than you and can tell you that you are overreacting.

If your problem is that these types of small brief and limited investigations occur then I would humbly suggest you don’t really understand the scientific process as much as you think you do.
 
I agree with @jdh71, you're overreacting. Yeah, you're going to get a halfway-clarified answer, but that's better than not even recognizing that there is a question, which is where you are before someone puts in the time to do these sorts of retrospective reviews. You're correct that "publish or perish" culture is real and is a problem in some instances, but declaring research invalid just because it's retrospective or done without stats support goes too far.

And frankly, the point behind the OP bothering to do this work isn't to revolutionize whatever field he wants to subspecialize in. It's to show that he's motivated and cares enough about matching a subspecialty that he's willing to do it even when he's a tired, burnt out resident, and that even if he isn't coming from a prestigious program he will be productive in fellowship.
 
When I opened up my match letter and realized I wouldn’t be going home for another three years after working so hard the past four I just stood up and walked out. I know how you feel but in those three years I met my wife and my bff clone brosef @AlmostAnMD . In the end though it works out if you make the best of it. Don’t give up on a fellowship. Work hard, make connections, be positive, and it can happen. Good luck.
This. My disappointment and shakeup of my plans changed my life in the best way. OP, charge in to your residency full steam and don't look back. The "what ifs" will kill your spirit.
 
You aren’t trying to answer all questions or address all confounders with a simple retrospective review and statistical analysis on a simple question in a poster. It’s simply a snapshot of interesting data. A contribution to the conversation. A potential jumping off point. You draw your best conclusions and make a short discussion.

I would wager I’ve done one or two more of these than you and can tell you that you are overreacting.

If your problem is that these types of small brief and limited investigations occur then I would humbly suggest you don’t really understand the scientific process as much as you think you do.
Sounds an awful lot like you're trying to justify half-assing a research question into a poster, instead of doing the work to make a more fleshed-out manuscript, so you can pad the CV.

Then, surprise, you go for the appeal to authority: "I've done more of this than you have, therefore I'm right," followed by ad hominem. That isn't a real argument, and it's funny how many physicians duck into the "well, I've published X times, so yeah, I'm great at research" hole. My problem isn't that these investigations occur; it's that people don't employ the appropriate experts to make sure things are done well, and then they try to diffuse the responsibility to get the right people involved. For whatever reason, statistics is a discipline that everyone and anyone feels qualified for because they have an Excel sheet or SPSS...

I'm coming back to the primary issue: most physicians (whether MD, DO, MD-PhD, MD-MPH, whatever it may be) do not understand or use statistics well and often do so incorrectly. The worst part is that, on average, they don't admit how little they know, and hence have contributed a huge amount to the replication and reproducibility crisis. I don't think your prior experience making posters or publishing manuscripts means you're well versed in the appropriate use and interpretation of statistics (you might be, I don't know you, but you might not be); often the people reviewing have the same background you do, which may be clinically outstanding but sorely lacking in the statistical aspects. This circles back to the issue that, without an adequate understanding of statistics, how can you judge when you need a statistician? Statistics and medicine are quite similar in that we don't know what we don't know, and the problems like to hide in that region.

Again, two issues: you're clearly trying to justify padding a CV with a "quick turnaround" rather than a meaningful paper, and you may be overzealous about your statistical abilities because "you've done this before." If you just admitted it was padding the CV rather than adding quality to the field (i.e., doing the full paper), I wouldn't be calling it like it is...

I agree with @jdh71 , you're over-reacting. Yeah you're going to get a half-way clarified answer, but that's better than not even recognizing that there is a question, which is where you are before someone puts in the time to do these sorts of retrospective discussions. You're correct that the "publish or perish" culture is real and is a problem in some instances, but to declare that research is invalid just because it's retrospective or done without stats support is going too far.

And frankly, the point behind the OP bothering to do this work isn't to revolutionize whatever field he wants to subspecialize in. It's to show that he's motivated and cares enough about matching a subspecialty that he's willing to do it even when he's a tired, burnt out resident, and that even if he isn't coming from a prestigious program he will be productive in fellowship.

Again, the danger in both statistics and medicine is what we don't know; physicians are almost never qualified to speak adequately on the former but frequently do so as if they are. A halfway-clarified answer isn't the issue. If you read my post, I said @jdh71 didn't even flesh out a research question, so it's not a research question that's halfway answered; it's an ill-defined question that doesn't seem worth answering in its current form. And I'd argue the best thing is to sink time into one valuable question investigated thoroughly rather than a bunch done half-assedly that you never go back to finish, just because you wanted to pad your CV. Again, if you reread my post: I don't think retrospective studies are invalid, and I haven't said that. I've suggested the OP do the prudent thing and involve someone who knows what to do with the other aspects of the project, because probably over 90% of physicians don't know squat about the stats despite their publication records (most journal reviewers aren't statistically equipped).
Sadly, most of us won't revolutionize a field, but that doesn't preclude us from doing research the right way and involving experts outside our particular niche. You can still do good, non-groundbreaking research, though. And I'd hardly call a few superficial posters "productive." Everyone knows the time it takes to do those.

It's bizarre how often doctors will call pharmacy, cards, renal, but almost never dream of calling a statistician simply because "bob is good with SPSS" :laugh:

Brief summary since there was confusion on whether I thought these studies are bad:
1) Retrospective studies aren't invalid, but if you're lazy they're likely to be
2) Shirking responsibility to consult experts on any research you want to somehow disseminate isn't congruent with good science
3) Publishing, even in good journals, doesn't usually indicate anything about the statistical quality or appropriateness of your work, because the blind often lead the blind (statistically, not clinical medicine) in peer review; only rarely does a qualified statistician review it
4) Your work may be clinically brilliant, but that's a separate element from the statistics, and both are critical to a good, sound paper
 
Who hurt you....it's not that deep...
 

There is nothing "half-assed" or incorrect about a retrospective chart review for an abstract and poster. Not everything is a paper or needs to be a paper. The point of an abstract isn't to write a paper.

You do NOT need a statistician for these simple questions.

There was no ad hominem.

I did not make a fallacious appeal to authority. My authority is that I've done more than a few of these. They weren't half-assed. The stats were straightforward. They didn't need a statistician. You are simply wrong.
 
There is nothing “half assed” or incorrect about a retrospective chart review for abstract and poster. Not everything is a paper or needs to be a paper. The point of the abstract isn’t to write a paper.
Your example of a valid research question seemed pretty half-assed. The half-assed part is pretending you don't need an expert. You've yet to make any kind of case for how you have enough expertise to judge whether you need a statistician.

You do NOT need a statistician for these simple questions.
Again, how are you so sure? I'm suggesting (with plenty of evidence) that physicians aren't as competent at statistics as they like to think, but you keep stomping your feet without any real justification for your response.

There was no ad hominem.
Yet...
If your problem is that these types of small brief and limited investigations occur then I would humbly suggest you don’t really understand the scientific process as much as you think you do.

Hot on the tail of "I've done this more"...you set up the idea that you're the authority, then suggest I supported a statement I don't (that merely these smaller studies are problematic...they aren't, but lack of expertise is problematic), then you lay the claim that I don't have understanding of the scientific process. You don't really have any real background information except the fact that I've disagreed with you, yet you tried to direct your argument at me rather than against the idea I keep bringing up: physicians are bad at statistics and generally lack the knowledge to know when they're out of their depth with it, which occurs relatively quickly.

I did not make a fallacious appeal to authority. My authority is that I've done more than a few of these. They weren't half assed. The stats were straightforward. They didn't need a statistician. You are simply wrong.
That is an appeal to authority rather than addressing the point that I'll continue to make: physicians are not good at statistics, in general, and they frequently don't admit when they're out of range. This is really straightforward, and you don't address it each time you come back to say "but nah I'm good and I'm right."

So, are you saying your research experience means you have adequate statistical knowledge to judge the half-assedness of the statistical components of your projects? The question remains, how are you a valid judge of whether you needed a statistician?

If I drive my car daily does that mean I have enough knowledge to assess why there's smoke coming from under the hood? I mean, I drive every day, sooo

And to be clear, I'm not questioning any of your clinical knowledge.
 
Do you think there is a problem, in general, with biomedical research quality?
Biomed research brought us CAR T cells and checkpoint blockades. I think in general we can always improve, but think about the OP's position. Do you think his institution will have a strong and vast biostat department?
 
Biomed research brought us CAR T cells and checkpoint blockades. I think in general we can always improve, but think about the OP's position. Do you think his institution will have a strong and vast biostat department?
So you didn't really answer my general question about biomedical research. You can always find examples. In fact, a lot of the good research does involve actual statisticians as team members, so I wouldn't be surprised if the CAR T cell and checkpoint inhibitor studies have statisticians. There are also exceptions to the rule, but overall, biomedical research suffers from many of the problems that are prevalent in the social sciences and basic sciences (i.e. overconfidence in one's own ability with statistics and the desire to go for quantity over quality). Also, finding a statistician is part of rationing resources and directing efforts toward the more valuable projects (but I think the availability of biostats to him is irrelevant since he failed to mention it the many times he insisted a statistician is not necessary, which implies he has some kind of knowledge to make that assessment).

For decades, there have been articles on the misuse of statistics by physician researchers, including in top journals. All these recent articles (JAMA had one not too long ago) about changing significance cutoffs and moving toward confidence intervals over p-values is greatly prompted by physicians and biomedical researchers (non statisticians) misusing or misunderstanding the methodologies.

The attitude of not needing a statistician comes from the idea that they're primarily number crunchers and programmers. I've done work with some people who are considered prominent clinician researchers and they almost verbatim have put forth this view of statisticians and statistics. They make it through peer review because the reviewers may be clinically strong but are statistically weak and therefore, can't do that part of the review adequately (which is why you see so many "Table 1/Demographics" tables with a million p-values...it makes zero sense to have p-values in those tables).
 
So you didn't really answer my general question about biomedical research. You can always find examples. In fact, a lot of the good research does involve actual statisticians as team members, so I wouldn't be surprised if the CAR T cell and checkpoint inhibitor studies have statisticians. There are also exceptions to the rule, but overall, biomedical research suffers from many of the problems that are prevalent in the social sciences and basic sciences (i.e. overconfidence in one's own ability with statistics and the desire to go for quantity over quality). Also, finding a statistician is part of rationing resources and directing efforts toward the more valuable projects (but I think the availability of biostats to him is irrelevant since he failed to mention it the many times he insisted a statistician is not necessary, which implies he has some kind of knowledge to make that assessment).

For decades, there have been articles on the misuse of statistics by physician researchers, including in top journals. All these recent articles (JAMA had one not too long ago) about changing significance cutoffs and moving toward confidence intervals over p-values is greatly prompted by physicians and biomedical researchers (non statisticians) misusing or misunderstanding the methodologies.

The attitude of not needing a statistician comes from the idea that they're primarily number crunchers and programmers. I've done work with some people who are considered prominent clinician researchers and they almost verbatim have put forth this view of statisticians and statistics. They make it through peer review because the reviewers may be clinically strong but are statistically weak and therefore, can't do that part of the review adequately (which is why you see so many "Table 1/Demographics" tables with a million p-values...it makes zero sense to have p-values in those tables).
I agree, but I don't think this is really helpful for the OP's original problem
 
I agree, but I don't think this is really helpful for the OP's original problem
I think it is helpful for the OP just to ask what's available and use what's available. I've seen a school that has "rip roaring" departments for research without much statistical support (bad idea, usually), and I know of other schools with quiet departments (not much research) where there is great statistical support available. So, I think the OP should at least make the attempt to find this before getting deep in a project and wasting a lot of time like many med students/residents/attendings do to themselves and, worst of all, contributing poorly in the name of a CV line (which is more likely without the right help).
 
Your example of a valid research question seemed pretty half-assed. The half-assed part is pretending you don't need an expert. You've yet to make any kind of point how you have enough expertise to judge if you need a statistician.

Again, how are you so sure? I'm suggesting (with plenty of evidence) physicians aren't as competent at statistics as they like to think, but you keep stomping your feet without any real justification for your response.

Yet...

Hot on the tail of "I've done this more"...you set up the idea that you're the authority, then suggest I supported a statement I don't (that merely these smaller studies are problematic...they aren't, but lack of expertise is problematic), then you lay the claim that I don't have understanding of the scientific process. You don't really have any real background information except the fact that I've disagreed with you, yet you tried to direct your argument at me rather than against the idea I keep bringing up: physicians are bad at statistics and generally lack the knowledge to know when they're out of their depth with it, which occurs relatively quickly.

That is an appeal to authority rather than addressing the point that I'll continue to make: physicians are not good at statistics, in general, and they frequently don't admit when they're out of range. This is really straightforward, and you don't address it each time you come back to say "but nah I'm good and I'm right."

So, are you saying your research experience means you have adequate statistical knowledge to judge the half-assedness of the statistical components of your projects? The question remains, how are you a valid judge of whether you needed a statistician?

If I drive my car daily does that mean I have enough knowledge to assess why there's smoke coming from under the hood? I mean, I drive every day, sooo

And to be clear, I'm not questioning any of your clinical knowledge.

I'm not sure exactly what you are looking for here? Do I need to give a big list of my bona fides here? Like a list of publications, abstracts, and posters, along with the number of years in the lab?

I will apologize if you were not suggesting that doing simple or basic retrospective analysis were just "CV padding". You mentioned a "manuscript" and dealing with all confounders - this isn't what you are doing with a poster and abstract, so again, I will apologize if you actually know a lot about a lot, but your words didn't make it sound like you did.

At this point all you can really do is accuse me of lying when I say I know when I'm out of my statistical depth, and I am not a statistician and have used them with those papers I have published. You simply don't need a statistician for these single question chart reviews for poster presentations.

At this point though, this discussion is way far afield of OP. Good luck.
 
I'm not sure exactly what you are looking for here? Do I need to give a big list of my bona fides here? Like a list of publications, abstracts, and posters, along with the number of years in the lab?
Nope, but you conveniently keep saying you've done many of these which maybe supports a clinical understanding but does zero for the stats. I've actually suggested that it's irrelevant how long your CV is, but if this is where you hang your hat of pride, I can understand why you want to keep bringing it up.

I will apologize if you were not suggesting that doing simple or basic retrospective analysis were just "CV padding". You mentioned a "manuscript" and dealing with all confounders - this isn't what you are doing with a poster and abstract, so again, I will apologize if you actually know a lot about a lot, but your words didn't make it sound like you did.
Neither do yours...it's pretty basic to know you can't do something about all confounders without randomization, so nope, I wasn't suggesting all, the idea is to thoughtfully minimize and account for confounders in an observational study. It's pretty clear where your stats understanding is based on this back and forth. Just be careful what you can make yourself believe.

At this point all you can really do is accuse me of lying when I say I know when I'm out of my statistical depth, and I am not a statistician and have used them with those papers I have published. You simply don't need a statistician for these single question chart reviews for poster presentations.
Where did I accuse you of lying? Several times in the thread you've made things up, and that's another one. I believe that you genuinely think you know your limits with stats, but this isn't necessarily congruent with reality. One question or twenty doesn't determine whether you need a statistician, it's the type of question and the purpose and the kind of data you're working with and your experience. You haven't offered one piece of support for why you'd be an adequate judge aside from merely saying this is the case.
 
Again, the danger in both statistics and medicine is what we don't know; physicians are almost never qualified to adequately speak on the former but frequently do so as if they are. I think a half-way clarified answer isn't the issue: if you read my post, I said jd71 didn't even flesh out a research question...so it's not a research question that's half way answered, it's just an ill-defined question that doesn't seem worth answering in its form. And I'd argue that the best thing is to sink time into one valuable question to investigate thoroughly rather than a bunch, half-assedly, that you never go back to finish just because you wanted to pad your CV.
The perfect is the enemy of the good. Neither of the people you're arguing with discount the importance of statistics, I think you just underestimate the difficulty of getting their input. Yes, ideally you would get a statistician to approve what you're doing, but I'm at a big fancy academic institution and *I* have trouble getting time with a stats person for a project where I *KNOW* I need their help. If the OP waits to do everything "by the book" they will literally get nothing done.

At the OP's stage in his/her career, there's nothing wrong with a little bit of CV padding. Again, the POINT is to show effort and an ability to be productive so that they can get the fellowship they want. It's literally the only reason programs have a "residency research day" where people can trot out their posters to say they have something to show for the effort they've put in, even though we all know there's only a handful that will actually have much of an impact on anything. If you have a problem with the game that's fine, but that's irrelevant to the OP's situation. The OP certainly needs to play the game, and can do so even at a less academic residency program.
 
The perfect is the enemy of the good. Neither of the people you're arguing with discount the importance of statistics, I think you just underestimate the difficulty of getting their input. Yes, ideally you would get a statistician to approve what you're doing, but I'm at a big fancy academic institution and *I* have trouble getting time with a stats person for a project where I *KNOW* I need their help. If the OP waits to do everything "by the book" they will literally get nothing done.

At the OP's stage in his/her career, there's nothing wrong with a little bit of CV padding. Again, the POINT is to show effort and an ability to be productive so that they can get the fellowship they want. It's literally the only reason programs have a "residency research day" where people can trot out their posters to say they have something to show for the effort they've put in, even though we all know there's only a handful that will actually have much of an impact on anything. If you have a problem with the game that's fine, but that's irrelevant to the OP's situation. The OP certainly needs to play the game, and can do so even at a less academic residency program.
I'm not talking perfectly by the book, but physicians aren't even in the same section of the library let alone the correct book when it comes to doing stats (and I'm not saying they need to be, but should then get help). If you're at a big program, there may be more volume for them to deal with, they may not do as much consulting with other departments, or they may not be a large stats program. Either way, it sounds like the program has a supply issue and should hire more statisticians rather than compromising the projects you and your colleagues are developing.

I think that it's very likely one of the folks above was using the "I've published so I'm good at stats" argument to suggest he's a reasonable judge of needing a statistician or what qualifies as good statistical practice (but this is why he kept dodging the specific questions about it).

I do have a problem with the game as it's definitely caused a huge cluster-f in research and causes people to think they're more qualified as a one-stop-shop than they are in reality. It's a large reason why many people think statistics is basically number crunching or a deception tool when it's really something that's just misused by uninformed researchers or it's deliberately used to deceive as we have seen in some prominent researchers who get their stuff retracted left and right once the lid is blown off. Padding the CV would be okay if the research was more sound (i.e. quick and high quality projects), but often, and demonstrably, it isn't. It's also bizarre that people consider quantity of line items to reflect more effort than the actual scope of a project (like one larger project that requires more time and effort).
 
Where did I accuse you of lying? Several times in the thread you've made things up, and that's another one. I believe that you genuinely think you know your limits with stats, but this isn't necessarily congruent with reality. One question or twenty doesn't determine whether you need a statistician, it's the type of question and the purpose and the kind of data you're working with and your experience. You haven't offered one piece of support for why you'd be an adequate judge aside from merely saying this is the case.

I've made up nothing. I didn't say you said I lied. If you read carefully you will see that I said all you can do is accuse me of lying (or not knowing what I'm talking about), which is what you are doing.

Moderator has asked us (you) to kindly stop, which I had not seen when I responded to you, but you HAD clearly seen before replying to me again. You can PM me if you really want to.
 
I'm not talking perfectly by the book, but physicians aren't even in the same section of the library let alone the correct book when it comes to doing stats (and I'm not saying they need to be, but should then get help). If you're at a big program, there may be more volume for them to deal with, they may not do as much consulting with other departments, or they may not be a large stats program. Either way, it sounds like the program has a supply issue and should hire more statisticians rather than compromising the projects you and your colleagues are developing.

I think that it's very likely one of the folks above was using the "I've published so I'm good at stats" argument to suggest he's a reasonable judge of needing a statistician or what qualifies as good statistical practice (but this is why he kept dodging the specific questions about it).

I do have a problem with the game as it's definitely caused a huge cluster-f in research and causes people to think they're more qualified as a one-stop-shop than they are in reality. It's a large reason why many people think statistics is basically number crunching or a deception tool when it's really something that's just misused by uninformed researchers or it's deliberately used to deceive, as we have seen in some prominent researchers who get their stuff retracted left and right once the lid is blown off. Padding the CV would be okay if the research was more sound (i.e. quick and high quality projects), but often, and demonstrably, it isn't. It's also bizarre that people consider quantity of line items to reflect more effort than the actual scope of a project (like one larger project that requires more time and effort).
I'm not arguing with you, it's just that what you're saying isn't realistic or helpful to the OP. Shaking my fist at the sky isn't going to make my place hire more statisticians. And you seem to still be missing the point that at the OP's level, he's "playing the game" not to nefariously get himself promoted up the academic food chain but to get a fellowship that he won't get if he refuses to play the game.

I'm done debating semantics around a case where I think we fundamentally agree. Have a nice day.
 
I've made up nothing. I didn't say you said I lied. If you read carefully you will see that I said all you can do is accuse me of lying (or not knowing what I'm talking about), which is what you are doing.
I'm not accusing you of lying; I was asking for some clarification because most MDs don't have that skill set despite their claims to the contrary.

Moderator has asked us (you) to kindly stop, which I had not seen when I responded to you, but you HAD clearly seen before replying to me again. You can PM me if you really want to.
No need for parenthetical add-ins; the mod clearly didn't single either one of us out (perhaps being polite, or just because it takes two to tango, could be either). Maybe you missed the mod's request the first time, but here you are after now acknowledging it 🤣 too easy to get a response from someone who's clearly hanging the hat of pride on the SDN scoreboard :claps:
 
I'm not arguing with you, it's just that what you're saying isn't realistic or helpful to the OP. Shaking my fist at the sky isn't going to make my place hire more statisticians. And you seem to still be missing the point that at the OP's level, he's "playing the game" not to nefariously get himself promoted up the academic food chain but to get a fellowship that he won't get if he refuses to play the game.

I'm done debating semantics around a case where I think we fundamentally agree. Have a nice day.
I didn't consider us to be arguing because we are on the same page but I think things get lost over the internet. I've seen PIs successfully request the department to increase statistical support and it happens frequently, so in those cases it really wasn't shaking a fist at the sky. I'm not missing the point about the OP, but playing the game at any level perpetuates the problem. It creates bad habits early on, but of course, OP can do what he wants and thinks is best for himself. I was just offering some advice to put a bit more effort in (post 18: [get the right people involved so your work is higher quality as all of ours would be]).
 
I think with statistics that get thrown around all the time about "this amount of students match into their 1st choice, their 2nd choice, etc." many people just assume they are locked into something near the top of their list. This makes it even harder to stomach when you open the envelope only for it to list something you were not expecting at all. Especially when you've picked out the perfect apartments and put the final touches on planning your new life in your top 3 programs.

This is where you need to just take a step back and think about the excitement you felt when you were originally granted an interview at any of the programs on your list. A little while ago you were just an applicant, submitting your application with zero idea of what to expect when interview invites started rolling out. I agree with all of the advice about not putting a program on the list if you really do not see yourself ever there and have enough reservations to risk not having a job the upcoming year just so that you do not have to go there. There are multiple opportunities throughout the residency application process to vet your programs. First, when you are choosing programs to apply to. Then again, when you are choosing which interviews to attend. Lastly, when you are submitting your formal rank list. Chances are, if this program made it through all three of these you either at some point saw yourself training there or you did not apply broadly enough and had limited options from your choices made when sending applications out.

Luckily, most of us are relayed this information and might leave a few places off the list if we have that luxury. With this being said, we should all be happy that we have jobs whether it was our first or last choice. It is only temporary and when you are finished with training, you have the freedom to be more selective with where you begin your life as an attending. Ultimately, you will get out of your training what you put in regardless of where you end up and that is especially true with all the learning resources we have at our fingertips in today's world.
 
It is only temporary and when you are finished with training, you have the freedom to be more selective with where you begin your life as an attending. Ultimately, you will get out of your training what you put in regardless of where you end up and that is especially true with all the learning resources we have at our fingertips in today's world.
I think there is something to be said, however, that while the training and location may be temporary, the names follow you forever. Depending what and where you want to practice, the pedigree matters either to the people hiring you into a practice, an academic institution, or as their physician. There are cases where this doesn't matter, but the pedigree follows you throughout your life; if we've never met, the only indication I have about you and your past is your pedigree.

I agree with a lot of your post that we should be grateful, but that doesn't mean people shouldn't be bummed out when things don't go as planned. Allow time for a readjustment, and then get back to kicking butt--otherwise, there's an issue.
 