If I perform at an average level clinically, what will my options be as a mid-tier MSTP grad applying to PSTPs?

Impact factors are mostly a metric of how many review and epidemiology articles a journal publishes, i.e., they're mostly a sham.

The most successful person in my field (whose research fundamentally changed the focus of the NIH research branch) published that field-changing paper in a journal with an IF of 2.

While I totally agree, this feels a whole lot like when someone cries out, "BMI doesn't account for muscle!" True, but for every beefcake with a 28 BMI and 8% body fat who strolls into some family practice in suburban America there are 99 middle-aged dad-bods who absolutely need to cut the cake out of their diet. For quick judgements, higher IF works just fine. On a case-by-case basis, we can pick out the beefcakes and skip the motivational interviewing.

In general, the higher IF papers that came out of the lab were simply better papers and better projects. It takes far too many words to account for every nuance when telling an anecdote.
 
Maybe, but maybe not. I know a journal editor very well. Good dude; I have drinks with him. The dude is by all means very successful. He also publishes in, and is editor-in-chief of, a journal with an IF of 2-3, which is the median IF of all journals. Similarly, I know another dude, not as personally, but I've interacted with him and he seems like a good person. He publishes essentially only in journals whose IFs are in the 50s. One of those articles tested the hypothesis that oxygen prevents hypoxemia. That's a NEJM article, BTW, IF 100+ last I checked. And for many years the highest-IF journal was a cancer epidemiology journal, which consistently had an IF of 200+ and featured such innovative articles as “Cancer stats this year”.

I will also give my own anecdote, even if you don't like them (though I guess my anecdotes create my own experience). I published a review (i.e., a resume-padding article… huzzah!) in some no-name journal that no one gave AF about. Then COVID came. Incidentally, that journal published the first COVID guidelines out of Wuhan. The next year, its IF was ~40. That article, which drove the IF, was in such broken English that it was basically unreadable. But it gamed the system nevertheless. Which, going back to the original point, is the point. It's all a game. The editors and publishers know this. They just try to get enough pawns to participate. The ones who beat their chests without realizing that are the biggest fools of them all.
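
For anyone who hasn't worked through why one article can swing a journal's IF that hard: the two-year impact factor is just citations divided by citable items, so a single heavily cited paper at a small journal dominates the ratio. A minimal sketch with entirely made-up numbers (not the actual counts for that journal):

```python
# Two-year impact factor: citations received this year to items published in
# the previous two years, divided by the number of citable items from those
# two years.
def impact_factor(citations: int, citable_items: int) -> float:
    return citations / citable_items

# Hypothetical small journal: 100 citable items, ~200 citations -> IF 2.0
print(impact_factor(200, 100))         # 2.0

# Add one guideline article that alone picks up ~3,800 citations.
print(impact_factor(200 + 3800, 101))  # ~39.6 -- the whole journal rides one paper
```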

And don’t get me started on H-indexes and the “super-publishers”.

For better or worse, there is absolutely no metric that proves someone's actual contribution to science, since it's all so divided up and incremental. And even the ones we have… just plain suck. Stick around in academics long enough and this truth becomes painfully obvious. That's why tenure committees tend to stick to the count of first/last author publications. It's about as objective as anything else, but at least it shows you had some stake in the game… maybe.
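
Since h-indexes came up: for readers who have never computed one, it's a crude threshold statistic, which is part of why it says so little about actual contribution. A minimal sketch of the standard definition, with illustrative numbers only:

```python
# The h-index: the largest h such that the author has h papers with at least
# h citations each. Author position, field, and self-citation are invisible to it.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Ten middle-author papers with 10 citations each score the same (h = 10)
# as ten first-author papers with 100 citations each.
print(h_index([10] * 10))   # 10
print(h_index([100] * 10))  # 10
```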
 
Fair enough. I still think the vast majority of what I see in Nature Biotechnology is head and shoulders better than what rolls through Bioconjugate Chemistry or Biotechnology and Bioengineering. If you want to argue about JACS/Angewandte Chemie vs. Nature Chemistry, sure, I'll take a solid JACS paper any day. But if we're talking about top-tier journals vs. dead-average ones, high-IF journals still provide:

1) ideas that are (typically) more meaningful to the field and researchers outside of the field
2) a higher bar for reproducing data (i.e., reviewers ask experimenters to reproduce the results, often in subtle ways)
3) superior collective post-publication vetting (i.e., far more eyes on the research to reveal potential fraud)

I simply trust these papers more. I've reproduced methods from Cell, Cancer Cell, Nature Biotech, and other similar journals with a high rate of success. I can't say the same for lower tier journals.
That's why tenure committees tend to stick to the count of first/last author publications. It's about as objective as anything else, but at least it shows you had some stake in the game… maybe.
To your point about tenure committees, that is definitely not true at my PhD alma mater. It was basically a requirement to get at least one last author publication in a prestige journal. As an example, one professor with about 35 papers in 7 years in solid journals (IF ~5-10), about half of those as last author, was denied tenure. The next year a professor with only 10 papers in 8 years was awarded tenure. However, the second professor published 3 last author papers in journals with IF > 40. In the last 15 years, it's been a de facto rule that a paper in a top journal (e.g., CNS, Nature Biotechnology, Cell Stem Cell, etc...) is required for tenure in this department.
 
Maybe. I find most papers are incremental in knowledge. I would agree that higher-impact-factor journals tend to publish papers that are more "robust". Not robust in innovation per se, but they generally have more sophisticated techniques and provide more experimental support for the proposed mechanism than lower-IF journals, where the details of the mechanism/hypothesis aren't as deep. There are usually more people with deeper pockets behind papers in higher-end journals, and those journals also have some of the highest APC costs, so you get what you pay for. It's all still incremental, though. I can't speak to reproducibility, though that is a global problem. Retractions are also on the rise, which is probably a good thing.

I don't think higher-IF journals are really any less prone to it (see your tenure comment above), but I do think higher-IF journal data is more scrutinized. Not necessarily by peer review, but by other scientists. If you ever get bored, you can go to PubPeer and look for average scientists finding flaws in all sorts of papers (also, make sure your own name doesn't show up in the records). Personally, when it comes to reproducibility, I'm less concerned about the journal itself and more concerned about the origin of the data (though of course, sometimes those things are linked). In some places (e.g., China), publication is tied to salary (or bonuses), and the higher the IF, the higher the bonus. So while there is always secondary gain in publishing (i.e., everyone wants to climb the ladder), that specific monetary gain, I think, muddies the waters quite a bit.

As for tenure, I only know the tenure benchmarks for my institution and one other, though I have a general idea, through peers, of the benchmarks at about a half dozen others. The only thing I can say is that those benchmarks are vastly different. So much so that, for the most part, you can't compare academic ranks beyond a single institution. This creates headaches (and hard feelings) among faculty all the time: "So-and-so got promoted and I didn't." Those aren't invalid concerns, but the process and the benchmarks are left intentionally opaque.

There are also a lot of different tracks these days, as opposed to 30 years ago, when it was tenure/non-tenure (or even tenure only). And how these tracks are measured against one another is also opaque. Where I am, the physician who does research and is trying to get tenure is directly compared to a researcher who only does research. Even though I do 50% research and 50% clinical work, my benchmark is compared to someone who does 100% research. That's not true at every institution, though, and some institutions recognize that this isn't an apples-to-apples comparison.

So that's all to say: it varies tremendously. Of course, it's hard to know whether it really matters in the end, because tenure typically nets nothing. No one knows or cares that you have tenure. There are no perks associated with it. You know it, and maybe your boss knows it, and that's about the extent of who knows it and what it gains you. It's about as productive as chasing a ghost. But that can be said for a lot in academics.
 
The collective environment is more important than the individuals. A lab with 8 brilliant people and 2 "regular smart" people will create an environment where 8 brilliant people are competing and 2 others are rising to the occasion. Everyone will wind up on top publications. Everyone's resume will have a golden hue. However, a lab with 3 brilliant and 7 smart people will create an environment where brilliant alone feels good enough and the merely smart people will have little incentive to try to rise to the occasion. The result is incremental work.

In my experience, high program prestige has a poor PPV for individual researcher brilliance. However, it has an impressive PPV for individual researcher accomplishment.

As an example, we have a postdoc with a PhD from a famous Harvard lab. He's smart, but not brilliant. 3 years into his postdoc he is a middle author on 1 paper. In his 5 year PhD, he co-authored 20 papers, many in top journals, including multiple CNS. Further, my lab had a single year where every new PhD student was top-notch. The whole culture of the lab changed in about 1 year and we pumped out a bunch of IF > 20 papers. Once these students graduated and the mix of students was more like 3:7 instead of 8:2, the overall ambition cratered and the lab was back to publishing voluminous mediocrity in IF 5-10 journals.
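
For anyone rusty on the screening-test framing above: PPV is just true positives over all positives, which is how a prestige label can be a weak predictor of brilliance and still a decent predictor of accomplishment. A toy sketch with invented numbers, not data:

```python
# Positive predictive value: of those flagged "positive" (here, trained in a
# top program), what fraction is truly positive (here, actually brilliant,
# or actually accomplished)?
def ppv(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# Invented numbers for illustration only: of 100 grads from famous labs,
# suppose 15 are brilliant but 70 leave with genuinely strong track records.
print(ppv(15, 85))  # 0.15 -- poor PPV for brilliance
print(ppv(70, 30))  # 0.70 -- much better PPV for accomplishment
```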

That's an interesting observation. When I interviewed for residency, I noticed that the best applicants for the research-track residency had many publications in great journals, although they were mostly middle author. I tracked how they did during residency and realized that nearly all did not come close to their productivity as MD/PhD students. It was a huge drop from the top to average for most. This, your post, and this article (Researchers' Individual Publication Rate Has Not Increased in a Century) make me wonder whether these top applicants found a great environment during the MD/PhD, largely by chance, which they could not replicate a second time during residency. It is also probably why the NIH and the departments I have come across only really count first and last author publications.
 
There are soooo many gift authorships these days. I mean, when you have papers with 3 or 4 authors, you can reasonably assume that all of those people put some work into it. But these days most publications have 10+ authors or even more. When it gets that bloated, you can't assume the people in the middle did anything more than maybe talk to someone for 30 minutes about the science in the paper, or were labmates someone was required to include so as not to generate hard feelings.

The first/last author metric is basically the only way to reasonably assume that an individual contributed significantly to the idea, development, planning, execution, and/or writing of a project; the rest maybe did something, but just as likely they simply know the main authors in some capacity and probably didn't provide much, if any, value.
 
I don't know, I think biology is moving into the era of team science. Physics has been there for a long time.
Gift authorships are a thing for sure, but also a high-impact paper often requires a ton of work and may have very substantive contributions from a large number of authors.
Individual labs organized on the fiefdom model are still around, but this model results in a ton of inefficiency and duplicated effort, and presumably the NIH has realized this, hence the push towards broader cross-institutional working groups, open data sharing, etc.

The collective environment is more important than the individuals.
Yes, absolutely. Again, team science. There are very few areas left where a single individual can make a substantive contribution with only their own labor. I don't think it's at all surprising that people's outputs and success are so strongly dependent on their academic environments.

The thing is that it's actually the microenvironment that matters though, not so much the macroenvironment. Departmental support and the competence and collaboration of people in your individual group or area are really meaningful.
Institutional factors as measured at the USNews level are only relevant at all insofar as they increase the statistical odds of having competent people in your microenvironment.
 
I don't know, I think biology is moving into the era of team science. Physics has been there for a long time.
Gift authorships are a thing for sure, but also a high-impact paper often requires a ton of work and may have very substantive contributions from a large number of authors.
Individual labs organized on the fiefdom model are still around, but this model results in a ton of inefficiency and duplicated effort, and presumably the NIH has realized this, hence the push towards broader cross-institutional working groups, open data sharing, etc.
Maybe. Except R-level RPGs are still by far the most common funding mechanism at the NIH.
 
I don't know, I think biology is moving into the era of team science. Physics has been there for a long time.
Gift authorships are a thing for sure, but also a high-impact paper often requires a ton of work and may have very substantive contributions from a large number of authors.

I think this is a common misconception. Large teams tend to produce incremental work, whereas small teams are more innovative:


Modern scientific teams are mostly composed of a short-term workforce with, at best, narrow scientific experience:


This also explains why experimental physics produces incremental gains. The exceptions tend to involve a few theoretical physicists who produce breakthrough ideas that are then confirmed by a thousand experimentalists (e.g., the Higgs boson). IMO, the innovation happened with the few theoretical physicists rather than the experimentalists, but to each his own.
 
I think this is a common misconception. Large teams tend to produce incremental work, whereas small teams are more innovative:


Your link doesn't say that small teams produce better science; it says both types are needed. (Notably, it makes no mention of lone individuals operating outside any kind of team, consistent with my point that this model is too rare to have any detectable signal.)

This is consistent with the rhythm of scientific progress. You don't just have all disruption, all the time; that would be chaos. There are short periods of revolutionary advancement in which a new scientific paradigm supplants the old one, and then long periods of filling in the details using the fruitful structure afforded by the new paradigm. See Kuhn's "Structure of Scientific Revolutions."

Modern scientific teams are mostly composed of a short-term workforce with, at best, narrow scientific experience:
The ever more transient nature of the scientific workforce is just one facet of the huge problem of the pyramid-scheme structure characteristic of modern science. This has been discussed ad nauseam on this board, but fundamentally it comes down to this: science is by and large not directly profitable, and thus cannot survive in a capitalist economy without artificial support. Said artificial support exists, but it is nowhere near enough money to support everyone who wants to be a scientist. No news there.
 
The thing is that it's actually the microenvironment that matters though, not so much the macroenvironment. Departmental support and the competence and collaboration of people in your individual group or area are really meaningful.
Institutional factors as measured at the USNews level are only relevant at all insofar as they increase the statistical odds of having competent people in your microenvironment.
Absolutely. Within my US News T10 department there were some labs that set you up for success and others that condemned you to mediocrity. The lab next to mine rarely graduated anyone without a high profile paper. One lab in the department struggled to purchase basic supplies and the students inevitably exited to industry roles (and usually non-scientist ones) after publishing 1-2 humdrum papers. At my undergrad institution, which was T5 for pretty much everything, nearly all labs set students up for success, but there were still some highly malignant and non-productive environments, and sometimes these environments absolutely imploded on themselves.

I imagine that at a lower-tier university, say T50, most labs will be humdrum. The people available to you for collaboration will be competent, but often unmotivated and rarely brilliant. The ambition of your PIs will be dampened, either by their own research philosophy or by funding limitations. There will still be some labs that are dynamic work environments creating science worth publishing, and within those labs the opportunities will be just as great as in similar labs at a T5 university.
 
OP: You could give some consideration to doing categorical internal medicine, with the idea of possibly transitioning to the physician-scientist pathway later. The problem with these formal PSTPs is that they lock you in for MSTP Two: Electric Boogaloo, and that might not be so hot.
 