Need to whine


Ollie123 (Full Member, 15+ Year Member; Psychologist; joined Feb 19, 2007)
Sorry all, but just needed to share this with others who may have gone through the process.

After getting a good score on the first submission of my F31, with 2/3 reviews being overwhelmingly positive, a third reviewer clearly just being nitpicky, and a score that was right on the fence, I thought I was in good shape for the resubmission. Found out today I somehow did WORSE on the resubmission. No summary statements yet, so I don't know what transpired. The committee had a lot of turnover, so I suspect it just went to different reviewers.

Incidentally, my advisor had the same thing happen with an R21. Gotta say the more experience with it I get, the less confidence I have in the peer-review process. Regardless of whether these are good or bad applications, the apparent lack of reliability is worrisome. Especially with funding having gotten so tight, it seems like more and more of the process is left up to luck of the draw.

Anyone up for doing some research on peer review itself?
 
Yeah, I'm not happy. My sponsors were shocked - they both thought this was a slam dunk on the revision given we basically had only minor revisions and I only needed to eke out another couple of points to get it funded. Neither had ever seen this happen before except when people refused to make changes or did something controversial.

From what everyone seems to be saying, NIH has gotten increasingly arbitrary. More and more folks here seem to be adopting a scattershot approach (i.e., just submitting lots and lots of applications), since the level of competition has reached a point where scientific merit provides minimal assurance and it's as much driven by the response biases of the particular reviewers assigned. One of my sponsors says that for R01s, reviewers have started giving all 1s if they want something funded, even if unjustified, since with paylines < 8% even a couple of 2's can mean it won't be funded.

Waiting to see what my actual reviews say...I'm curious if I just got "screwed" or if they actually did find some significant flaws that neither the first committee nor my diss. committee picked up on.
 
NIH has also gotten increasingly uncertain, particularly in an election year. It might not impact your application directly, but it is on everyone's mind.

A while back you could sometimes get funding even if your proposal wasn't in the top 20 percent. Now I have even seen rejections within the top 10 percent. It's tight.

Agreed that the reviewers are arbitrary - although as someone mentioned, close contact with the PO can sometimes mitigate the reviews.

Best of luck in the future with this proposal.
 
I've been in touch with my PO - we know her quite well actually (have about 6 other applications with her across the lab and she's worked with my advisor/sponsor for years). The political aspects (from my understanding) actually play a stronger role after it is scored, at least with the institute we work with. The anonymity of the scientific review somewhat restricts the schmoozing, though it obviously is still a name-game to some extent.

I'm hoping she can provide some insight. She was also very optimistic going into this and was very positive when I discussed the revisions with her. I'm not sure if she was physically present for the meeting, but I'll be curious to see what she has to say. Unfortunately, she likely won't be able to mitigate it being "Not discussed" 🙁 My only hope for this one is that the reviews will be so off-the-charts crazy that we'll be able to convince her to go to bat for the first submission. Looking into other options for it though - it straddles basic science enough that I might be able to spin it into an NSF app.
 
I'm so sorry to hear this - it is so completely frustrating! I'm not sure if it will help to normalize this experience for you, but I have heard some version of this story several times in the last year or so. Or its "cousin" - the well-scored, generally favorable review that moves all of 2-3 points.

Program staff can be helpful, but they are not typically in the room during the review (the SRO is). I would nevertheless call and ask for any insight. Lately, the big issue can be innovation. Even if the science is solid, a lack of innovation (real or perceived) can sink a proposal.

I work in a soft money environment, and even the most senior successful folk are anxious about the funding climate. A lot of people are heading for the job market this fall - which is not good news for people looking to land a faculty position fresh out of internship or postdoc. I predict a lot of assistant professors from medical schools jumping ship this year and next... (I'm considering it myself, even though my research is much easier to do in a medical environment).
 
Well, that's just awesome. Maybe it'll improve by the time I'm ready to apply for jobs.

Sorry to hear, Ollie. I think the peer review process is very flawed.
 
If Ollie can't get grant-funded, I can't help but think we're *all* out of luck there. 🙁 Sorry to hear.

I think the peer-review process can work very well, but it can also have incredible flaws. It's hit or miss.
 

For all of the time that goes into creating a grant proposal (whether it's an F grant or an R01), it's ridiculous that all you get is a really busy set of reviewers who skim your proposal and then write up comments. They have so many proposals to review, and it is just an intense process for them. So I am not surprised that there is variability among reviewers. It seems that if it were possible to have resubmissions reviewed by the same people, that would be ideal. But my understanding is that NIH invites people out to review proposals, and it is not necessarily something one does consistently.
 
Weren't there rumblings a while back that the NIH (or maybe it was another big funding source?) changed the reviewer pool, and the result had been a much lower % of PSYCH/SOCIAL SCI grants being given? I could have sworn I heard one of our research-only faculty lament about some kind of process/review change that really mucked things up.
 

I hadn't heard anything about this at NIH (and likely would have). These things definitely vary across institutes and even committees, so it's entirely possible it's true for a particular area. Even if true, my grant was heavily pharmacology/neuroscience, so I'm somewhat doubtful this would be a major factor. I sent mine through NIDA (though it's reviewed through CSR), but mine is right in line with work that NIDA loves to fund - with Volkow heading it up, they have become heavily focused on neurobiology/imaging and similar basic science work aimed at understanding processes involved in addiction.

Others - thanks for the kind words. It's just very frustrating... I was prepared for the possibility that it wouldn't do well on the first submission. I was not expecting that to happen on the second submission. Maybe do better but still not "good enough" given low paylines, but I NEVER thought it would do worse.

As for peer review, to me it's just a matter of "least worst" rather than a good system. I've long believed that one cannot tell much from a single manuscript because of the way peer review works. It's a review of the presentation of the science... not the science itself. I can spin things however I want. As I've mentioned in other threads, I'm obsessive with my analyses, and if there is one thing I've learned, it's that doing so will almost assuredly decrease your confidence in your results. I can take one aim, analyze it 10 different (perfectly legitimate and acceptable) ways, and get wildly discrepant results. We can argue we should specify what we will use a priori, but this is not always feasible for some designs/circumstances, and it still leaves a whole lot up to chance since there is rarely a single "right" way to analyze data.
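(A toy sketch of that point, in standard-library Python. The data are simulated and the analysis choices are generic illustrations, not anything from the actual grant: four equally defensible ways to summarize the same two-group comparison give noticeably different answers.)

```python
# Hypothetical illustration of analytic flexibility: one simulated
# two-group comparison, four legitimate summaries, four different answers.
import math
import random
import statistics

random.seed(42)

# Simulate a small study: two groups with skewed scores and a few big values.
group_a = [random.lognormvariate(0.0, 0.6) for _ in range(30)]
group_b = [random.lognormvariate(0.2, 0.6) for _ in range(30)]

def trim(xs, prop=0.1):
    """Drop the top and bottom `prop` fraction of values (a common outlier rule)."""
    xs = sorted(xs)
    k = int(len(xs) * prop)
    return xs[k:len(xs) - k]

# Four defensible specifications of "how much higher is group B?"
specs = {
    "raw mean difference": statistics.mean(group_b) - statistics.mean(group_a),
    "log-transformed":     statistics.mean(map(math.log, group_b))
                           - statistics.mean(map(math.log, group_a)),
    "10% trimmed means":   statistics.mean(trim(group_b)) - statistics.mean(trim(group_a)),
    "median difference":   statistics.median(group_b) - statistics.median(group_a),
}

for name, est in specs.items():
    print(f"{name:22s} {est:+.3f}")
```

None of these choices is wrong; they just answer slightly different questions, which is the "whole lot up to chance" problem when reviewers see only one of them.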

I think some of the major problems with grant reviews are:
1) General committees. Many people on these committees will simply not understand enough to evaluate things beyond the broad conceptual level. This can work for or against you, of course, but ideally I think it would be a little closer to manuscripts, where decisions are made from a broader pool based on the individual grant. I realize that is somewhat impractical, but I think it's important to keep in mind.
2) Reviewer turnover. This is my best guess right now at what cost me. The way the grant review process works, it's like every manuscript revision got sent back out for review... but to different reviewers. You have no idea what the results will be, and it increases the noise factor. In an ideal world we would have objective criteria and this would actually be a good thing (more eyes on it, etc.), but in practice the noise seems to outweigh the benefits.
3) Reviewer calibration. This matters even more with funding being tight (as I noted previously): one reviewer who trends towards the middle of the scale may have rated your grant more highly than any other grant they've reviewed in the last 5 years, but that middling score is still enough to kill the application in the current funding climate.
4) Related to #2, lack of objective criteria. I recognize this is very, very hard to do, but as psychologists (to be) we are arguably the best equipped to do it. Too much is left to the whims of the reviewers. One person may like the idea, another may not. I'm convinced it is all a dice roll at this point. I've seen people get soft reviews from reviewers who probably like everything. Maybe that's what I got on my first submission, maybe I got particularly harsh reviews this time; I don't know. Regardless, the outcome of a grant submission should not depend as heavily as I wager it does on who the SRO decides to send it to.

I'm clearly frustrated with the experience, especially since I was relying on this for a career jumpstart given my thesis simply did not pan out. Trying to sort out my options right now in terms of other grants to apply for, how/if this should change my plans regarding internship, post-doc, where I focus my efforts during my remaining time in grad school, etc. For now - I'm sublimating my frustrations by working on manuscripts. Need to make sure they can't turn down the next grant application!
 
Bump.

Just got my summary statements so thought I would update. There is really not much to it aside from the fact that it went to different reviewers. The two reviewers who LOVED it last time and gave almost no feedback were of course the ones replaced, and the one who hated it was retained.

No methodological flaws or anything of that nature got caught, and all were very positive about it. They just wanted different things from the other reviewers (e.g., the first review requested I "Incorporate more exposure to clinical issues in the training," so I added this, and the new review says "Clinical issues seem distal to the heavily neuroscience-based project"). All other comments were mostly "Better explain x and z" rather than substantive changes. Really these are just decision points about what to include given the tight page limits, and we gambled right the first time and wrong the second.

Mentors basically attribute it to noise in the system - had it gone to the same reviewers both times (or even if Version 2 reviewers had come first instead of last) my score could well have been in the mid-teens rather than not discussed.

One final consideration is regarding the funding lines themselves. I was "Not discussed" with all 1's, 2's, and 3's (except for one 5). In past years that would have easily made it into the discussion pool for an F31, so either something weird is going on with the scoring that is pushing scores in that range to a lower percentile, or NIH CSR has shifted towards not discussing a larger percentage of applications given the lower funding lines at present.

Anyways, I'm over it and already have other applications in, but wanted to share in case it benefits others. One application's done, one more on its way out the door in two days, two more planned for the rest of this year, and a half dozen pubs to get out. NIH will rue the day they decided not to fund me 😉
 

Thanks Ollie, this is good to know. Good luck with your other applications.
 
I can totally relate to the frustration! My F31 just went through its first round and was Not Discussed. Overall my scores were really good, mostly 1's and 2's, but one reviewer was extremely nitpicky and gave it three 5's! And they clearly hadn't fully read my application: they lamented in multiple sections that I didn't have any training in SEM (which is a major part of the analysis), when I took an entire class on SEM last fall (which is listed on my biosketch). So frustrating. I'm hopeful that I can revise and clarify enough to get a better score next time, but it makes me so nervous that I only have one more shot and no guarantee of getting the same reviewers! It just seems so arbitrary... I'm also really annoyed that I only got my summary statement on Wednesday, and our university has a one-week clearance for grant submissions, so I missed the August 8th deadline and now have to wait for December 8th to resubmit.
 
Yeah, it's unfortunate. We've had some other oddities recently, and my mentors are wondering if NIH is having more difficulty getting qualified reviewers right now, since the big-name folks are spending more time writing grants given the current funding situation, and likely taking that time away from miscellaneous other duties (e.g., grant reviewing).

One of my mentors (who is PI or Co-I on four R01s and a host of smaller grants right now - so not exactly bitter about the process!) says he is fairly convinced the review system is about 80% error variance at present, and that it's mostly luck of the draw on reviewers as long as you cross the threshold into "decent." One of our more successful faculty at the institution I'm at doesn't even really write his own grants anymore - he writes outlines, hands them off to medical writers he hired to do the bulk of the writing, and then looks them over before submitting. I doubt many of the applications are outstanding (for obvious reasons), but he's able to submit a truly obscene number of them relative to others, and some get through. The lesson seems to be that it's less important to worry about minor details and more important to just submit lots of applications to different places and hope something gets through. Totally not how we were trained to do things here, but c'est la vie.
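(A back-of-envelope Monte Carlo of that "mostly error variance" claim. The 80% noise share and 8% payline are just the figures quoted in this thread, not NIH data, and everything else is simulated: under those assumptions, only a minority of funded applications would actually be among the truly strongest 8%.)

```python
# Hypothetical simulation: true merit plus reviewer noise, fund the top 8%
# of observed scores, then ask how often a truly top-8% application wins.
import random

random.seed(0)
N = 100_000          # simulated applications
ERROR_VAR = 0.80     # assumed share of score variance that is reviewer noise
PAYLINE = 0.08       # assumed fraction funded

# Observed score = signal plus noise, scaled so total variance stays 1.
merit = [random.gauss(0, 1) for _ in range(N)]
score = [((1 - ERROR_VAR) ** 0.5) * m + (ERROR_VAR ** 0.5) * random.gauss(0, 1)
         for m in merit]

k = int(N * PAYLINE)
top_by_merit = set(sorted(range(N), key=lambda i: merit[i], reverse=True)[:k])
funded       = set(sorted(range(N), key=lambda i: score[i], reverse=True)[:k])

overlap = len(top_by_merit & funded) / k
print(f"Share of funded apps that are truly top-{PAYLINE:.0%}: {overlap:.0%}")
```

With a generous payline the noise washes out somewhat; at a tight payline the funded set and the "best science" set diverge sharply, which is exactly the scattershot-submission logic described above.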
 