Scientific review: largely random, biased against novelty?

I saw this very interesting fluff article
http://www.theatlantic.com/business/archive/2014/10/why-new-ideas-fail/381275/
with accompanying link to scientific paper
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2478627

that did a careful analysis of the outcomes of the review process for a seed grant application.

Results, which will perhaps be surprising to no one here who has ever applied for money, were that inter-evaluator reliability was quite poor, and that there was a significant penalty for novelty/departure from existing literature, a penalty that got steeper and steeper the more distant the proposal was from themes covered in existing work. The closer the reviewer's own work was to the area of the proposal, the greater the novelty penalty they imposed.

Is there a better way?
Or is this an irremediable result of human nature, and of the inherent impossibility of accurately evaluating proposals that aim to push forward the bounds of existing knowledge?

Thanks for sharing this!

I'm really interested in this topic (how science actually works, whether it can work better) -- and the consensus in the science ethics group I'm part of right now is that there is perhaps something intrinsically contrary-to-the-health-of-science-research about crushing/excessive competition (for resources, anyway), and the attenuation of novelty was one of the consequences that came up during our talks. I think it came up in a discussion of one of the recent slew of NPR articles about problems in biomedical research, specifically. http://www.npr.org/blogs/health/2014/09/09/345289127/when-scientists-give-up

So one possible solution might be to somehow ease competition. But how? There was another NPR article discussing this:
http://www.npr.org/blogs/health/201...uggest-a-few-fixes-for-medical-funding-crisis
And the original article by Harold Varmus and other big guys: http://www.pnas.org/content/111/16/5773

Basically, one of the important short-term solutions seems to be more funding, but that's not always sustainable. Varmus et al also suggest changing the rubric currently used to evaluate grant applications (more emphasis on "novelty, long-term objectives") -- but, again, it's hard to take risks on novelty when funding is short.
 
Cool paper. The novelty-vs-evaluation-score regression (Figure 4; the second graph) shows an inverted-U-shaped relationship. Seems like there is a sweet spot.
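For anyone who wants to see what an inverted-U fit looks like in practice, here is a minimal sketch using made-up data (not the paper's data): a quadratic regression of score on novelty, where a negative quadratic coefficient indicates the inverted U and the vertex gives the "sweet spot."

```python
import numpy as np

# Hypothetical data: novelty on [0, 1], evaluator scores generated with
# an inverted-U relationship (peak at moderate novelty) plus noise.
rng = np.random.default_rng(0)
novelty = rng.uniform(0, 1, 200)
score = 4 - 10 * (novelty - 0.4) ** 2 + rng.normal(0, 0.3, 200)

# Fit score = a*novelty^2 + b*novelty + c; a < 0 indicates an inverted U.
a, b, c = np.polyfit(novelty, score, 2)
peak = -b / (2 * a)  # novelty level that maximizes the predicted score
print(f"quadratic term a = {a:.2f} (negative => inverted U)")
print(f"predicted sweet-spot novelty = {peak:.2f}")
```

With simulated data like this the fitted peak lands near the true value (0.4); on real review data the same regression would recover wherever the actual sweet spot sits.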

The authors seem to suggest that reviewers want to fund novel research, but that there appears to be an inherent bias against uncertainty in human evaluations.

Some things the authors suggest in the discussion:

"1) Priming and coaching of evaluators regarding novel research could heighten awareness (meta-cognition) of the existence and special considerations involved when evaluating novel proposals.

2) A more active description of the issues, and supplementing human evaluator cognition with algorithmic approaches (such as reporting measures of departures from the existing body of research)

3) Unblinded reviews reveal additional information that might somehow be useful in making the difficult assessments required of evaluators. For example, knowledge of the researcher could potentially provide further context for interpreting the merit of a novel proposal."

I personally favor a variant of #2. Have the institutions state how much they really want to invest in uncertain, novel research (percent-wise). Then have humans rate the proposals for quality, and finally correct for novelty using an algorithmic assessment of departure from the existing body of research.
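One crude way to sketch that scheme (all names, numbers, and the set-aside mechanism here are hypothetical, not from the paper): the institution declares what share of slots it wants reserved for high-novelty work, humans supply quality scores, and an algorithmic novelty score decides which pool a proposal competes in, so novel proposals are no longer ranked head-to-head against safe ones.

```python
# Hypothetical sketch: proposals are (name, human quality score,
# algorithmic novelty score) tuples; a fixed share of funded slots is
# reserved for proposals above a novelty cutoff.

def select_proposals(proposals, budget_slots, novel_share, novelty_cutoff):
    """Return the names of funded proposals.

    novel_share: fraction of slots reserved for high-novelty proposals.
    """
    novel = [p for p in proposals if p[2] >= novelty_cutoff]
    conventional = [p for p in proposals if p[2] < novelty_cutoff]
    # Rank each pool by human quality score alone; the reserved slots
    # are what offsets the implicit reviewer penalty on the novel pool.
    novel.sort(key=lambda p: p[1], reverse=True)
    conventional.sort(key=lambda p: p[1], reverse=True)
    n_novel = round(budget_slots * novel_share)
    picked = novel[:n_novel] + conventional[:budget_slots - n_novel]
    return [p[0] for p in picked]

# Example: 3 slots, roughly a third reserved for novelty >= 0.7.
props = [("A", 9.0, 0.2), ("B", 8.5, 0.1), ("C", 7.0, 0.9),
         ("D", 8.8, 0.3), ("E", 6.5, 0.8)]
print(select_proposals(props, budget_slots=3, novel_share=0.34,
                       novelty_cutoff=0.7))
```

In this toy example the novel proposal C gets funded despite a lower raw quality score than B, because one slot is set aside for the high-novelty pool; that is the "correction" in miniature.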
 
Fact is, brilliant science is just as creative as brilliant art. Sure, there are the components of rationality, internal consistency, parsimony, mathematical description, etc. (but isn't the art of Botticelli also internally consistent?), but at its heart brilliant science is about intuition and visualizing the solution many steps away from what's obvious to the rest of us. Consensus does not generate brilliance; consensus brings things down to the mean, the mundane. Instead of assessing whether the operations proposed by the scientist are appropriate for demonstrating the idea, the reviewers are asking for extensive support in the literature, citations, backup plans if things fail, and trashing the overarching ideas as "overambitious" or "unsupported" or "fantasy." The end result is rejection. The end result is nothing but boring and infinitesimally incremental science.
 
2) A more active description of the issues, and supplementing human evaluator cognition with algorithmic approaches (such as reporting measures of departures from the existing body of research)

I personally favor a variant of #2. Have the institutions state how much they really want to invest in uncertain, novel research (percent-wise). Then have humans rate the proposals for quality, and finally correct for novelty using an algorithmic assessment of departure from the existing body of research.

I like that as well. #1 seems less likely to be successful (you are already supposed to identify novelty as a positive criterion) and #3 is already standard for most evaluations, and comes with its own set of problems.

On the other hand there is a part of me that understands why people are reluctant to commit scarce funds to high-risk projects. It does make, as mercaptovizadeh said, for boring science. But I can understand the impulse.

The thing that really bothers me more is actually the lack of agreement between reviewers. I see this regularly, and honestly I cannot think of a single instance in which three reviewers independently identified the same problem in a paper or proposal. If two out of three pick something out you know you need to change it (but that's quite rare in itself). Lack of consensus seems to be the rule rather than the exception.
Unlike the conservatism that makes people reluctant to fund risky projects, this has no upside or justification that I can see.
 
I see what you are saying. I do agree with you that it is unfortunate that novel research isn't funded well.

Just to put in some arguments from the other side: finding problems in a pile of already good proposals can be difficult, so the probability of two reviewers finding the same problem can be slim.

Also, high risk does not mean low quality, but if you are a researcher working hard in an area and someone comes to you with a proposal from left field, you are probably going to be skeptical. Novelty also means less evidence in the literature, so some of the logic in the proposal may not hold together as well as in other proposals, which reads as "faults."

The evidence also shows that reviewers don't like really novel proposals, but they don't like really un-novel proposals either. People tend to fund the middle ground.

I think this problem has been around for a while, by the way. It's probably more prevalent now, but I hear this mentioned a lot in the Nobel Prize lectures they publish every year. What many laureates ended up doing was scrambling together bits of money from other (more boring) projects.
 
I like that as well. #1 seems less likely to be successful (you are already supposed to identify novelty as a positive criterion) and #3 is already standard for most evaluations, and comes with its own set of problems.

On the other hand there is a part of me that understands why people are reluctant to commit scarce funds to high-risk projects. It does make, as mercaptovizadeh said, for boring science. But I can understand the impulse.

The thing that really bothers me more is actually the lack of agreement between reviewers. I see this regularly, and honestly I cannot think of a single instance in which three reviewers independently identified the same problem in a paper or proposal. If two out of three pick something out you know you need to change it (but that's quite rare in itself). Lack of consensus seems to be the rule rather than the exception.
Unlike the conservatism that makes people reluctant to fund risky projects, this has no upside or justification that I can see.

How was science practiced in the "glory days" of the 1800s and 1900s? I mean, Newton was someone who will never be rivaled and seemed to come from the background of natural philosophy more so than today's scientists. The way he uses geometry by analogy in Principia to develop calculus is just difficult to fathom.

But the 1800s and 1900s were a very fertile period scientifically with important advances made by many physicists, chemists, engineers, and biologists, often centered at a few institutions, like Cambridge, Oxford, Zurich, Gottingen, Paris, etc. I'm just trying to figure out what was different then from now. Why is our science today no longer as fundamental? Is it that science itself has changed - where there are no longer simple laws and principles that direct the universe but a smattering of incredibly complicated, intricate, and convoluted laws that we can only hope to approximate? Or are today's scientists too captive to what is already known, cross-correlating their ideas with established literature, to actually make something really new? Or was more comprehensive knowledge expected (and feasible) back then than now? I mean, a standard chemistry PhD student nowadays can take a handful of courses, most of them focused on their field of study. Back in 1900 I expect every PhD student would have to have a very strong grasp on all major branches of chemistry.
 
Well, people weren't asking others for as much funding back then. Newton just needed a room and no fancy supplies. Also, scientists used to work alone more. It seems like it was more of a free-for-all. My guess is there were many very bogus, left-field ideas until a select few finally worked out well.

"Gentlemen scientists" made many of the novel scientific discoveries in the 1800s. They are a very eclectic bunch in general, even today's group; the average person would probably think they are weird if they met them in person. Some of today's are listed here for reference: http://en.wikipedia.org/wiki/Gentleman_scientist#Modern-day_independent_scientists

By the way, if you look on arxiv.org you can find many left-field papers from less well-known gentlemen scientists.

Seems like there is a lot of "I don't care what you think about me. I'm going to do whatever I want." Einstein in the 1900s was one of them. His science was different, and so was his personal life.

Finally, there wasn't as much accumulated knowledge back then, so (I think) it was easier to learn multiple fields.
 
I think there's no comparison between science as practiced today vs in previous eras. First of all the knowledge and technology explosions have made it totally impracticable to be a loner or generalist in the tradition of Newton or Einstein; as you say, even a hundred years ago it was much more possible to have a grasp of pretty much all the knowledge in a particular scientific field. Not so today.

More relevant to this discussion is the funding situation. In the past you needed to be independently wealthy (or have a wealthy patron) to be a scientist. Government funding has opened science up to people who actually need to work for a living; and while there are still lots of class- and race-based factors that determine who gets enough higher education to be a scientist, I think the situation we have today is a vast improvement over the gentleman-scientist model. The fact that we're even having this discussion of how funding could be better allocated takes for granted the fact that there exists a mechanism by which one can submit one's ideas for funding on the basis of their merit, as judged by other scientists (and not, say, Lorenzo de' Medici). That's fantastic compared to what we had before.
 
I think there's no comparison between science as practiced today vs in previous eras. First of all the knowledge and technology explosions have made it totally impracticable to be a loner or generalist in the tradition of Newton or Einstein; as you say, even a hundred years ago it was much more possible to have a grasp of pretty much all the knowledge in a particular scientific field. Not so today.

More relevant to this discussion is the funding situation. In the past you needed to be independently wealthy (or have a wealthy patron) to be a scientist. Government funding has opened science up to people who actually need to work for a living; and while there are still lots of class- and race-based factors that determine who gets enough higher education to be a scientist, I think the situation we have today is a vast improvement over the gentleman-scientist model. The fact that we're even having this discussion of how funding could be better allocated takes for granted the fact that there exists a mechanism by which one can submit one's ideas for funding on the basis of their merit, as judged by other scientists (and not, say, Lorenzo de' Medici). That's fantastic compared to what we had before.

I agree with you for the most part, except about Lorenzo. I really do think a bright person with no direct technical experience in the field in question may have a better sense for what to support and what not to. I've seen that our presidents are "better educated" with each generation (from people who had only a high school education or some college, they are now almost universally educated at Ivy League colleges and prestigious law schools) - and I'm not convinced that their political acumen is any better for it. I do not think competing for funding from patronage - whether that be big pharma or wealthy individual donors - would be a bad idea in addition to competing for government funding, and I think scientists are increasingly going to find that it is something they will have to do.
 