I saw this very interesting fluff article
http://www.theatlantic.com/business/archive/2014/10/why-new-ideas-fail/381275/
with an accompanying link to the scientific paper
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2478627
that carefully analyzed the outcomes of the review process for seed grant applications.
The results, which will perhaps surprise no one here who has ever applied for money, were that inter-evaluator reliability was quite poor and that there was a significant penalty for novelty, i.e. departure from the existing literature. The penalty got steeper the further a proposal strayed from themes covered in existing work, and the closer a reviewer's own work was to the proposal's area, the greater the novelty penalty that reviewer imposed.
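To make "poor inter-evaluator reliability" concrete, here is a minimal sketch of one common way to quantify it: the average pairwise correlation between reviewers' scores for the same set of proposals. The scores, number of reviewers, and 1-10 scale below are all made up for illustration, not taken from the paper.

```python
# Toy illustration (hypothetical data) of quantifying inter-evaluator
# reliability as the mean pairwise Pearson correlation between reviewers.
import numpy as np

# Hypothetical scores: rows = 3 reviewers, columns = 8 proposals (1-10 scale).
scores = np.array([
    [7, 4, 9, 5, 6, 3, 8, 5],
    [4, 6, 5, 8, 3, 7, 5, 6],
    [8, 3, 6, 4, 7, 5, 4, 9],
])

n_reviewers = scores.shape[0]
corrs = []
for i in range(n_reviewers):
    for j in range(i + 1, n_reviewers):
        # Correlation between reviewer i's and reviewer j's ratings
        r = np.corrcoef(scores[i], scores[j])[0, 1]
        corrs.append(r)

print(f"mean pairwise correlation: {np.mean(corrs):.2f}")
```

Values near zero (or negative) mean the reviewers largely disagree about which proposals are strong, which is the sort of result the paper describes.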
Is there a better way?
Or is this an irremediable consequence of human nature, and of the inherent impossibility of accurately evaluating proposals that aim to push forward the bounds of existing knowledge?