My understanding is that the scores are not released publicly (oops?)
How coy.
It's worth asking as you travel to different programs (probably after you are accepted); they have the information and should be willing to share it with you. Exceptional numbers seem to 'travel through the grapevine', as do the programs that get chewed out by the study section...
You can find the grant abstracts online through CRISP; that alone will give you a nice overview of a program. T32s are institutional training grants, a category that includes the MSTP.
They are not released publicly, and are not really meant to be shared at all (though it's not against any "rules"). The score does not constitute a ranking; it merely reflects acceptability for renewal. In fact, the scores are explicitly not designed for ranking (it's like ranking PIs in a field by their most recent R01 scores: sure, the score makes sense for the R01, but for the PI?). One could argue that the features of each program's "R01"/T32 are relatively stable and conducive to ranking, but that would misinterpret both the content of the grant renewal and the process by which the score arises. A few people will show the full document to applicants (I was shown one at U Pitt), but in general it isn't meant for consumption beyond the program administrators and faculty.
There are a few schools whose scores are high enough to extend the grant to five years, and a few whose scores in certain areas are high enough to have their number of MSTP positions increased (only one school, not Hopkins, and certainly not Harvard, had such an increase on the last renewal). The trend in the last cycle has been to hold the number of slots constant or to decrease it under NIH funding pressures. While programs may be small or large, there is a maximum number of slots funded by the NIH, with the rest supplemented by private funds (deep private funds in cases like WashU).
Like Hopkins, a few other programs have told students/applicants "highest score ever" or "#1" (Duke comes to mind from my interview season a few years ago), but these claims lack context and, in some cases, truth.
It's easy enough to make a ranking out of arbitrary categories and weights: given the number of factors to weigh when choosing a program for oneself, one could probably make post-hoc adjustments so that on the order of 15 different programs come out #1 (conjecture; see the sketch below). I'm sure some of you have done something similar in a data-driven quest for the perfect choice. Maybe for controversy's sake we'll do something like this one day.
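To make that conjecture concrete, here's a minimal Python sketch; the programs, categories, and scores are entirely invented, and the point is only that when no program dominates every category, the choice of weights, not the data, decides who is "#1":

```python
# Toy illustration (all numbers invented) of how arbitrary weights let
# almost any program come out "#1". Each hypothetical program gets
# made-up scores in a few categories; we try many random weightings
# and count how often each program tops the list.
import random

# Hypothetical programs and category scores on a 0-10 scale (fiction).
programs = {
    "Program A": {"funding": 9, "match_list": 6, "city": 4, "mentoring": 7},
    "Program B": {"funding": 6, "match_list": 9, "city": 7, "mentoring": 5},
    "Program C": {"funding": 7, "match_list": 5, "city": 9, "mentoring": 6},
    "Program D": {"funding": 5, "match_list": 7, "city": 6, "mentoring": 9},
}
categories = ["funding", "match_list", "city", "mentoring"]

winners = {name: 0 for name in programs}
random.seed(0)
for _ in range(10_000):
    # Draw a random weight vector and normalize it to sum to 1.
    raw = [random.random() for _ in categories]
    total = sum(raw)
    weights = dict(zip(categories, (w / total for w in raw)))

    # Weighted score for each program under this particular weighting.
    def score(name):
        return sum(weights[c] * programs[name][c] for c in categories)

    winners[max(programs, key=score)] += 1

for name, count in sorted(winners.items(), key=lambda kv: -kv[1]):
    print(f"{name}: #1 under {count / 100:.1f}% of random weightings")
```

Run it and every one of the four fictional programs wins under some nontrivial share of weightings; scale the idea to dozens of real factors and the "15 programs at #1" estimate stops looking far-fetched.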
In a lamentable desire to have rankings obviate personal decision-making via external validation, the search for MSTP rankings will probably continue as long as the programs do.
This might be more helpful to your decision than rankings (it's been a long time since I've read a Science paper with four bar graphs as its data):
Science, 17 February 2006: Vol. 311, No. 5763, pp. 1005–1007. DOI: 10.1126/science.1121629
On Making the Right Choice: The Deliberation-Without-Attention Effect
Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, Rick B. van Baaren
Contrary to conventional wisdom, it is not always advantageous to engage in thorough conscious deliberation before choosing. On the basis of recent insights into the characteristics of conscious and unconscious thought, we tested the hypothesis that simple choices (such as between different towels or different sets of oven mitts) indeed produce better results after conscious thought, but that choices in complex matters (such as between different houses or different cars) should be left to unconscious thought. Named the "deliberation-without-attention" hypothesis, it was confirmed in four studies on consumer choice, both in the laboratory as well as among actual shoppers, that purchases of complex products were viewed more favorably when decisions had been made in the absence of attentive deliberation.