I agree that the tiers are (somewhat) arbitrarily defined, but they do exist in reality, and I think you should only make comparisons among schools within roughly the same tier. For example, I would agree that Harvard GPAs should be considered inflated in comparison to Princeton GPAs; however, I would not agree that Harvard GPAs are inflated compared to [insert top 500 school here that is "grade deflationary"].
Dividing schools strictly into those with grade inflation and those with grade deflation is highly flawed, because grades at most schools are assigned relative to one's peer group. An "average" Harvard student (who struggles to outperform his peers at Harvard) may do very well at a lower-tier school, even if that school is "grade deflationary," since he may perform comparatively well against his peers there.
Another thing is that the "style" of questions on exams may well differ. For example, math/science exams at Harvard/Princeton may include some extra tricky/tough questions to really differentiate the best from the very good (among a peer group that is already quite intelligent).
You make a great point that I wish to elaborate upon. (Disclaimer: I'm well aware that I'm pointlessly prolonging this thread, but I felt compelled to further this point.) As I have mentioned/posted before, there are many intangibles that come into play when assessing rigor. The design of the exams themselves gets little attention, but it is equally if not more important. Some top schools design their exams (especially for weed-out prereqs) to contain fewer questions, essentially assigning more weight to each individual question. As these questions tend to be quite difficult, it is probable that even a high-achieving, intelligent student will miss at least one. Without divulging too much information, I can report that, in one of the more notorious instances at my alma mater, orgo exams consisted of three long, hard questions whose answers built upon each other. Missing one question automatically relegated you to a C. With so many bright students in the class who would likely blow out of the water any exam offering more opportunities (in this case, more questions), this was the only way to generate a bell curve.
Oftentimes, students who went to lesser-ranked institutions retort to OP's points by saying that they, too, have been challenged in their science classes. They may indeed have been, in terms of getting the right preparation to master the concepts. But they likely had exams that, in addition to two impossible questions, featured easier or moderate-difficulty questions to throw students a bone. In those instances, students were rewarded for the hard work and time they dedicated. At top schools, this often isn't the case. Unless you've got your sh*** down cold and are impeccably prepared, you may have studied quite thoroughly, but, due to the very design of the exam (i.e., the exam does not play to your strengths), you may end up with a B or worse. Further, this extends to grading policies. At my alma mater, there was no HW grade in most of the prereqs; where there was one, it amounted to some 5% of the final grade, which is negligible. You did your homework for your own sake, but you got no credit for the effort you put in. Once again, the material may have been similar in difficulty, yet the grading scheme screws you. The C+/B- curving has already been discussed, and that's another factor. My final point is that top schools employ a multitude of both subtle and not-so-subtle techniques to achieve a curve among extremely bright students, so it's truly a pity that such alarming variability in how grades are earned is not accounted for sufficiently, essentially forcing otherwise capable students to spend time and money on a PB in order to prove their worth.
In summation, the grading-style and exam-design differences I described above seem to explain several observations:
1. There is a dichotomy of students at top schools in terms of GPAs: those with magna cum laude or above, and those with <=3.2 GPAs. Those in the middle are comparatively rare. The <=3.2 students are capable folks, yet they now need to prove themselves in a PB program. I repeat my earlier response: "I don't think that a 3.2 from a top school necessarily makes an adcom judge you as incapable of handling med school, but when it is compared with a 3.7 from a state school (ceteris paribus, especially with respect to MCAT), the adcom faces the dilemma of giving the 3.2 applicant the benefit of the doubt based on considerably intangible and not readily quantifiable criteria (selection of classes, difficulty of certain professors, etc.). The sad truth for OP, and for many in OP's predicament, is that med school admissions is simply not about giving one a leg up; they'd rather you spend an extra year or two solidifying and showcasing your credentials in a PB program than offer you the seat of someone with a substantially higher GPA, even though the latter may or may not be as capable as you. While OP is entitled (see what I did there?) to feel frustrated at such unfairness, all is not lost, since they will probably crush PB and go on to get an MD degree nevertheless. In the end, these extra 1.5 years won't be a huge setback. Best of luck, OP."
2. Given the harsh grading policies at top undergrads, the exceptionally gifted peer group, and the stringent curving, top med schools know that someone who succeeded in such an environment is truly worthy of leading the medical profession. It is no wonder that the top 25 med schools are almost entirely filled with students who attended top undergrads.
This is what I've gathered over the years from my own observations and from discussions with friends who have attended institutions at both ends of the spectrum. Of course, for everything I said there are exceptions, and my theory necessarily makes assumptions, but this is how I interpret (and justify) OP's outrage with the system and his/her subsequent diatribe.