They're biased precisely because they're peer reviewed. These aren't objective measures: each reviewer completes the survey according to their own current knowledge, however accurate or misconstrued that knowledge may be. There's no panel of randomly chosen individuals grading programs with a survey, or a tool, that has been tested for validity and reliability. That's why it's biased.
Even though these reviewers may come from different schools and have worked at different institutions, they still carry biases: giving one school a better ranking for no real reason, or giving another a lower ranking simply because they've heard some "bad" things about it, and so on.
If the process were conducted more like CAPTE accreditation, with standardized criteria and site review, these rankings would be more credible. But doing so would be extremely time-consuming, and not really all that necessary.