How accurate and useful are the percentiles?


pearlywhites32

I just took my DAT and received a 21AA, which had a 97.2 percentile. Is this number (the percentile) going to remain constant or will it change over time? Do dental schools actually see the percentile?
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!
 
The schools don't see the percentile for your test. It's more for your own purposes. I honestly don't know why they even bother to show them, since it just upsets people who feel they got a much harder version.

The schools do have a good idea of what the percentiles are, though. Even if they don't, the info is freely available for them to look up.
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!

You have it backwards. If he had an easier test, there would be more people getting that same 21, and thus the percentile would be lower, not higher. He scored in the 97.2nd percentile with only a 21 vs. a 22. This means he had a harder test.

The schools don't see the percentile for your test. It's more for your own purposes. I honestly don't know why they even bother to show them, since it just upsets people who feel they got a much harder version.

The schools do have a good idea of what the percentiles are, though. Even if they don't, the info is freely available for them to look up.

The problem is that there are different versions of the test, so each person can have a different percentile with the same AA. I am not sure how a DS could look up someone's DAT percentiles... Also, I like having the percentiles there. It makes for a much easier comparison of scores.
 
You have it backwards. If he had an easier test, there would be more people getting that same 21, and thus the percentile would be lower, not higher. He scored in the 97.2nd percentile with only a 21 vs. a 22. This means he had a harder test.

You're right, I was looking at it from this standpoint:

A: 22 score = 99 percentile
B: 22 score = 90 percentile

In A, 99 percentile is a top 1% score, which means only 1% scored 22 or better.

In B, 90 percentile is a top 10% score, which means 10% scored 22 or better.

As for the OP, he/she probably would have scored a 22 if it were an easier version.
 
You have it backwards. If he had an easier test, there would be more people getting that same 21, and thus the percentile would be lower, not higher. He scored in the 97.2nd percentile with only a 21 vs. a 22. This means he had a harder test.

No. YOUUUU have it backwards.

It might be hard to believe, but if two people receive the same score, the higher the percentile, the easier the test was. Standardizing adjusts for test difficulty relative to the other test versions.

Let's say person A takes a community college course, gets an A, and beats 95% of their classmates.

Then person B takes a university course, gets a B+, and beats 80% of their classmates.

For the one, you're in the 95th percentile, but the course was easier.
For the other, you're in the 80th percentile, but the course was harder.

When you standardize, the numbers come closer together, benefiting person B more.
 
The problem is that there are different versions of the test, so each person can have a different percentile with the same AA. I am not sure how a DS could look up someone's DAT percentiles... Also, I like having the percentiles there. It makes for a much easier comparison of scores.

I never said they'd get an exact percentile, just a rough idea.

Let's say you have a 23 AA. If there are 13K test takers and only 1,300 have a 23 or higher, you can estimate that person is in the 90th percentile.

I'm sure that with years of applicants they don't even need to bother estimating with stats. They know it. It's their job, after all. Some numbers make it easier for them, like 27+.
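Just to make that arithmetic concrete, here is the back-of-the-envelope version in a few lines of Python (the 13,000 and 1,300 figures are just the hypothetical ones from above):

```python
# Rough percentile estimate from score counts (numbers are hypothetical).
total_takers = 13_000
at_or_above_23 = 1_300

top_fraction = at_or_above_23 / total_takers      # 0.10 -> top 10%
approx_percentile = 100 * (1 - top_fraction)      # ~90th percentile

print(f"A 23 AA is roughly the {approx_percentile:.0f}th percentile")
```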
 
No. YOUUUU have it backwards.

It might be hard to believe, but if two people receive the same score, the higher the percentile, the easier the test was. Standardizing adjusts for test difficulty relative to the other test versions.

Let's say person A takes a community college course, gets an A, and beats 95% of their classmates.

Then person B takes a university course, gets a B+, and beats 80% of their classmates.

For the one, you're in the 95th percentile, but the course was easier.
For the other, you're in the 80th percentile, but the course was harder.

When you standardize, the numbers come closer together, benefiting person B more.

Hmmmm, OK. The only thing confusing me is that there is only one testing pool, whereas your example has two different testing pools (the two separate schools). The reason I bring this up is that I was under the impression that being placed in the 97th percentile meant he was in the 97th percentile of all DAT test takers that year. Furthermore, the distribution of DAT test takers is set, whereas the "difficulty" of each test version is unique.

However, if the same test pool is used, the percentile rank would indicate your performance relative to that pool regardless of the difficulty of the version taken (hence the standardized test). I am under the impression that the higher your percentile, the higher you stand in the test pool. Am I incorrect here? I guess it also depends on what algorithm is used to determine the standard score... Thanks for the insight though 👍


I never said they'd get an exact percentile, just a rough idea.

Let's say you have a 23 AA. If there are 13K test takers and only 1,300 have a 23 or higher, you can estimate that person is in the 90th percentile.

I'm sure that with years of applicants they don't even need to bother estimating with stats. They know it. It's their job, after all. Some numbers make it easier for them, like 27+.

I see what you mean now. Yes, I agree that schools would know the general trends of the scoring system lol 😀
 
No. YOUUUU have it backwards.

It might be hard to believe, but if two people receive the same score, the higher the percentile, the easier the test was. Standardizing adjusts for test difficulty relative to the other test versions.

Let's say person A takes a community college course, gets an A, and beats 95% of their classmates.

Then person B takes a university course, gets a B+, and beats 80% of their classmates.

For the one, you're in the 95th percentile, but the course was easier.
For the other, you're in the 80th percentile, but the course was harder.

When you standardize, the numbers come closer together, benefiting person B more.
I don't know why, but after a long day of studying that was incredibly interesting and refreshing lol. No sarcasm. My thinking was along the lines of Bereno's as well. Interesting perspective.
 
Your percentile is the fraction of people who took the same version of the test as you that you did better than. I don't know if every copy of a given version has the same questions, or if this is computed based on the sum-total difficulty of the particular set of questions you got, but the point is that this number is specific to what you saw on the test.

The AA is standardized across all test takers.

I know that the percentile refers to the specific exam that you took. However, the applicant pool is still the same. Therefore, regardless of the difficulty, it would seem that if he got a 97.2 percentile, he would be in the top 2.8% of those who took that test. Assuming the test pool is large enough, this would be representative of the DAT population. I'm just curious if someone has any auxiliary information that could shed some light on the algorithm used to determine the standard score... 😀

Here is my thinking:

Test A: 21 93rd percentile
Test B: 21 97th percentile

Test B would be more difficult because, out of the SAME pool of test takers, only 3% were able to pull a 21 or better, whereas on test A, 7% were able to. The real assumption here is that the test group for A and the test group for B would be effectively identical (statistically speaking) because they come from the same pool of test takers.

However, this train of thought could be debunked depending on how the standard score of 21 is determined...
 
I know that the percentile refers to the specific exam that you took. However, the applicant pool is still the same. Therefore, regardless of the difficulty, it would seem that if he got a 97.2 percentile, he would be in the top 2.8% of those who took that test. Assuming the test pool is large enough, this would be representative of the DAT population. I'm just curious if someone has any auxiliary information that could shed some light on the algorithm used to determine the standard score... 😀

Here is my thinking:

Test A: 21 93rd percentile
Test B: 21 97th percentile

Test B would be more difficult because, out of the SAME pool of test takers, only 3% were able to pull a 21 or better, whereas on test A, 7% were able to. The real assumption here is that the test group for A and the test group for B would be effectively identical (statistically speaking) because they come from the same pool of test takers.

However, this train of thought could be debunked depending on how the standard score of 21 is determined...

Touché.

Here is my thinking:

Test A: 21 93rd percentile
Test B: 21 97th percentile

Test A would be more difficult because, despite performing better than only 93% of the test-taking population, student A still received a standardized 21...meaning that because the test version was harder, the standardizing process gave him/her a slight bump-up to compensate.

Test B would be easier because, conversely, despite performing better than 97% of the test-taking population, the standardizing process gave student B a slight bump-down to compensate for the easier test version.

Where we differ is that you think percentiles are based on the standardized score (which is true in the end-of-year reports, where we don't see the breakdown-to-breakdown variance because only one value is reported), whereas I think percentiles on individual score reports are based on raw scores across all test takers, not version-specific as I thought before you pointed this out (and hence the variance we see in breakdowns prior to standardization).

As you aptly pointed out, the applicant pools are the same (or equivalent) across test versions. The fact that we see percentile variance from individual breakdown to breakdown is good support for my argument and against yours.
 
From the breakdowns I've seen this year, there were some posters who described my exact feelings about the test, and when I looked at their percentiles, they matched mine, so I have a theory that each test version (from the sciences all the way to the QR) is the same.

The DAT user manual states that only 75 of the 90 PAT questions are scored, while the other 15 are used to gauge their difficulty for future test takers. It didn't say whether they do this with the other sections, but I wouldn't be surprised if they did.

So 'test difficulty' would be the sum of the 'question difficulties' determined from past test takers. The 'test difficulty' would help them standardize the test against other versions. How they do the standardization is top secret... probably to avoid criticism from the majority of predents who know jack **** about probabilities lol.
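If that's right, the crude version of the bookkeeping might look something like this (the p-values are completely made up, since the ADA doesn't publish any of this):

```python
# Treat each question's difficulty as the fraction of past test takers who
# answered it correctly during pretesting (its "p-value"), then summarize a
# version's overall difficulty from its items. All numbers are invented.
version_a_pvalues = [0.85, 0.78, 0.90, 0.66, 0.72]   # easier items on average
version_b_pvalues = [0.71, 0.64, 0.80, 0.55, 0.60]   # harder items on average

def mean_p_value(pvalues):
    # Lower mean p-value -> harder version.
    return sum(pvalues) / len(pvalues)

print("Version A mean p-value:", round(mean_p_value(version_a_pvalues), 2))
print("Version B mean p-value:", round(mean_p_value(version_b_pvalues), 2))
# A version with a lower mean p-value would presumably get a more generous
# raw-to-standard conversion so the standard scores stay comparable.
```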
 
Touché.

Here is my thinking:

Test A: 21 93rd percentile
Test B: 21 97th percentile

Test A would be more difficult because, despite performing better than only 93% of the test-taking population, student A still received a standardized 21...meaning that because the test version was harder, the standardizing process gave him/her a slight bump-up to compensate.

Test B would be easier because, conversely, despite performing better than 97% of the test-taking population, the standardizing process gave student B a slight bump-down to compensate for the easier test version.

Where we differ is that you think percentiles are based on the standardized score (which is true in the end-of-year reports, where we don't see the breakdown-to-breakdown variance because only one value is reported), whereas I think percentiles on individual score reports are based on raw scores across all test takers, not version-specific as I thought before you pointed this out (and hence the variance we see in breakdowns prior to standardization).

As you aptly pointed out, the applicant pools are the same (or equivalent) across test versions. The fact that we see percentile variance from individual breakdown to breakdown is good support for my argument and against yours.

I do not think percentiles are based on the standard score. Quite the opposite, actually. I am under the impression that percentiles are based on the relative number of students who got X number of questions right on that particular test, per 100 test takers. As I mentioned in my previous posts, I think the debate stems from how the standard score is determined.

Here is my thinking again:

Test A: 21 93rd percentile
Test B: 21 97th percentile

IF a 21 is determined solely by the number of questions correct on that particular test (i.e., 80% correct is a 20, 82% correct is a 21, etc.), then test B is more difficult. The reason is that only 3% would have earned a 21 or higher. This could also work if some sort of standard distribution was established to determine the mean and standard deviation of the test in question prior to its implementation. (My argument.)

IF a 21 is based on an unknown algorithm that attempts to determine the "difficulty" of a test before it has been taken, THEN there is no way to determine the difficulty of a test solely from its percentiles.
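Here's a quick sketch of that first IF with completely made-up numbers, just to show how a fixed percent-correct cutoff plays out on an easier vs. harder version given the exact same pool:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same pool of test takers, two versions. Under the first IF above, the line
# for a 21 is a fixed percent correct on every version, with no adjustment.
ability = rng.normal(loc=70, scale=10, size=20_000)   # "true" percent correct
easy_version = np.clip(ability + 5, 0, 100)           # easier version: +5 points
hard_version = np.clip(ability, 0, 100)               # harder version: no bonus

cutoff_for_21 = 82.0   # the hypothetical "82% correct = 21" line

for name, scores in [("easy", easy_version), ("hard", hard_version)]:
    share_21_plus = np.mean(scores >= cutoff_for_21)
    print(f"{name} version: {100 * share_21_plus:.1f}% reach a 21 or better, "
          f"so a 21 sits near the {100 * (1 - share_21_plus):.0f}th percentile")
# Under this assumption, the harder version leaves fewer people at 21 or
# above, so the same 21 carries a higher percentile there.
```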

The real stickler for me is that a percentile is the ONLY common ground for comparison among DAT test takers. I think we can agree that a 21 on test A is not the same as a 21 on test B (though we would like them to be). It would be nice to think that standard scores on the DAT are all equivalent across versions, but I don't think that is the case.

I would argue that someone in the 95th percentile is equivalent to someone else in the 95th percentile regardless of their standard score, because they come from the same applicant pool and are therefore stratified accordingly. It seems rather analogous to a mean and standard deviation problem: you can have two different means on two different tests, but if you score in the top 3% on either one, you are still in the top 3% regardless of what your standard score was...

Interesting way to look at it though! I like the conversation 🙂 👍

From the breakdowns I've seen this year, there were some posters who described my exact feelings about the test, and when I looked at their percentiles, they matched mine, so I have a theory that each test version (from the sciences all the way to the QR) is the same.

The DAT user manual states that only 75 of the 90 PAT questions are scored, while the other 15 are used to gauge their difficulty for future test takers. It didn't say whether they do this with the other sections, but I wouldn't be surprised if they did.

So 'test difficulty' would be the sum of the 'question difficulties' determined from past test takers. The 'test difficulty' would help them standardize the test against other versions. How they do the standardization is top secret... probably to avoid criticism from the majority of predents who know jack **** about probabilities lol.

I think they try to determine the difficulty prior to implementation and consequently decide which raw score maps to which standard score based on historical performance, like you said. Good point! 😀
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!

Actually, it was the harder version. A 97th percentile at 21 means that 3% of test takers got above a 21. If it were at 22, then 3% of testers got over 22 and more got over 21.

Oops, I didn't see all the comments following. Sorry about the redundant post.
 
I just took my DAT and received a 21AA, which had a 97.2 percentile. Is this number (the percentile) going to remain constant or will it change over time? Do dental schools actually see the percentile?
We might have had the SAME exact test version...
My 21 was 97.2 (if I remember correctly).

The percentiles are based only on exam version. Most 21s I've seen are in the 91-93% range.

And no, schools do not have access to the percentiles, only the scores themselves.
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!

It's the other way around...
A 97th percentile on a 21 AA reflects a "hard" test version. Think about it: only 3% of that test's population scored above a 21 AA...

On the other hand, since most 21s are in the 91-93% range, that means that for many test versions, 7-9% of the test's population is scoring above a 21 AA.
 
This makes me so mad. My 20 was the 93.4 percentile. If I had gotten an easier test version, maybe I could've broken 21, which is all D-schools see. They have no clue how hard or easy the test taker's version was ;/.
 
This makes me so mad. My 20 was the 93.4 percentile. If I had gotten an easier test version, maybe I could've broken 21, which is all D-schools see. They have no clue how hard or easy the test taker's version was ;/.

This is why I think showing the percentiles is lame on the ADA's part. It undoubtedly brings up feelings of unfairness.

Even though, if you think about it, someone who got an easier version likely would have had to answer more questions correctly to earn a standard score of 20 than someone with a harder version. One wrong question on an easier version might drop them from a 28 to a 24 or something.
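Something like this, with totally invented conversion tables (the real ones aren't published):

```python
# Hypothetical raw-to-standard conversions for a 40-question section,
# invented purely to illustrate the point above.
easy_version = {40: 28, 39: 24, 38: 22, 37: 21}   # steep near the top
hard_version = {40: 28, 39: 27, 38: 26, 37: 25}   # forgiving near the top

raw = 39  # one question wrong out of 40
print("easy version, one wrong:", easy_version[raw])   # drops 28 -> 24
print("hard version, one wrong:", hard_version[raw])   # drops 28 -> 27
```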
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!

Idk what you're talking about, because a 97th percentile 21 means that version was harder than the one behind a 97th percentile 22.
 
We might have had the SAME exact test version...
My 21 was 97.2 (if I remember correctly).

The percentiles are based only on exam version. Most 21s I've seen are in the 91-93% range.

And no, schools do not have access to the percentiles, only the scores themselves.

Yes! We had the same exact test (mine was also a 97.2). I'm pretty annoyed now haha. That was a hard test, and we could've gotten a bit higher on an easier version.
 
Yes! We had the same exact test (mine was also a 97.2). I'm pretty annoyed now haha. That was a hard test, and we could've gotten a bit higher on an easier version.

It won't make that much difference. Had it been a different test version, you might have scored a 22 or maybe even a 23... The difference between a 21, a 22, and a 23 isn't that much; they are all high scores.
 
Lucky for you. You had a slightly easier version. I remember some people needing to score a 22 to hit the 97th percentile. Good thing is that schools don't see the percentiles. I hope I get your version too lol!

That means the version was harder... fewer people got a 21 or higher on that version...
 
This makes me so mad. My 20 was the 93.4 percentile. If I had gotten an easier test version, maybe I could've broken 21, which is all D-schools see. They have no clue how hard or easy the test taker's version was ;/.

You do not have to worry about how hard your particular test is; it does not make a difference. The scores are standardized, and a lot of effort goes into the standardization of the scores (psychometrics).

This is my basic understanding of what they do: all the test questions that are used have been pretested, and statistical data has been collected on each one so they know how hard the questions are. This way they can design tests of equal difficulty. After the test is approved and actual students take the test form, they collect more data on it to see how the form actually performs (the equating questions). Based on this data, they standardize your score so everyone is on the same measurement scale. So the standard score that you receive is all the information a dental school needs; they don't need to look at percentiles.



"Each test includes equating and pretest questions. The purpose of the equating questions is to form a link among collections of items, so that examinee's standard scores can be placed on the same measurement scale. Because of these equating questions, examinee's scores have the same meaning regardless of the test they were administered. Unscored pretest questions are included on the test in order to gather information. This information is used in the test construction process to insure that these questions are appropriate before they are included among the scored items. "
Source: DENTAL ADMISSION TEST (DAT) 2011 PROGRAM GUIDE

"Pre-equating is a statistical method used to adjust for minor fluctuations in the difficulty of different test forms so that a test taker is neither advantaged nor disadvantaged by the particular form that is given"
Source: Policies and Procedures Governing Challenges to Law School Admission Test Questions

"Although a great deal of effort is placed on assembling comparable tests, forms will tend to vary somewhat in terms of their statistical characteristics. Hence, scores must be transformed in order to enable direct comparisons across forms. The process by which scores are adjusted so as to make them comparable to each other is referred to as equating. The Law School Admission Council (LSAC) employs item response theory (IRT) true-score equating to equate the LSAT."
Source: Assessing the Effect of Multidimensionality on LSAT Equating for Subgroups of Test Takers
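For anyone curious what "placing scores on the same measurement scale" can look like in the simplest possible case, here is a toy linear equating under an equivalent-groups assumption. The ADA doesn't publish its procedure, and serious programs use fancier IRT-based equating (as in the LSAT quote above), so treat this purely as an illustration with invented numbers:

```python
import numpy as np

# Two comparable groups each sit a different form. Rescale the new form's raw
# scores so their mean and spread line up with the reference form's.
ref_form_raws = np.array([52, 60, 47, 71, 58, 64, 55, 68, 61, 50])
new_form_raws = np.array([48, 55, 43, 66, 53, 59, 50, 63, 56, 45])  # harder form

a = ref_form_raws.std(ddof=1) / new_form_raws.std(ddof=1)
b = ref_form_raws.mean() - a * new_form_raws.mean()

def to_reference_scale(raw_on_new_form):
    """Place a raw score from the new (harder) form on the reference scale."""
    return a * raw_on_new_form + b

# A raw 60 on the harder form is "worth" a bit more once equated:
print(round(to_reference_scale(60), 1))
```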
 
You do not have to worry about how hard your particular test is; it does not make a difference. The scores are standardized, and a lot of effort goes into the standardization of the scores (psychometrics).

This is my basic understanding of what they do: all the test questions that are used have been pretested, and statistical data has been collected on each one so they know how hard the questions are. This way they can design tests of equal difficulty. After the test is approved and actual students take the test form, they collect more data on it to see how the form actually performs (the equating questions). Based on this data, they standardize your score so everyone is on the same measurement scale. So the standard score that you receive is all the information a dental school needs; they don't need to look at percentiles.

Stop bringing reasoning and evidence into this discussion.
I already sang the same song, but people refuse to believe it.

A 97th percentile 23 should be considered the same as a 97th percentile 21.
Darn ADA and standardization. 😛
 
Stop bringing reasoning and evidence into this discussion.
I already sang the same song, but people refuse to believe it.

A 97th percentile 23 should be considered the same as a 97th percentile 21.
Darn ADA and standardization. 😛

What is the percentile referring to, does anyone know? Let's say it's referring to other students who took the same test form. So take two forms, one given in January and one given in May. For some obscure reason, January test takers just happen to be very good at chemistry, and May test takers just happen to be very bad at chemistry. One student takes the test in January, scores a 23, and is in the 97th percentile. Another student takes the test in May, scores a 21, and is in the 97th percentile. Should both students have the same score? No, the student in May was compared against students with less ability than the student who took the test in January.

This is why the percentile doesn't matter.
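A tiny simulation along those lines (all numbers invented): fix the person's ability, and the equated standard score stays put while the percentile moves with whoever happened to test that month.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ability distributions for the two administrations.
january_cohort = rng.normal(loc=19.0, scale=2.5, size=10_000)  # stronger group
may_cohort     = rng.normal(loc=17.5, scale=2.5, size=10_000)  # weaker group

student_ability = 21.0  # same person, same equated standard score either month

for month, cohort in [("January", january_cohort), ("May", may_cohort)]:
    percentile = 100 * np.mean(cohort < student_ability)
    print(f"{month}: standard score ~21, percentile ~{percentile:.1f}")
# Same ability, same standard score, different percentile -- which is why the
# percentile isn't the number worth comparing across versions.
```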
 