Psychological Assessment: report standard scores or percentiles?


Logic Prevails

Member
Joined Aug 19, 2005
Just thought I'd throw this question out there for those doing assessment work.

I tend to prefer reporting percentiles in a report because I think they are easier for the lay person to understand and can remain consistent across the different test results (as opposed to reporting T-scores, standard scores, etc.). That said, I've noticed that others prefer standard scores for reporting IQ and achievement, while using percentiles to report scores on behavioral measures.
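The consistency point above can be sketched in code: assuming normally distributed scores, any of the common metrics maps onto the same percentile scale through the normal CDF. The means and SDs below are the conventional ones (standard score 100/15, T-score 50/10), not taken from any particular test manual:

```python
# Sketch: converting common score metrics to percentile ranks,
# assuming normally distributed scores. The metric parameters are
# the conventional ones, not tied to any specific instrument.
from statistics import NormalDist

METRICS = {
    "standard": NormalDist(mu=100, sigma=15),  # e.g., IQ/achievement composites
    "t": NormalDist(mu=50, sigma=10),          # e.g., MMPI scales
    "z": NormalDist(mu=0, sigma=1),
}

def to_percentile(score: float, metric: str) -> float:
    """Return the percentile rank (0-100) for a score on the given metric."""
    return METRICS[metric].cdf(score) * 100

print(round(to_percentile(115, "standard"), 1))  # one SD above the mean -> 84.1
print(round(to_percentile(60, "t"), 1))          # also one SD above -> 84.1
```

Note that a standard score of 115 and a T-score of 60 land on the same percentile, which is exactly why percentiles let the reader compare across instruments.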

Is there any 'preferred practice' with regard to reporting test results?

 
You really have two goals when reporting scores within a report. The first is, as you said, to provide data that can be interpreted by the lay reader. This could be percentiles or descriptors (mild impairment, moderate impairment, etc.). The second is to include those scores that might be needed as a basis for comparison to previous or future testing. I often include raw, standard, and descriptive scores in a summary sheet for neuro evals to aid in comparison. This may not be as meaningful with, say, an MMPI, but the T-scores would be relevant.

My other pet peeve is when psychologists give multiple measures, such as achievement, language, IQ, and memory, and use only the test descriptors from the manuals. Thus, for one instrument they say "low average," for the next "below expectations," and for a third "borderline." This does not help the reader understand anything.
 
For better or worse, test administrators and the writers of evaluations seem to be losing control over what data they do and do not want to report as they rely more and more on computer programs to generate the reports/write-ups. I do not enjoy reading a report done entirely by the software package that comes with an assessment device. However, such packages certainly produce all of the numbers very quickly and in an aesthetically pleasing form. Besides, when one finds oneself using so many standard phrases in a write-up, often cut and pasted from templates, the computer report is just a small step away from the human report.

A second comment is to know for whom the report is being written: school, government agency, study, court, etc. Certain places want reports written in a specific way with specific data presented. Not presenting the data the reader needs may delay proceedings or result in the report being sent back. For example, one place I know definitely wants age equivalencies, another wants standard deviations, and a third wants the assessment results discussed in certain sections.

Having been on both sides, assessor and reader, I can see the importance of the two sides communicating with each other. In fact, from the reader side, I was recently reading psychological reports for an agency that spends an enormous amount of money paying outside professionals and agencies to do its evaluations. The reports coming back from one outside provider consistently failed to provide the information the agency needed. I brought this to the attention of supervisors, who within minutes thanked me for calling it to their attention and informed me that they would no longer give business to that particular outside agency. In short, know who is paying your bills!
 
Great feedback - I like the idea of including a summary sheet; it's something I already do for myself and for the client file, but for whatever reason I never thought to include it as an attachment to a report. I think this would be a good way to supplement the text, which I would like to keep clear (i.e., using just percentiles) and as jargon-free as possible. I guess my only worry is that someone might misinterpret something in a table.
 
Is there any 'preferred practice' with regard to reporting test results?

I've noticed a greater emphasis on attaching raw data in tabular form to reports. There doesn't seem to be a "preferred practice" for presenting data within the body of the report.

I use percentiles within the report body only to emphasize findings. I attach a very comprehensive datasheet to all reports. The datasheet contains a large amount of information and is not intended for the non-psychologist to interpret on their own. I typically do not use descriptors on the datasheet. My goal is to produce a report that the patient and referral source can easily understand and that also contains all of the necessary information for the next psychologist (or me in a follow-up assessment). I think about the datasheet as being equivalent to the technical data presented in an EEG or bloodwork report.
 
What about when someone scores an FSIQ of 155 on the WAIS? Are you going to write the exact percentile or just 99.9+?

Write the IQ and then the percentile.
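For the FSIQ 155 example in this thread, the percentile follows directly from the conventional scaling (mean 100, SD 15) and a normal model; a quick Python check:

```python
from statistics import NormalDist

# FSIQ is conventionally scaled with mean 100 and SD 15.
percentile = NormalDist(mu=100, sigma=15).cdf(155) * 100
print(f"{percentile:.2f}")  # ~99.99, i.e., beyond the 99.9th percentile
```

This is why an extreme score is often reported as ">99.9th percentile" rather than as an exact figure: the tail values are not practically distinguishable for the reader.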
 
I use percentiles within the report body only to emphasize findings. I attach a very comprehensive datasheet to all reports. The datasheet contains a large amount of information and is not intended for the non-psychologist to interpret on their own.

That's what I would do.

It provides the necessary information, while providing the raw data for a trained professional to look at if they so choose.

As for the IQ 155 thing, I don't think a percentile would hurt, though most readers will go, "Oh... 155, s/he's smart," and won't really understand what that means, which is why it is important to point out relative strengths, etc.

-t
 
Just a cautionary note, as this speaks to writing for your audience: many people do not know what a percentile is (it is too often confused with a percentage), especially when percentiles are written as "99%" rather than "99th percentile" (which is incorrect).
 