Just finished reading an article on the poor efficacy of PTSD RRTP programs in the VA in terms of PCL-5 reduction from pre- to post-treatment and at 4 mo. f/u.
It is entertaining to watch them ignore variables (obvious to front line clinicians) in the Discussion section write-up. They twisted themselves into knots trying to explain away the fact that veterans completing EBP-oriented RRTPs fared no better than those who received no EBP component...and they all performed horribly. In the philosophy of science, it is understood that adherents of a particular position/theory will raise multiple post hoc "protective belts" of hypotheses to explain away empirical findings that seriously call their pet theory into question. It was happenin' here.
Does anyone else have the experience reading articles like this and thinking that the authors are ignoring elephants in the room? Should there be requirements that at least one article reviewer be a currently-practicing full-time VA clinician?
There was another study (I think it was with PTSD veterans going through PE/CPT who had been given the MMPI-2-RF) that found that veterans with significant elevations on the infrequency (F-r) and infrequency psychopathology (Fp-r) scales actually tended to attend more sessions while being rated by their protocol therapists as significantly LESS engaged/compliant in session. My memory is a bit fuzzy, but I think that was one of the 'puzzling' findings.
Does that finding surprise any VA practitioners out there?
Edit: additional info from the study: when analyzing pre-post and 4 mo follow-up outcomes in PTSD sxs, they didn't focus on the whole sample but, rather, split the sample into three categories/groups for analysis: (1) Mild/Rebound, (2) Moderate/Rebound, (3) Severe/Stable.
What is 'rebound' in the context of a tx outcome study, you may ask? It's not tx failure/inefficacy, you see, nonono...it's (euphemistically labeled) 'rebound.' Meaning, though there may have been some pre-post reduction in PCL-5 scores at the end of an expensive, intensive, evidence-based episode of residential tx, the average pt 'rebounded' (is that a new term for 'relapsed'?) back to their pre-tx levels of symptomatology by follow-up.
And the 'Severe/Stable' group? Well, back in the olden times, we may have used such terms as 'non-responders' or even spoken of the intervention as inefficacious for this subgroup (some utter troglodytes may have even dared to speak of 'treatment failure').
"Stable." I'd love to see some data of that group's MMPI-2-RF or SIMS testing results. By the way, this 'Severe/Stable' group made up the majority of participants in the study (51.8% of non-EBP and 58.5% of the EBP treatment group). The top researchers appear to be looking in vain everywhere for the elephant in the room but are failing to check the middle of the room.
Looking at the outcome data in the study itself, it is plainly clear that these expensive, intensive residential courses of treatment simply didn't work. Yet, through the connotative sorcery of carefully chosen words/labels, the sample was split into two groups of "Rebounders" and one severely impaired group that was "Stable" in its pathology pre-tx, post-tx, and at 4 month follow-up.
Suppose the 'evidence-based' tx components hadn't been courses of well-beloved protocols like CPT/PE but, rather, some form of (to the academicians) controversial 'non-evidence-based treatment.' Do you think the authors would still have characterized the patients in the study as 'Stable' or 'Rebounders,' or do you think they would have been more critical of the presumed efficacy of the treatment approaches for the population under study in the face of clearly contradictory findings? Well, I suppose that some treatment approaches are truly 'more equal' than others in the face of empirically-confirmed failure to work.
"Treatments That Work."
Give me a break.
The emperor is buck naked, dude.