PsyD vs PhD Dissertation?

I found that helpful, Ollie. Stats training in my doctoral program was abysmal. I know there's a competing thread on this, but I'd be interested in hearing from the folks here who made/responded to the stats critique what particular book(s) they'd recommend in order to "catch up" on stats/research design.

For multivariate, Tabachnick & Fidell's "Using Multivariate Statistics" (currently in its 5th edition, I believe) is one of the gold standards. It also has a few chapters at the beginning to catch you up on the basics of univariate analyses and other factors that are important in using and understanding multivariate stats, although that's of course no substitute for a good intermediate-level textbook.

I don't remember the name or authors of my intermediate book, but I'll edit this post with the info once I get home.
 
I have an agenda, so I'm going to hijack this thread for a minute and ask about this. How would you describe a lack of competency in basic research and statistics? What is a basic level of statistical/research knowledge--are we talking knowing only t-tests, only knowing mean/median/mode, or something more like ANOVA or regression and nothing more advanced than that? My agenda is to make sure that *I* am statistically competent, so I want to know what I should strive for. I've heard people scoff at theses that "just use t-tests," so that's what comes to mind when I think "basic statistics." But maybe it's more than that? Maybe ANOVAs and regressions are basic, too? That's the level I'm at, and I want more, so I want to know if that's appropriate 😳.

I'd second what T4C and Ollie said in response.

This student (from a FSPS) had already completed their dissertation project and couldn't create a z-score (and didn't seem to understand what it was), let alone conduct a basic ANOVA to compare a few groups and do post-hocs. That is grad school stats 101 kind of stuff.
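
(If anyone wants a concrete picture of that bar, here's a minimal sketch - Python with scipy/statsmodels rather than SPSS, and the group data are invented - of a z-score, an omnibus one-way ANOVA, and Tukey post-hocs:)

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical scores for three groups (the "compare a few groups" case).
rng = np.random.default_rng(0)
a = rng.normal(50, 10, 30)
b = rng.normal(55, 10, 30)
c = rng.normal(60, 10, 30)

# A z-score is just (score - mean) / SD.
z_a = (a - a.mean()) / a.std(ddof=1)
print("First few z-scores for group A:", np.round(z_a[:3], 2))

# Omnibus one-way ANOVA, then Tukey HSD post-hocs to see which groups differ.
F, p = stats.f_oneway(a, b, c)
print(f"F = {F:.2f}, p = {p:.4f}")
print(pairwise_tukeyhsd(np.concatenate([a, b, c]),
                        ["A"] * 30 + ["B"] * 30 + ["C"] * 30))
```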

I think setting a standard for what should/shouldn't be a competency ought to be done by APA (isn't it already?). Even with a Psy.D., you've got to be able to be a good consumer of the literature. IMO, you can't really do that without first doing some matrix algebra before moving on to the point-and-click multivariate stuff, for example. Anyone can learn step-by-step SPSS procedures. Real statistics is about conceptual understanding. I notice a tendency among undergraduates and some of the FSPS students I have come across to just take whatever they see and accept it, without knowing HOW to be a critical consumer. You can't really critique research methods or statistics without having done a little bit of it yourself.

I'd consider regression to be basic as well. I wish everyone were required to take multivariate, factor analysis, HLM, and SEM, but my head is in the clouds.

wigflip, I'd recommend a class or workshop if your stats training was lacking. At least for me, the interactions and discussions were the most helpful part, with books as a supplement. I still refer to my Tabachnick and Fidell when I am running a multivariate analysis I haven't done in a while. If you use M-Plus, their discussion boards are great.

Edit: Haha AA and I think alike
 

👍

And *shudder* @ the matrix algebra comment. Not that it was particularly horrible or that we spent an incomprehensible amount of time on the topic (maybe 1.5 of our once-weekly seminar classes), but something about it just wasn't in my comfort zone. Then again, math in general is often outside my comfort zone, so I probably shouldn't have been surprised.
 
Thanks, folks, much appreciated. Happy to take the intermediate-level book recommendation as well when that surfaces. Sadly, interactions and discussions were not remotely helpful in my version of the classes, hence my book query...
 
So, OP*, does that answer your question?

*noticeably absent

Haha, I forgot to check back on the forum and MAN did I bring up the right (or wrong) topic!
Anyways, I guess I just wanted to get second opinions from other people about the issue, because I am pretty much split between the PsyD and the PhD, and it didn't make sense to go into a PsyD that has fewer opportunities (teaching/research) than a PhD if the PsyD dissertation is just as difficult. Also, if it's true that the PhD has evolved to have more of a clinical component (or just as much as the PsyD), wouldn't that defeat the whole purpose of the PsyD degree, even if it takes an extra year?

And one last quick question... do PhD programs require multiple publications besides the dissertation?

Sorry if I sound like I don't like research or something... it's just that I haven't had any real experience in research (I'm planning to find some next semester) and I just don't like writing papers!
 
I'm not at all telling you not to apply (and I'm afraid I can't weigh in on the whole PhD vs. PsyD debate), but if you really don't like writing papers I think you would be pretty miserable doing a PhD. From what I understand, there's a lot of paper-writing in grad school!
 
Wish I could make specific book recommendations, but honestly I feel like it's more about spending time reading the primary literature, critiques, etc. The Dead Salmon paper is a great example - it was a far better description/analysis of a common statistical issue than any stats text I've ever seen has given. Most texts go into "more" detail than I feel is necessary for someone going into practice. Like I said above, I don't really care if someone remembers the formula to compute pooled standard error - I love this stuff and I usually have to look those things up. On the other hand, knowing why a meta-analysis that used a fixed-effect method should (in most cases) be viewed with suspicion is exactly the kind of understanding that matters.
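
To make the fixed-effect point concrete, here's a rough sketch with invented effect sizes and variances, using inverse-variance weights for the fixed-effect pool and a DerSimonian-Laird estimate for the random-effects version - the two can diverge quite a bit once the studies are heterogeneous:

```python
import numpy as np

# Invented per-study standardized effects and sampling variances.
d = np.array([0.2, 0.5, 0.8, 0.1])
v = np.array([0.02, 0.05, 0.04, 0.01])

# Fixed-effect: inverse-variance weights; assumes ONE true effect.
w = 1 / v
fixed = (w * d).sum() / w.sum()

# Random-effects: add DerSimonian-Laird between-study variance (tau^2).
Q = (w * (d - fixed) ** 2).sum()
C = w.sum() - (w ** 2).sum() / w.sum()
tau2 = max(0.0, (Q - (len(d) - 1)) / C)
w_re = 1 / (v + tau2)
random_eff = (w_re * d).sum() / w_re.sum()

print(f"fixed-effect pooled d:   {fixed:.3f}")
print(f"random-effects pooled d: {random_eff:.3f} (tau^2 = {tau2:.3f})")
```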

I guess if I had to put one out there, I'd suggest the "Reading and Understanding Multivariate Statistics" series - it certainly isn't enough for someone wanting to conduct analyses, but it gives short, concise overviews of major techniques and seems geared towards those who want to be "informed consumers". Mostly though, I think it's important to just read papers and think about statistical issues. The Sternberg paper that came up in the GRE thread is a great example. I worry significantly about anyone in the field who doesn't understand the problems with that paper. Same thing with the dodo bird papers, the Shedler article, etc. All of these papers were valuable contributions to the literature, but there are an astonishing number of people who seem to read the abstracts and take everything at face value. I expect that from UGs, but I wish I didn't have to for professionals.
 
The "Little Jiffy" paper is also a good read when it comes to the inappropriate use of PCA. It provides a good reason for why a lot of measurement tools out there have overinflated factor loadings, etc, in large part due to the bunk statistical procedures that were prevalent in clinical research (and many still use despite plenty of reason not to).

Another reason why you should know your stats? Because clinicians are often not good at them, and publish papers in clinical journals with subpar stats, reviewed by clinicians who also are not good at them. It is an area where, as T4C mentioned earlier, everyone could use some higher minimal standards.

Edit: paper link added for those looking for resources

http://www.biostat.jhsph.edu/~jfeder/Journal_Club_files/Understanding%20Statistics%202003%20Preacher.pdf
 
All grad students should take a course mastering Tatsuoka, 1971.

(I kid, I kid).
 
While t-tests/ANOVA may in fact be perfectly fine for your own data if you're doing purely experimental research, I too would argue that you need a much higher standard than that to be an informed consumer of research. Effect sizes, nonparametric tests, ways of dealing with 'misbehaved' data (transformations, trimming, etc), multivariate analysis, ANCOVA, some basic modelling, as well as the basic assumptions of each test are good to know. Plus validity and reliability, as mentioned. It's not just knowing the tests, but knowing when they're appropriate or not, and also if the appropriate significance level is being used (e.g. for multiple comparisons), etc.
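
For instance, here's a toy version of two of those pieces - reporting an effect size alongside the p-value, and applying a Holm correction when several outcomes were tested (all numbers invented):

```python
import numpy as np
from scipy import stats

# Invented treatment/control scores.
rng = np.random.default_rng(2)
control = rng.normal(0.0, 1.0, 40)
treated = rng.normal(0.5, 1.0, 40)

# Report an effect size, not just the p-value.
t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd
print(f"p = {p:.3f}, Cohen's d = {d:.2f}")

# Holm step-down correction across 10 (invented) outcome p-values:
# compare the i-th smallest p to alpha / (m - i), stop at the first miss.
pvals = [0.001, 0.004, 0.012, 0.020, 0.030, 0.041, 0.09, 0.12, 0.15, 0.20]
alpha, m = 0.05, len(pvals)
for i, pv in enumerate(sorted(pvals)):
    if pv > alpha / (m - i):
        print(f"Holm: reject the {i} smallest p-values, stop at p = {pv}")
        break
```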
 
You become good at it even if you don't enjoy it; working on a Ph.D. is largely about becoming more efficient as a writer.

🙂

Mark
 
Oh man, I can't imagine going through a PhD program loathing writing. It would be painful...

+1. I actually enjoy writing, and even so, it wears on me now and again. Although keeping a variety of materials to switch between (and by "variety," I pretty much mean journal articles and assessment reports) can help.
 
Or even different journal articles, or different types of journal articles.
 
I can't speak for Pragma, but for me "basic competency" is less about running the analyses and more about being able to read (and understand!) analyses. If someone is planning on going 100% clinical, that is what I think the goal should be, and I think that was the original intent of the Vail model (though many schools have obviously pushed that to the point of handing out doctorates to people with undergrad-level knowledge).

t-tests and mean/median/mode are the bare, bare basics - I'd worry if someone got ADMITTED to grad school without knowing those; it's shameful when UGs don't know them. ANOVA/regression I would also place in the "basics" category, as most other techniques are based on these. I'd go a little further than that before declaring someone has "basic competency" in stats. Some things (post-hocs, power analysis, etc.) are important to understand. This is a personal belief, but I think that especially for folks planning on clinical careers, meta-analysis is an absolute necessity. Again, I'm not talking about knowing how to compute power for a meta-analysis of imaging data with nothing but a calculator, but given its role in EBP I think a basic foundation (knowledge of common effect size measures, how to catch common flaws, etc.) is crucial.

Basically - I'd say it's sufficient when one can read the bulk of the literature in top-tier journals and "get" the results section, at least noting any major flaws (e.g., why are they running an ANOVA when they have a categorical DV?), even if some of the more nuanced issues elude them.
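
(To illustrate that last flag: with a binary DV you'd reach for logistic regression rather than ANOVA - a minimal sketch with simulated data:)

```python
import numpy as np
import statsmodels.api as sm

# Simulated binary DV (e.g., remission yes/no) and one predictor.
rng = np.random.default_rng(3)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x - 0.2))))

res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print("Coefficients:", np.round(res.params, 2))
print("Odds ratios: ", np.round(np.exp(res.params), 2))
```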

After reading through this thread, I think the discussion on dissertations, while warranted and interesting, misses the mark. However, I think Ollie captures the importance of the issue at hand.

From a professional development standpoint, the completion of a dissertation can serve a variety of purposes. We need to consider long-term career goals.

For someone interested in a career in academia, a thesis and/or dissertation can serve as a springboard to posters, oral presentations, and publications. One's level of productivity, ingenuity in applying experimental methods, and originality of research program are essential aspects of building an academic career. Consequently, I would suspect that individuals with these particular goals would pursue a much more extensive and nuanced understanding of clinical science. Greater effort in studying experimental methodology, basic and complex statistics, and basic psychological science would all be critical components. The nature, depth, and complexity of their dissertations should reflect their goals. Would everyone agree?

For one interested in a career in clinical practice, a thesis and/or dissertation may serve entirely different goals. To be specific, developing the skills to identify research relevant to one's scope of practice, comprehend the findings of such work, and critically evaluate the findings and methodology can be seen as much more desirable than publications. From this point of view, the dissertation provides an opportunity to critically evaluate the literature, study and apply experimental methodology, implement statistical analyses, interpret data, and highlight how the results fit into the broader context of our field. In this light, the dissertation promotes the acquisition of knowledge in these areas through guided experiential learning, whereby a student applies basic skills in an effort to understand the research process. Within the Vail framework, this truly helps one develop competency in the consumption of research.

To Ollie's point, the ability to comprehend the essential aspects of treatment outcome research and related clinical research is what is important. Having said that, I think we can all agree that comprehending complex statistics (e.g., SEM, latent class analysis) is far less important to the practitioner than the ability to comprehend the more basic statistics (e.g., t-tests, ANOVAs, effect sizes, odds ratios, survival analyses) that are used frequently in treatment outcome studies. Yet for the academic, SEM is becoming a more frequently used tool that allows for the evaluation of theories and hypotheses. Are the bulk of studies practitioners read going to employ such complex statistics? Probably not. Does the practitioner's ability to understand such statistics influence their ability to practice? Probably not. Is their ability to understand experimental methodology going to influence their ability to formulate an empirically based treatment plan? Absolutely.

We need to consider the contexts in which people work and the knowledge they need to acquire in order to operate at a competent level. So while qualitative and quantitative differences in dissertations exist across PhD and PsyD programs, we need to consider the goals of the students and how their education prepares them for those goals.
 
Absolutely. But there is the danger of programs being too liberal in interpreting what a dissertation is for. You end up with projects about human-raven relations. You end up with people graduating from FSPS programs who "don't like numbers and math" (yes, I have heard that stated).

When a program does not demand a basic level of quality from their students, it tarnishes the quality of a "doctoral" level profession.

As to the distinctions you made (e.g., people who want to be in academia vs. those going into practice), I'd argue that practitioners still need a strong level of competency in both research methods and statistics. There is some crappy research published out there that deserves to be critiqued, but a lot of people just assume that since it was published it must be gospel. That's dangerous. Then, of course, there are people who misinterpret what they are reading, and perhaps apply something to their clinical work inappropriately.

I think some of us are arguing in this thread that the bar should be raised across the board for psychologists at the doctoral level. Some of us believe that strong scientific training is fundamental for the doctoral-level student - even those who want to be practitioners. Otherwise, why not just give them master's degrees and call it a day?
 
Otherwise, why not just give them master's degrees and call it a day?

This is the key part for me. From the folks I've worked with, a large number of programs are NOT providing doctoral-level training. They've simply slapped a "Doctorate" sticker on what is quite obviously a master's-level education (and in many cases, one that is easier to get into than a great many master's programs).

The bar is too low. Fully agree. We (as a profession) seem to keep moving it down rather than up. I agree that something like SEM is probably not critical for someone going into private practice to know in all its nuances. However, I don't think it's unreasonable to expect a practitioner to be able to pick up an article that includes SEM and do more than just skip over the results section and trust it's right. After all, examination of mechanisms is more or less required to publish in journals like JCCP these days, and SEM is one of the most common ways to conduct mediation analyses. We need to raise the bar, trim the fat, and start setting some meaningful standards before people lose respect for the field. Sadly, I'm not convinced that loss of respect would be unjustified the way things are going right now.
 
I agree that something like SEM is probably not critical for someone going into private practice to know in all its nuances. However, I don't think it's unreasonable to expect a practitioner to be able to pick up an article that includes SEM and do more than just skip over the results section and trust it's right. After all, examination of mechanisms is more or less required to publish in journals like JCCP these days, and SEM is one of the most common ways to conduct mediation analyses.

Oh, I totally agree. Obviously we can't expect everyone to know how to use every statistical method, but if you aren't familiar with it at a basic level, how can you be a good consumer of the literature? How can you apply what you are learning from an article if you don't have a working understanding of the statistical methodology used? I really hope that people don't just skip to the discussion section. Why can't we just specify in the APA accreditation standards what competencies we expect people to have with regard to statistics? I'd imagine we can expand on what is already there.

That's what confuses me. I don't know everything about statistics, but I was trained extensively enough to know where to look when there is something I don't understand. I fear that not all programs are training people to problem solve like this. It does make people lose respect for the field. Heck, even within the field, I've heard folks (some pretty prolific researchers) say, "Oh, well it's just clinical research. We aren't trying to get it into a social psych journal or anything." There is a reason for that attitude that transcends just the poor training at FSPS - I think it reflects laziness and acceptance of mediocrity.
 
This is the key part for me. From the folks I've worked with, a large number of programs are NOT providing doctoral-level training. They've simply slapped a "Doctorate" sticker on what is quite obviously a master's-level education (and in many cases, one that is easier to get into than a great many master's programs).

That is a great way to sum it up. I am going to steal it, but I will try to remember to cite you.
 
As others have already pointed out, the differences (if any) between PhD and PsyD dissertations depend on the program. However, it may be likely (note that I use research language here - hehe) that the difficulty of a PhD dissertation is ubiquitously greater than or equal to that of a PsyD dissertation. In my own viewing of PhD and PsyD dissertations, the latter often appear less ambitious.
 
Then, of course, there are people who misinterpret what they are reading...

How much (and what) are licensed practitioners (at both the master's and doctoral levels) actually reading, if anything? Surely this has been investigated, no? Anyone have a good citation?
 
How much (and what) are licensed practitioners (both at the masters and doctoral level) actually reading, if anything? Surely this has been investigated, no? Anyone have a good citation?

Well, one would hope that they are actually reading. This is supposed to be a fundamental concept of the Vail training model (consumer of the literature, practice informed by current research).

I'd like to see a study about this, but I am not sure there would be much available aside from self-report, which would be biased I'd imagine.

To a certain extent, CE requirements by state licensing boards may help to counteract professional apathy and resistance to staying updated.

As a funny aside - when I was on internship, my neuropsych supervisor (male) always kept the latest issue of JINS in the crapper at the office. This was considered required reading 😛
 
I love calling it "the crapper"! 😀

Re: reading. Clearly some folks keep up. And I'm sure that, if surveyed, many practitioners would over-report the number of journal articles they read. I still think it's a huge assumption that most practitioners are reading above and beyond the CE requirements, and like you, I would like to see a study about it.

And on a related note, since folks above have suggested that a PsyD without solid statistical and methodological skills could be considered a glorified master's practitioner--what about master's-level practitioners, especially given the prevailing "encroachment" discourse? If most lack the formal institutional support (okay, let's just call it "training") to acquire the skills needed to fully understand the empirical literature, what are the implications for practice?
 