PhD/PsyD Ask A Recent Graduate of a Professional School Anything

Again, different models skew these statistics to favor the PhD model, and the EPPP is mostly general psychology. The APPIC match is misleading due to imbalance and the fact that only about 50% of sites are APA accredited.

Again, statistics lie!!! :)
"More supervision is good" and "more debt is bad" are "models"? I thought they were obvious facts...

You might go with the fact that there are reliably good PsyDs (Rutgers, Baylor, etc.), or that some individuals with a lot of effort can be exceptions to the general trend (neither of which disputes the general contention that PhD programs are by-and-large superior to PsyD programs on all useful metrics). But what you're putting out here is untenable.

People lie. Stats don't. Stats can be misused by people.

If you wanna be a conspiracy theorist, be my guest. But don't color future applicants with that logic, please.

PS: And, what "model" are you referring to that conceptualizes less supervision, less science, and a heavy debt load as part of its training model?
 
Norm-referenced measures for determining quality of programs may give misleading numbers, and this is no conspiracy! Student growth using ipsative measures tends to yield more accurate predictors of program quality. Again... PhD bias is a self-serving bias based in fantasy, not necessarily reality. Human factors engineering, as in differential program characteristics, may not be reduced to black/white or PhD/PsyD as broadly based differential components, because external validity characteristics of overlapping components exist among and between PhD and PsyD programs. Therefore, confounding and extraneous variables influence outcome variables and may be misleading. There are too many similarities between PhD and PsyD training to provide an adequate pretest-posttest paradigm that can adequately differentiate between the two training models, with a posttest of APPIC match rate and EPPP performance, without the obvious results being inferences to confounding factors.
 
Are you just using fancy words for the fun of it?!

Not one word of that rambling post addresses or counters ANY of the issues referenced by myself, wiseneuro, or mike parent.
 
Norm-referenced measures for determining quality of programs may give misleading numbers, and this is no conspiracy! Student growth using ipsative measures tends to yield more accurate predictors of program quality. Again... PhD bias is a self-serving bias based in fantasy, not necessarily reality. Human factors engineering, as in differential program characteristics, may not be reduced to black/white or PhD/PsyD as broadly based differential components, because external validity characteristics of overlapping components exist among and between PhD and PsyD programs. Therefore, confounding and extraneous variables influence outcome variables and may be misleading. There are too many similarities between PhD and PsyD training to provide an adequate pretest-posttest paradigm that can adequately differentiate between the two training models, with a posttest of APPIC match rate and EPPP performance, without the obvious results being inferences to confounding factors.

Something smells like red herring. Fishy.
 
Norm-referenced measures for determining quality of programs may give misleading numbers, and this is no conspiracy! Student growth using ipsative measures tends to yield more accurate predictors of program quality. Again... PhD bias is a self-serving bias based in fantasy, not necessarily reality. Human factors engineering, as in differential program characteristics, may not be reduced to black/white or PhD/PsyD as broadly based differential components, because external validity characteristics of overlapping components exist among and between PhD and PsyD programs. Therefore, confounding and extraneous variables influence outcome variables and may be misleading. There are too many similarities between PhD and PsyD training to provide an adequate pretest-posttest paradigm that can adequately differentiate between the two training models, with a posttest of APPIC match rate and EPPP performance, without the obvious results being inferences to confounding factors.

Everything I've previously said would more accurately be applied to the professional school vs. non-professional school dichotomy rather than explicitly to PsyD vs. PhD. It just so happens that the majority of professional schools offer the PsyD rather than the PhD.
 
If you really believe the PhD and PsyD models are substantially different models without significant overlap of training, you are either misinformed or you have been living in a cave.

Outcome data would provide improved understanding of psychology programs by combining PhD and PsyD program data and then looking at each individual program's data in comparison to all of the other doctoral programs. It is bogus data manipulation to categorize PhD programs as being substantially different from PsyD programs. Furthermore, to infer program quality based on APPIC match and EPPP scores may be misleading due to skew factors, since all doctoral-level students would be at the upper extreme on any pretest measure used to compare with posttest measures.
 
If you really believe the PhD and PsyD models are substantially different models without significant overlap of training, you are either misinformed or you have been living in a cave.

Outcome data would provide improved understanding of psychology programs by combining PhD and PsyD program data and then looking at each individual program's data in comparison to all of the other doctoral programs. It is bogus data manipulation to categorize PhD programs as being substantially different from PsyD programs. Furthermore, to infer program quality based on APPIC match and EPPP scores may be misleading due to skew factors, since all doctoral-level students would be at the upper extreme on any pretest measure used to compare with posttest measures.

So.... your assertion is that there is NO quantifiable metric, either alone or in combination, that speaks to program quality? Is this what you are saying?

If that is not what you are saying, please list the metric(s) that you think indicate program quality.
 
Outcome data would provide improved understanding of psychology programs by combining PhD and PsyD program data and then looking at each individual program's data in comparison to all of the other doctoral programs.

So, what Williamson and I did. And showed that 14 of 15 statistical outliers were PsyD programs.
 
So, what Williamson and I did. And showed that 14 of 15 statistical outliers were PsyD programs.

I was just about to cite: Parent, M. C., & Williamson, J. B. (2010). Program disparities in unmatched internship applicants. Training and Education in Professional Psychology, 4, 116-120.
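
For anyone who wants to see what comparing each program against the pooled distribution can look like, here is a rough Python sketch. The counts are invented, and this is not the actual analysis from the 2010 paper; it simply flags programs whose unmatched rate sits well above what the pooled rate would predict for a cohort of their size.

Code:
# Toy illustration only -- NOT the analysis from Parent & Williamson (2010).
# Pool every program, compute the overall unmatched rate, then flag programs
# whose own unmatched rate sits far above what the pooled rate would predict
# for a cohort of their size. All counts below are invented.
from math import sqrt

programs = [(f"Program {i}", 12, 1) for i in range(1, 11)]    # typical cohorts
programs += [("Program X", 100, 40), ("Program Y", 120, 50)]  # two large cohorts

total_apps = sum(n for _, n, _ in programs)
total_unmatched = sum(u for _, _, u in programs)
p_pool = total_unmatched / total_apps          # pooled unmatched rate

for name, n, u in programs:
    p_hat = u / n
    se = sqrt(p_pool * (1 - p_pool) / n)       # SE of a rate under the pooled value
    z = (p_hat - p_pool) / se
    if z > 2:                                  # crude "outlier" cutoff
        print(f"{name}: unmatched {p_hat:.2f} vs pooled {p_pool:.2f} (z = {z:.1f})")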
 
If you really believe the PhD and PsyD models are substantially different models without significant overlap of training, you are either misinformed or you have been living in a cave.

Also, from a more subjective perspective, although it has objective elements: as someone who went to a Ph.D. program and then taught in a Psy.D. program, EVERYTHING about the training was different for those students than what I experienced. Literally the only commonality was that we both saw patients on practicum. Expectations, goals, philosophy, approach, focus, supervision, oversight, structure, etc. It was ALL markedly different.
 
So, what Williamson and I did. And showed that 14 of 15 statistical outliers were PsyD programs.

This could be due not to PhD/PsyD factors but to ipsative factors within specific programs. I am not sure of the data, but my guess is there are more PsyD programs and students than PhD programs and students.

Most current APPIC statistics combine PhD with EdD programs and students. So does this mean EdD students are from superior programs?
 
cab1234, I'm very grateful, as are others, for your candor and forthrightness.

I'm on the cusp of attending an FSPP for the PsyD myself, this fall. I, too, am pursuing the Great Trifecta of research, teaching, and practice. For those who wonder, after 3 years of Ph.D. applications (in which I clearly delineated my 2 years of research experience in a developmental lab at Harvard, 1 year as a clinical research interviewer in a renowned developmental psychobiology lab, 2+ years of clinically-related work in the field, totally acceptable - though admittedly not off-the-charts - GREs, and 1 co-authored article submitted for publication), I finally succumbed to the conclusion that it simply is not always a meritocracy. I refuse to spend my entire life applying in vain, and do not simply want to be a therapist at the master's level.

I plan to investigate internship sites early and tailor my pracs accordingly. I also plan to remain involved in the lab in which I'm currently a clinical research interviewer part-time. I have an idea for the program of research I'd love - someday - to mount. I'll probably elect to take some advanced stats courses on my own time (in all my fabulous free time, of course).

Any words of wisdom, cab? I'm so wishing I weren't standing here staring straight up at the mountain of future debt before me. But here I am.

I understand that you feel beaten down by the system, and it sucks that you haven't gotten an acceptance with stats like that.

But as random as this process might seem, internship apps are even worse. Even if you do tailor your apps and have a good level of research involvement, the odds are not in anyone's favor, and especially not for professional school students. There will be internship sites that will throw out your app without even looking at it (as SDNers involved in apps can attest to). Yes, you could very well be the exception--people have done it, including the OP. However, it is possible that you will not be the exception and that you will fail to match. I don't recommend entering a situation like that in which the odds are already stacked against you. Grad school rejection is awful, but at least you haven't invested years of your time and money like you have by the time of internship apps.
 
This could be due not to PhD/PsyD factors but to ipsative factors within specific programs. I am not sure of the data, but my guess is there are more PsyD programs and students than PhD programs and students.

Most current APPIC statistics combine PhD with EdD programs and students. So does this mean EdD students are from superior programs?

Neurodoc,

This article was peer reviewed and published. I think the least you could do is read the thing before arm-chairing criticisms about the data and the conclusions. Perhaps your hypothesis was even examined in the article. Imagine that. I didn't even write it, and I find that post a bit disrespectful.
 
Also, from a more subjective perspective, although it has objective elements: as someone who went to a Ph.D. program and then taught in a Psy.D. program, EVERYTHING about the training was different for those students than what I experienced. Literally the only commonality was that we both saw patients on practicum. Expectations, goals, philosophy, approach, focus, supervision, oversight, structure, etc. It was ALL markedly different.

Hah.... It is common for faculty in PsyD programs to be from PhD programs. In my internship, one PhD intern in my cohort did not have any assessment courses, as she was teaching and doing research in her program and the expectation was that she would get assessment training during internship. She graduated and now she is teaching in a PsyD program, and she has no interest in licensure as she wants to teach.

Confusing... If the PsyD model is inferior, then why do so many with PhDs choose to teach in these programs?
 
I understand that you feel beaten down by the system, and it sucks that you haven't gotten an acceptance with stats like that.

But as random as this process might seem, internship apps are even worse. Even if you do tailor your apps and have a good level of research involvement, the odds are not in anyone's favor, and especially not for professional school students. There will be internship sites that will throw out your app without even looking at it (as SDNers involved in apps can attest to). Yes, you could very well be the exception--people have done it, including the OP. However, it is possible that you will not be the exception and that you will fail to match. I don't recommend entering a situation like that in which the odds are already stacked against you. Grad school rejection is awful, but at least you haven't invested years of your time and money like you have by the time of internship apps.

This boils down to the difficult situation people like this are forced to deal with:

1. Attend a master's program and be content with the salary and professional options associated with this level of education.
2. Keep on applying year after year in hopes of increasing their odds of gaining acceptance into a program that traditionally has a very small acceptance rate. (This assumes that they are distributing their applications to many programs of varying tiers.)
3. Find yet another career to start from scratch despite spending a lot of time in psychology.

I can definitely empathize with this poster, as a career switcher myself having spent 15 years in my prior career. Simply saying "oh well, you just aren't cut out for it" is not a solution. This assumes that the only variables they are considering are cost of degree, expected salary, and the difference between those two in terms of potential debt assumed.

Tough choice, but by no means isolated to just a handful of variables deemed "the most important."
 
I think most reasonable people reading this thread, whatever their experience level, would view this as you being relatively obtuse and stubborn about the issue, as you have not really been able to explain why the traditionally accepted metrics carry no indication of program quality.

This is an uphill battle you are trying to fight here, and I don't hear or see any actual evidence to counter our evidence. Rather, I only hear philosophizing, and rather poor philosophizing at that.
 
Hah.... It is common for faculty in PsyD programs to be from PhD programs. In my internship, one intern in my cohort did not have any assessment courses, as she was teaching and doing research and the expectation was that she would get this training during internship. She graduated and now she is teaching in a PsyD program, and she has no interest in licensure as she wants to teach.

Confusing... If the PsyD model is inferior, then why do so many with PhDs choose to teach in these programs?

I don't know. Why does it matter? It's well known that Ph.D.s go into academic positions at higher rates than Psy.D.s. So, seems quite logical, no? Was your program faculty all Psy.D.s or something?

And what exactly are you confused about? My disagreement with an approach does not impact my wanting to help those people. I enjoy teaching. I feel that I can positively impact students with my teaching. I don't support or agree with opiate abuse either, but I certainly work to help opiate abusers in any way that I can. I don't particularly support gangs and gang violence, but this doesn't mean I don't want to work with and help gang members, does it?!

And...you still have yet to answer my very reasonable question from post #58.
 
I think most reasonable people reading this thread, whatever their experience level, would view this as you being relatively obtuse and stubborn about the issue, as you have not really been able to explain why the traditionally accepted metrics carry no indication of program quality.

This is an uphill battle you are trying to fight here, and I don't hear or see any actual evidence to counter our evidence. Rather, I only hear philosophizing, and rather poor philosophizing at that.

Individual/ipsative factors need to be evaluated rather than primarily norm-referenced factors based on the population. There is skewness toward the upper extreme for psychologists with doctoral degrees. The EPPP score is not a solid outcome measure, as most have to retake it and eventually pass the test. Do you really believe EPPP pass rate is a reliable measure of program quality?

All of my program faculty were PhDs except for one PsyD, and she was the clinical training director and taught Statistics 1 and 2, research design/SPSS, and psychometric theory.
 
Individual/ipsative factors need to be evaluated rather than primarily norm-referenced factors based on the population.

Why? We are attempting to draw conclusions about seemingly large groups here, right? Not individuals!

And, with respect, "most have to retake it" might just be your experience coming from an FSPP and its alumni. This is NOT the norm.

Do you really believe EPPP pass rate is a reliable measure of program quality?

And yes, I do. Content is applicable to both the application and the basic bench science of psychology. Something a person should be getting during a doctoral degree. Face validity. Programs with reputations for poor training and outcomes have markedly higher fail rates. Convergent validity. What say you?

Would you now care to answer this question below from post 58?
So.... your assertion is that there is NO quantifiable metric, either alone or in combination, that speaks to program quality? Is this what you are saying?

If that is not what you are saying, please list the metric(s) that you think indicate program quality.
 
Why? We are attempting to draw conclusions about seemingly large groups here, right? Not individuals!

And, with respect, "most have to retake it" might just be your experience coming from an FSPP and its alumni. This is NOT the norm.



And yes, I do. Content is applicable to both the application and the basic bench science of psychology. Something a person should be getting during a doctoral degree. Face validity. Programs with reputations for poor training and outcomes have markedly higher fail rates. Convergent validity. What say you?

Would you now care to answer this question below from post 58?

When you look at the bottom line, eventual licensure to engage in the independent practice of psychology is the outcome metric necessary to evaluate successful training.

People take different paths to reach the same goal. This does not necessarily add credence to either path chosen.

Training outcome is individually determined rather than group determined, as you cannot isolate the impact of group variables.

Many PhD psychologists cringe when they talk about their training, as they now fully understand how useless their research training was for the work they do as psychologists.

One of my mentors worked as a special education teacher for ten years before returning to obtain his MS degree in counseling & guidance. He then went back and finished his PhD in his forties in a non-APA-accredited program. Now in his late sixties, he is on the state licensing board of psychologists, and it took him three attempts to pass the EPPP. He had to spread out his internship among many private practice psychologists, as was common in the 70s.

Looking at the metrics of the EPPP and his internship, he would be considered to be from an inferior training program by norm reference, but using ipsative measures he is highly successful.
 
Now, would you care to answer the very reasonable question from post 58?

So.... your assertion is that there is NO quantifiable metric, either alone or in combination, that speaks to program quality? Is this what you are saying?

If that is not what you are saying, please list the metric(s) that you think indicate program quality.
 
Training outcome is "individually determined?" 100% of the variance, huh? Do you have some factor analytic study I am unaware of here?

And, "cannot isolate the impact of group variables." Are you vaguely familiar with research methodology?!

If your assertion were known to be true (rather than just your opinion), then why do we have supervision? Classes? Tests? Why do we have graduate school at all?! Your arguments are getting less and less logical the more we do this.
 
One of my mentors worked as a special education teacher for ten years before returning to obtain his MS degree in counseling & guidance. He then went back and finished his PhD in his forties in a non-APA-accredited program. Now in his late sixties, he is on the state licensing board of psychologists, and it took him three attempts to pass the EPPP. He had to spread out his internship among many private practice psychologists, as was common in the 70s.

Looking at the metrics of the EPPP and his internship, he would be considered to be from an inferior training program by norm reference, but using ipsative measures he is highly successful.

And why do you keep telling us all these stories about individuals you know? Stop it. We are discussing groups and group outcomes here. I couldn't care less about your friend...or any individual case that one picks out of a group. We are talking about group trends. Get it?
 
Lastly, I undoubtedly use many of the scientific and research skills I learned in my Ph.D. every day in my clinical work. Whether it's running a reliable change index on a symptom measure, running stats on clinic utilization rates, or keeping base rates in mind when diagnosing. I feel sorry for colleagues who are not using their scholarly skills each day. But that's their CHOICE.
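
Since the reliable change index came up, here is a minimal sketch of the Jacobson & Truax (1991) calculation for anyone who hasn't run one; the scale values are made up for illustration.

Code:
# Minimal sketch of the Jacobson & Truax (1991) reliable change index.
# The scale values below are made up for illustration.
from math import sqrt

def reliable_change_index(pre, post, sd_pre, reliability):
    """RCI = (post - pre) / S_diff, where S_diff reflects measurement error."""
    sem = sd_pre * sqrt(1 - reliability)   # standard error of measurement
    s_diff = sqrt(2 * sem ** 2)            # SE of the difference score
    return (post - pre) / s_diff

# e.g., a symptom measure with SD = 10 and reliability = .88
rci = reliable_change_index(pre=28, post=16, sd_pre=10.0, reliability=0.88)
print(f"RCI = {rci:.2f}; reliable change if |RCI| > 1.96")   # prints about -2.45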
 
The metric from my perspective would come from state psychology licensing boards ten years post-graduation to evaluate program quality of training. For psychologists, a larger weight on qualitative factors over quantitative factors determines long-term success.

There is no uniformity of training standards even when a program is APA accredited, regardless of PhD or PsyD, and there are more similarities than differences.

ERG and Parent... I know you mean well and you are both early career psychologists, but my guess is that once you are seasoned psychologists after years of experience, your opinions could change, as have those of many of the PhD psychologists I know who value the PsyD training model.
 
Ok. I think it should be made known, and crystal clear, that your assertion is that there are no informative indicators of program quality until "ten years post-graduation," and that most are "qualitative factors" (i.e., not able to be measured). Is this correct?

If so, the obvious implication is that literally hundreds of thousands of doctoral-level scientists, faculty, statisticians, clinicians, and administrators are WRONG.

So, I will just give you a minute to think about that...
 
ERG and Parent... I know you mean well and you are both early career psychologists, but my guess is that once you are seasoned psychologists after years of experience, your opinions could change, as have those of many of the PhD psychologists I know who value the PsyD training model.
If your intention with things like this, and silly thesaurused posts, is to sound wise, I have bad news.
 
Licensure rates are usually available on each individual program's website; I believe it's actually one of the data points that APA requires programs to list in their outcomes section. And rates of licensure for professional schools are by and large lower than for more traditional, university-based programs. Which seems counter-intuitive, given that you'd expect more purely academic (and thus possibly non-licensed) folks to have come from the university-based programs.

Again, I don't think the issue is really PhD vs. PsyD per se. However, the data indicates that the "worst offender" programs (e.g., those identified by MCParent in his article) happen to offer the PsyD degree.

And even though grad school psych folks do, in general, tend to fall on the upper end of the academic achievement spectrum when compared to the population at large, if all we're doing is comparing said grad school psych folks to one another, then we've essentially created a new distribution that should be fairly normal in its characteristics. Thus, the outcome metrics we've discussed retain their utility. It's not like we're comparing EPPP passing rates for clinical/counseling psych folks to the US population of undergraduate students or something.

RE: most people having to re-take the EPPP, I believe most people who take the exam (and most first-time test takers) actually pass it. The aggregate 2012 data from the ASPPB website, for example, indicates that >70% of folks passed.

As long as we're going with anecdotes, I personally don't know any practicing psychologists who've bemoaned their extensive research training. Sure, some wish they'd had more clinical training, but none felt unprepared to practice based on said clinical training, and not a one felt that their research experience was going unused.
 
We will just have to agree to disagree. APPIC match and EPPP pass rates are not appropriate measures of program quality when other individual factors need to be considered.

Curriculum factors, practicum supervision over three full years, and development of clinical skill competence are more valued in the development of psychologists.

It may be difficult to judge unless you have been in both training models. The respecialization students have had this opportunity, and some of them flourished under the PsyD model, with practicum courses and intensive supervision of clinical skills development that they did not receive in their PhD programs. Furthermore, some of the students switched from a PhD to a PsyD program after finishing their MS degree because they were unhappy with the emphasis on research in their PhD program, and they too welcomed the practicum and intensive supervision under the PsyD model.
 
Licensure rates are usually available on each individual program's website; I believe it's actually one of the data points that APA requires programs to list in their outcomes section. And rates of licensure for professional schools are by and large lower than for more traditional, university-based programs. Which seems counter-intuitive, given that you'd expect more purely academic (and thus possibly non-licensed) folks to have come from the university-based programs.

Again, I don't think the issue is really PhD vs. PsyD per se. However, the data indicates that the "worst offender" programs (e.g., those identified by MCParent in his article) happen to offer the PsyD degree.

And even though grad school psych folks do, in general, tend to fall on the upper end of the academic achievement spectrum when compared to the population at large, if all we're doing is comparing said grad school psych folks to one another, then we've essentially created a new distribution that should be fairly normal in its characteristics. Thus, the outcome metrics we've discussed retain their utility. It's not like we're comparing EPPP passing rates for clinical/counseling psych folks to the US population of undergraduate students or something.

RE: most people having to re-take the EPPP, I believe most people who take the exam (and most first-time test takers) actually pass it. The aggregate 2012 data from the ASPPB website, for example, indicates that >70% of folks passed.

As long as we're going with anecdotes, I personally don't know any practicing psychologists who've bemoaned their extensive research training. Sure, some wish they'd had more clinical training, but none felt unprepared to practice based on said clinical training, and not a one felt that their research experience was going unused.

If the 30% who failed are scattered across programs relatively randomly or equally, then I can see the argument that failing is NOT an indicator of program quality. But that's not what we see.

And, I think it's relatively commonsensical for a test whose content is psychological science and the application of that science (even if that content is rather ridiculous or irrelevant at times, I admit) to be correlated with other agreed-upon measures of program quality.
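
For what it's worth, the "scattered randomly vs. clustered" question is directly testable. A rough sketch with invented counts (not data from any real program):

Code:
# Rough sketch: are EPPP failures spread evenly across programs, or do they
# cluster in particular ones? Counts are invented for illustration.
from scipy.stats import chi2_contingency

# rows = programs, columns = [passed, failed]
table = [
    [45, 5],    # Program A
    [48, 2],    # Program B
    [30, 20],   # Program C -- failures concentrated here
    [44, 6],    # Program D
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
# A small p means failure is NOT scattered randomly: program membership
# carries information about pass/fail.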
 
We will just have to agree to disagree. APPIC match and EPPP pass rates are not appropriate measures of program quality when other individual factors need to be considered.

Curriculum factors, practicum supervision over three full years, and development of clinical skill competence are more valued in the development of psychologists.

It may be difficult to judge unless you have been in both training models. The respecialization students have had this opportunity, and some of them flourished under the PsyD model, with practicum courses and intensive supervision of clinical skills development that they did not receive in their PhD programs. Furthermore, some of the students switched from a PhD to a PsyD program after finishing their MS degree because they were unhappy with the emphasis on research in their PhD program, and they too welcomed the practicum and intensive supervision under the PsyD model.

Uh, well given the overwhelming amount of evidence presented here, and the lack of evidence presented by you, I think most people are going to agree that you are simply playing dumb here.
 
We will just have to agree to disagree. APPIC match and EPPP pass rates are not appropriate measures of program quality when other individual factors need to be considered.

Curriculum factors, practicum supervision over three full years, and development of clinical skill competence are more valued in the development of psychologists.

It may be difficult to judge unless you have been in both training models. The respecialization students have had this opportunity, and some of them flourished under the PsyD model, with practicum courses and intensive supervision of clinical skills development that they did not receive in their PhD programs. Furthermore, some of the students switched from a PhD to a PsyD program after finishing their MS degree because they were unhappy with the emphasis on research in their PhD program, and they too welcomed the practicum and intensive supervision under the PsyD model.

I agree that those factors are of course important, but the thing is, those are the sorts of areas that internship review committees are evaluating when selecting folks to rank for internship spots. Thus, in that way, APPIC match rates can serve as a semi-proxy for those variables. As can EPPP and licensure rates, given that the former at least attempts to evaluate the knowledge gained from curriculum and practica experiences, and the latter captures the EPPP plus, in some cases, the ability to practically demonstrate competence via state oral exams (for those states with such exams).
 
If the 30% who failed are scattered across programs relatively randomly or equally, then I can see the argument that failing is NOT an indicator of program quality. But that's not what we see.

And, I think it's relatively commonsensical for a test whose content is psychological science and the application of that science (even if that content is rather ridiculous or irrelevant at times, I admit) to be correlated with other agreed-upon measures of program quality.

Agreed.
 
This is entertaining.

 
The population of students at FSPPs versus university-based programs could be one of the factors behind the outlier statistics affecting the metrics. Many FSPPs target non-traditional students, both PhD and PsyD. I believe Fielding is a PhD clinical psychology program that attracts ministers and active or reserve military who cannot attend a full-time university-based program. These students are typically older and have been out of school a number of years. I think Alliant, Union, and Walden have a similar non-traditional focus, and these are PhD and PsyD programs.

So it could be student-centered factors rather than program quality factors that affect the outcome metrics.
 
I have concerns about your reasoning abilities.
 
If your intention with things like this, and silly thesaurused posts, is to sound wise, I have bad news.


From review of the 2010 study, have you collected any data since 2006 on the 15 programs that represented 30% of the unmatched internship applicants from 2000 to 2006? I was surprised that all but one were APA-accredited programs. Sorta throws a wrench in your conclusions, as essentially these are high-quality programs with many unmatched students, if you value APA accreditation.

My program did not start internship applications until 2008. I have some questions about the validity of APPIC statistics, as my program went from 3 applicants in 2008 to 14 last year, though one year there were 24 applicants for internship. All of the students who do not match in the APPIC match are placed in a consortium internship, or they find, and in some cases develop, their own internship. In my case, I matched at an APA-accredited site, as did two others in my cohort of 11. However, APPIC statistics list us as having a poor APPIC match rate, as many of the students choose to go the consortium route and stay local. How do you know that this is not also how these 15 programs responsible for 30% of unmatched applicants from 2000-2006 are set up? In California many choose the CAPIC match rather than APPIC. Did you review those statistics for the California programs with a high APPIC unmatched rate?

Strange, but the EPPP pass rate is low for my program, yet all of the students I know from my program passed the EPPP and oral exams to become licensed.

Reviewing APPIC information about my APA-accredited internship shows my cohort as all interns being from APA-accredited programs. How does APPIC derive this demographic information? Is it from the DCT report?

I'm not saying your study is invalid, but I question the validity of APPIC information, and you may need to get the information from the graduate programs.
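
To make the denominator point concrete, here is a toy calculation. The split between consortium placements and truly unplaced students is assumed for illustration; only the cohort size and the three APPIC matches loosely mirror the example above.

Code:
# Toy arithmetic for the denominator point above. The consortium/unplaced
# split is assumed for illustration; only the cohort size and APPIC matches
# loosely mirror the example in this post.
cohort = 11
appic_matched = 3        # matched through the APPIC match
consortium_placed = 7    # assumed: placed locally outside the match
unplaced = cohort - appic_matched - consortium_placed

appic_rate = appic_matched / cohort
placed_rate = (appic_matched + consortium_placed) / cohort

print(f"APPIC match rate:   {appic_rate:.0%}")   # ~27%
print(f"Any-placement rate: {placed_rate:.0%}")  # ~91%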
 
From review of the 2010 study, have you collected any data since 2006 on the 15 programs that represented 30% of the unmatched internship applicants from 2000 to 2006? I was surprised that all but one were APA-accredited programs. Sorta throws a wrench in your conclusions, as essentially these are high-quality programs with many unmatched students, if you value APA accreditation.

My program did not start internship applications until 2008. I have some questions about the validity of APPIC statistics, as my program went from 3 applicants in 2008 to 14 last year, though one year there were 24 applicants for internship. All of the students who do not match in the APPIC match are placed in a consortium internship, or they find, and in some cases develop, their own internship. In my case, I matched at an APA-accredited site, as did two others in my cohort of 11. However, APPIC statistics list us as having a poor APPIC match rate, as many of the students choose to go the consortium route and stay local. How do you know that this is not also how these 15 programs responsible for 30% of unmatched applicants from 2000-2006 are set up? In California many choose the CAPIC match rather than APPIC. Did you review those statistics for the California programs with a high APPIC unmatched rate?

Strange, but the EPPP pass rate is low for my program, yet all of the students I know from my program passed the EPPP and oral exams to become licensed.

Reviewing APPIC information about my APA-accredited internship shows my cohort as all interns being from APA-accredited programs. How does APPIC derive this demographic information? Is it from the DCT report?

I'm not saying your study is invalid, but I question the validity of APPIC information, and you may need to get the information from the graduate programs.

Did you actually read this study in its entirety? Most of these points are addressed in the discussion.
 
OND, you should read studies before commenting on them. (Really; it's only 4 pages long...)

Burgess et al. (2008) cover the "creative interpretations" some programs make with C-20 disclosure data (supported by the CoA's recent actions to enforce correct reporting of these statistics). Using program data is not appropriate for this kind of research.
 
I read it, and as I indicated, I question the validity of the APPIC information, and I would want additional information from the programs before drawing such bold conclusions. It certainly does not bode well for APA-accredited programs if non-APA-accredited programs have a higher MATCH rate.

My premise, as stated throughout this diatribe with you, is that other factors may be responsible, rather than generalizing APPIC match and EPPP pass rates to program quality. Heck, I now know it is common for 70% of students to pass the EPPP the first time.

My program is listed as having a 67% match rate, but I looked at some of the other nearby PhD programs and they ranged from 50% to 89%.

Why is there such a delay in program data from APPIC? The data from 2011 to 2014 are not listed, and my program has had a higher match rate the last four years.
 
I read it, and as I indicated, I question the validity of the APPIC information, and I would want additional information from the programs before drawing such bold conclusions. It certainly does not bode well for APA-accredited programs if non-APA-accredited programs have a higher MATCH rate.

My premise, as stated throughout this diatribe with you, is that other factors may be responsible, rather than generalizing APPIC match and EPPP pass rates to program quality. Heck, I now know it is common for 70% of students to pass the EPPP the first time.

My program is listed as having a 67% match rate, but I looked at some of the other nearby PhD programs and they ranged from 50% to 89%.

Why is there such a delay in program data from APPIC? The data from 2011 to 2014 are not listed, and my program has had a higher match rate the last four years.

This is classic "conspiracy theory reasoning", as I have mentioned to you before.

When you show objective/scientific evidence of a lone gunman to a JFK conspiracy nut, all they do is attack the science, the scientists, and/or the data as being in on the conspiracy or otherwise invalid. It's a thought process that is impervious, because you will never trust the data or the messenger that presents the opposing view.

I think this only serves to highlight the importance of training psychologists in scientifically rigorous programs. Otherwise, they apparently turn out to be unable to think straight. If you are really going to be this resistant to data, then I think we can all just stop now.
 
I'm not saying your study is invalid, but I question the validity of APPIC information, and you may need to get the information from the graduate programs.

For goodness' sake, OND, who has more incentive to be "creative" with that data? The programs or APPIC? That's a pretty obvious no-brainer, man.
 
This is classic "conspiracy theory reasoning", as I have mentioned to you before.

When you show objective/scientific evidence of a lone gunman to a JFK conspiracy nut, all they do is attack the science, the scientists, and/or the data as being in on the conspiracy or otherwise invalid. It's a thought process that is impervious, because you will never trust the data or the messenger that presents the opposing view.

I think this only serves to highlight the importance of training psychologists in scientifically rigorous programs. Otherwise, they apparently turn out to be unable to think straight. If you are really going to be this resistant to data, then I think we can all just stop now.

Hah.... Again making broad generalizations! I value scientifically rigorous training, and what makes you believe PsyD programs are not scientifically rigorous? We did research, with four to six stats/research courses. You need to open the window and look outside before making judgments about science. Inductive and deductive reasoning insist that data should be questioned.

Conspiracy.... Are you serious?
 
I value scientifically rigorous training, and what makes you believe PsyD programs are not scientifically rigorous?

Because of the faulty thinking illustrated in your posts in this thread.

When you look at this thread, it's quite obvious I am not the only one who thinks this either.
 
Because of the faulty thinking illustrated in your posts in this thread.

When you look at this thread, it's quite obvious I am not the only one who thinks this either.
:) Well, my faulty thinking certainly held your interest today!!
 