Getting a PhD from an online program

I don't think you know what an anecdote is.
It seems to me that you are the one confused about what an anecdote is. According to Merriam-Webster, an anecdote is "a usually short narrative of an interesting, amusing, or biographical incident." I told no narrative or story of a specific incident. I simply made a general assertion about the state of the job market. (You could argue that I did not provide sufficient evidence, but an anecdote is not synonymous with an assertion without evidence.) Later, I did provide an anecdote ("anecdotal evidence") when I cited a news account of a specific person with a DSW obtaining a specific job.

 
It seems to me that you are the one confused about what an anecdote is. According to Merriam-Webster, an anecdote is "a usually short narrative of an interesting, amusing, or biographical incident." I told no narrative or story of a specific incident. I simply made a general assertion about the state of the job market. (You could argue that I did not provide sufficient evidence, but an anecdote is not synonymous with an assertion without evidence.) Later, I did provide an anecdote ("anecdotal evidence") when I cited a news account of a specific person with a DSW obtaining a specific job.
Pedantry, irony, and proving my point, oh my!

It's almost like I was summarizing your posts without quoting them all in their entirety.
 
Pedantry, irony, and proving my point, oh my!

It's almost like I was summarizing your posts without quoting them all in their entirety.
Goodness. Keyword: "later." The initial comment about anecdotes by WiseNeuro was made before I provided the anecdote that I cited in my reply to your comment about me not knowing what an anecdote is, so that's far from proving your point. And it's quite rich for you to call someone out on pedantry while incorrectly critiquing semantics and providing little to nothing in the way of useful commentary. Anyway, I hope we can get back to a productive conversation instead of trying to discredit me.
 
That's a great point. I would say the DSW is different. As you'll see in this link, there are Penn DSW grads now teaching as professors at places like USC, NYU, and Temple: Doctorate in Clinical Social Work (DSW) Program Format - Penn SP2. Here (UT assistant vice chancellor for student life to retire) is someone with an online DSW from the University of Tennessee who is now Assistant Vice Chancellor at the University of Tennessee, in charge of overseeing the Director of the Counseling Center (granted, that's not a teaching job, but being an assistant VC is significant). One important thing to note is that all of these degrees are hybrid to some extent, meaning they require some visits to campus. Penn and UT require one week on campus each summer. The Oregon State program probably wouldn't be realistic unless you lived in the region, as you come to campus for a weekend twice each quarter. Based on my quick search, it seems there may be a few smaller B&M schools offering online degrees in related fields but very, very few major B&M schools, so that's an important point.

I don't super understand what the list of positions in the link is intended to demonstrate. I can show you a photo of a white tiger; that does not mean all tigers are white. Surely you recognize that a few people doing well is not the same as doing well being a modal outcome.
 
I don't super understand what the list of positions in the link is intended to demonstrate. I can show you a photo of a white tiger; that does not mean all tigers are white. Surely you recognize that a few people doing well is not the same as doing well being a modal outcome.
Of course! From my perspective, I'm not the one overgeneralizing. I'm not saying that all, or even most, people with online doctorates will be smashing successes. I'm simply saying that dismissing all online doctorate programs as useless isn't accurate or helpful.

Multiple posters (including @Psychmeout) suggested OP look at which B&M schools have faculty who obtained online doctorates and whether those schools have online programs themselves. By showing the individuals on that Penn link, I perhaps gave OP a start on that. It was also important to link because, the way some people on this forum spoke, they would have you believe: 1) those positions themselves don't exist (lack of acknowledgement of the clinical professor role) and 2) it's almost unheard of for someone with an online doctorate to land a full-time teaching gig at a B&M school. While it may be rare, many of these online programs are fairly new, and more of their graduates will likely be filling the teaching ranks in the coming years. This is an important thing to be aware of for someone considering an online doctorate and a perspective that was not previously acknowledged on the thread.

Lastly, it's not hard to see how a doctorate (online or not) from an established B&M school (especially an Ivy League one) would be beneficial in a job application to teach at many schools. I'm not sure why people are having such a hard time with this point. I would encourage folks to ask contacts in schools of social work how they would view an applicant with an online doctorate from an established school like Penn, USC, or the University of Tennessee. I am telling you this from the perspective of having worked as faculty in a school of social work at a large school. Again, to be clear, all I am saying is that online doctorates from a reputable B&M school (of which there are few) can help in obtaining a full-time teaching position at a B&M school in certain departments, particularly in the field of social work. That should not be considered a controversial statement when you consider that many schools of social work hire professors of practice at the master's level. To reiterate, talking about social work is relevant because OP is an LCSW. Perhaps people are reading more into my point than what I said, but I think I've been quite consistent and clear. [Edit: I'm pretty tired of this, so I likely won't be responding further. Hope I was helpful, OP! That's what this is for.]
 
When I'm hungry, I go buy food. I don't rely on someone to stop by and give me a cooked meal. It could happen, but it seems unwise. Encouraging OP to go to the grocery store seems appropriate.

What's your investment in this 'online PhDs can get you where you want to be (academia in psych)' thing? Do you have one?
 
When I'm hungry, I go buy food. I don't rely on someone to stop by and give me a cooked meal. It could happen, but it seems unwise. Encouraging OP to go to the grocery store seems appropriate.

What's your investment in this 'online PhDs can get you where you want to be (academia in psych)' thing? Do you have one?

The OP asked for advice. That's what this forum is for.

Why would it matter if aftermidnight has an online degree or not?
 
The OP asked for advice. That's what this forum is for.

Why would it matter if aftermidnight has an online degree or not?
The OP asked for advice that is not being given. Moreover, there is a clear investment in a specific outcome that runs contrary to what would be considered 'good advice'. Faculty members at universities don't have online degrees. Online degrees are not accepted as quality in psychology (because they are not). The only 'evidence' to support the suggested position is either (1) not within the field of psychology and inappropriate for this forum, (2) not within the field OP wants to work in, making it irrelevant to this thread AND this forum, or (3) not what typically (ever?) leads to academic positions.

Listen, you've got academics in this thread saying 'hey, this is what will/will not work to get an academic position'. You also have someone saying 'But what if... but it could if...'. There is a disconnect and, thus, an over-investment in a specific answer.
 
The OP asked for advice. That's what this forum is for.

Why would it matter if aftermidnight has an online degree or not?
How many times have students and graduates of high cost, poor outcome doctoral programs come here to defend their decisions or encourage others to make similar poor choices?

The same would likely hold true for students and graduates of online programs.

These posts rely on anecdotes and edge cases instead of modal experiences and outcomes. They are akin to advising prospective students to gamble their futures on long-shot outliers.
 
This will seem like a silly discussion in 10 years.

And not because we don't already have the technology to change things in an instant and do things more efficiently/better, but because education hasn't changed very much from the very start. We've figured out so many more complicated things... but we haven't been able to figure out how to teach except by having a teacher stand in front of a room lecturing.

Things will change. We just need some of the older folks to die out.
 
Education is not limited to the classroom. I would argue (and have in the past) that the majority of doctoral training occurs outside of the classroom: research labs, collaborating with students in your cohort, practica training, etc.

When I say "outside", I don't mean working "independently", which seems to be a straw man that online programs push. Instead, I'm talking about the learning that occurs with your mentor or with a senior person where they are actively engaged with you.

*edited to clarify*
 
Education is not limited to the classroom. I would argue (and have in the past) that the majority of doctoral training occurs outside of the classroom: research labs, collaborating with students in your cohort, practica training, etc.
Those elements have to stay. Research still has to be done in a lab, and you still have to do practica/internships, but there is no reason not to give people flexibility in terms of course content and/or where they do that lab work/practica.
 
How many times have students and graduates of high cost, poor outcome doctoral programs come here to defend their decisions or encourage others to make similar poor choices?

The same would likely hold true for students and graduates of online programs.

These posts rely on anecdotes and edge cases instead of modal experiences and outcomes. They are akin to advising prospective students to gamble their futures on long-shot outliers.

I can't answer that because I don't know and haven't seen online programs defended in here before now. I personally would like to hear their perspectives because I don't know anyone who graduated from an online program, so I think it could add to the discussion.

My advice was to find out about alumni of the OP's school to see if they had success in teaching positions and to find out how folks in here fared after graduating with an online degree. That is a fair barometer of whether the OP can meet the teaching career goal.

Generally, I agree with you and others about online programs. However, we can offer advice without shaming people in the process. Asking aftermidnight directly if he/she is from an online program after several people speak derisively about online programs and aftermidnight disagrees with you... interesting choice.

I can see why graduates of online programs wouldn't want to speak up.
 
However, we can offer advice without shaming people in the process.

I strongly disagree with this. There is a very big difference between shame, guilt, and embarrassment. The literature consistently indicates that a healthy response to shame is to fix the pointed out discordance between one's actions and one's idealized self.
 
As an aside, I enjoy when people who don't have doctorates in psychology explain what the training should look like.
I enjoy people being bitter because they had to do an extra 4-5 years of schooling. :D
 
I strongly disagree with this. There is a very big difference between shame, guilt, and embarrassment. The literature consistently indicates that a healthy response to shame is to fix the pointed out discordance between one's actions and one's idealized self.

Are you saying that putting down other groups of people in here is perfectly reasonable if they disagree with you?
 
Are you saying that putting down other groups of people in here is perfectly reasonable if they disagree with you?

Short answer: No. Everyone should be civil. But the only person responsible for one's reaction here is oneself.

Longer answer: There is a difference between logical discourse and one's response to it. Everyone is entitled to their opinions; however, not all opinions share the same degree of veracity. Ad hominems, being logical fallacies, demonstrate that the attacker's position is poorly thought out. Placing the locus of control for one's internal response on others is inconsistent with the literature. The literature indicates that shame results from a public depiction of a discordance between one's idealized self and one's actions. It also describes a healthy reaction to shame as taking action toward resolving the discordance, while an unhealthy/immature reaction is to self-aggrandize or attempt to discount the other.
 
Not that at all; it's just a great example of how inane the counter-argument is.
I have no real skin in this game. I'm a licensed psychologist in Canada at the master's level. No new legislation is going to impact me. The only thing I care about is that the provinces/states retain their right to dictate licensing standards... and that they don't cave to pressure from the APA/CPA. This is for two reasons. 1. People aren't exactly jumping at the opportunity to move to Alabama, Wyoming, or Manitoba to practice. They need flexibility. 2. I've seen no proof that additional training (beyond a certain point) produces significantly better clinicians.

In regard to #2:

This is from Science and Pseudoscience in Clinical Psychology...(taking some excerpts out)

Clinical lore suggests that psychologists and mental health professionals learn from experience by working with clients in clinical settings. Experienced clinicians are presumed to make more accurate and valid assessments of personality and psychopathology than less experienced graduate students and mental health providers. Similarly, presumed experts are assumed to be more competent providers of psychological interventions than other clinicians. Psychology training programs adhere to these assumptions, and common supervisory practices emphasize the value of experience in the development of competent clinicians. The inherent message to mental health trainees is that clinical acumen develops over time and with increased exposure to various clients and presenting problems.

Narrative reviews of clinical judgment have concluded that when clinicians are given identical sets of information, experienced clinicians are no more accurate than less experienced clinicians and graduate students, though they may be better at structuring judgment tasks (e.g., generating questions during an interview; Dawes, 1994; Garb, 1989, 1998, 2005; Garb & Schramke, 1996; Goldberg, 1968; Tracey, Wampold, Lichtenberg, & Goodyear, 2014; Wiggins, 1973; see also Meehl, 1997).

Similarly, a recent meta-analysis (Spengler et al., 2009) found only a small positive effect for training and experience.

The authors synthesized results from 75 clinical judgment studies. A finding they emphasized is that the combined effect of training and experience was small but positive (d = 0.12; this is equivalent to a correlation of about r = 0.06). Also, Spengler et al. concluded that having specific training and experience with a judgment task was unrelated to validity.

Experienced versus Less Experienced Clinicians

In conclusion, when clinicians are given identical sets of information, experienced clinicians are generally no more accurate than less experienced clinicians. When practitioners are required to search for information or decide what judgments should be made, experience may be related to validity for some judgment tasks.

Clinicians versus Trainees

Results have been no more promising when clinicians have been compared to trainees. In one study (Hannan et al., 2005; also see Whipple & Lambert, 2011, for additional details), 20 trainees and 20 licensed professionals at a university outpatient clinic were instructed to predict outcomes for clients they were seeing in counseling. In particular, they were instructed to predict which of their clients would be worse off at the end of treatment. Forty of 550 patients deteriorated by the end of treatment (as measured by the Outcome Questionnaire–45 [OQ-45]; Lambert, 2004). Only 3 of the 550 clients had been predicted by
their therapist to leave treatment worse off than when they began (one of the three predictions was correct). The experienced therapists did not identify a single client who had deteriorated.


Clinicians versus Graduate Students

Studies have revealed no differences in accuracy between experienced clinicians and graduate students when judgments are made on the basis of interview data (Anthony, 1968; Schinka & Sines, 1974), biographical and history information (Oskamp, 1965; Witteman & van den Bercken, 2007), behavioral observation data (Garner & Smith, 1976; Walker & Lewine, 1990), data from therapy sessions (Brenner & Howard, 1976), MMPI protocols (Chandler, 1970; Danet, 1965; Goldberg, 1965, 1968; Graham, 1967, 1971; Oskamp, 1962; Walters et al., 1988; Whitehead, 1985), projective-drawing protocols (Levenberg, 1975; Schaeffer, 1964; Stricker, 1967), Rorschach protocols (Gadol, 1969; Turner, 1966; Whitehead, 1985; see also Hunsley, Lee, Wood, & Taylor, Chapter 3, this volume), screening instruments for detecting neurological impairment (Goldberg, 1959; Leli & Filskov, 1981, 1984; Robiner, 1978), and all of the data that clinical and counseling psychologists usually have available in clinical practice (Johnston & McNeal, 1967).

Clinicians and Graduate Students versus Lay Judges

When given psychometric data, clinicians and graduate students were more accurate than lay judges (e.g., undergraduates, secretaries) depending on the type of test data. Psychologists were not more accurate than lay judges when they were given results from projective tests, including results from the Rorschach Inkblot Method and Human Figure Drawings (Cressen, 1975; Gadol, 1969; Hiler & Nesvig, 1965; Levenberg, 1975; Schaeffer, 1964; Walker & Linden, 1967). Nor were clinical psychologists more accurate than lay judges when the task was to use screening instruments (e.g., the Bender–Gestalt test) to detect neurological impairment (Goldberg, 1959; Leli & Filskov, 1981, 1984; Nadler, Fink, Shontz, & Brink, 1959; Robiner, 1978). For example, in one of these studies (Goldberg, 1959), clinical psychologists were not more accurate than their own secretaries. Finally, when given MMPI protocols, psychologists and graduate students were more accurate than lay judges (Aronson & Akamatsu, 1981; Goldberg, 1968; Oskamp, 1962). For example, Aronson and Akamatsu (1981) compared the ability of graduate and undergraduate students to perform Q-sorts to describe the personality characteristics of patients with psychiatric conditions on the basis of MMPI protocols. Students’ level of training differed in that graduate students had taken coursework in the MMPI and had some experience administering and/or interpreting the instrument, whereas undergraduates had only attended two lectures on the MMPI. Criterion ratings were based on family and patient interviews. Correlations between judges’ ratings and criterion ratings were .44 and .24 for graduate and undergraduate students’ ratings, respectively. Graduate student ratings were significantly more accurate than undergraduate ratings.

Scott O. Lilienfeld, Steven Jay Lynn, Jeffrey M. Lohr. Science and Pseudoscience in Clinical Psychology, Second Edition (p. 1). Guilford Publications. Kindle Edition.
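
As an aside for anyone who wants to sanity-check the numbers: here is a rough sketch of the arithmetic behind two of the figures quoted above, the d-to-r conversion from the Spengler et al. meta-analysis and the hit rates implied by the Hannan et al. (2005) counts. The effect size and counts come from the excerpt; the conversion formula is the standard one for roughly equal group sizes, and the derived percentages are my own back-of-the-envelope arithmetic, not the book's.

```python
# Back-of-the-envelope checks on two figures quoted in the excerpt above.
# All counts/effect sizes are taken from the quoted text; the arithmetic is mine.
import math

# Spengler et al. (2009): combined effect of training and experience, d = 0.12.
# Standard d-to-r conversion (assumes roughly equal group sizes): r = d / sqrt(d^2 + 4)
d = 0.12
r = d / math.sqrt(d**2 + 4)
print(f"d = {d} corresponds to r ≈ {r:.2f}")  # ≈ 0.06, matching the excerpt

# Hannan et al. (2005): 550 clients, 40 deteriorated, 3 predicted to deteriorate,
# and 1 of those 3 predictions was correct.
n_clients, n_deteriorated = 550, 40
n_predicted_worse, n_correct = 3, 1

base_rate = n_deteriorated / n_clients          # how often deterioration actually occurred
sensitivity = n_correct / n_deteriorated        # share of actual deteriorations that were flagged
precision = n_correct / n_predicted_worse       # share of "will get worse" calls that were right

print(f"base rate of deterioration: {base_rate:.1%}")               # ~7.3%
print(f"sensitivity of therapist predictions: {sensitivity:.1%}")   # ~2.5%
print(f"precision of therapist predictions: {precision:.1%}")       # ~33%
```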
 
Short answer: No. Everyone should be civil. But the only person responsible for one's reaction here is oneself.

Longer answer: There is a difference between logical discourse and one's response to it. Everyone is entitled to their opinions; however, not all opinions share the same degree of veracity. Ad hominems, being logical fallacies, demonstrate that the attacker's position is poorly thought out. Placing the locus of control for one's internal response on others is inconsistent with the literature. The literature indicates that shame results from a public depiction of a discordance between one's idealized self and one's actions. It also describes a healthy reaction to shame as taking action toward resolving the discordance, while an unhealthy/immature reaction is to self-aggrandize or attempt to discount the other.

Sure, I get the ad hominem point in terms of debate--attacking is problematic, and I agree that reactions are the responsibility of the people reacting. I'm just not sure where the "attack" occurred that you're referring to?

I do want to note that making process comments (about choice of words, patterns/trends, and communication styles) in conversations isn't an ad hominem. Under that argument, any time anyone reflected on the dynamics of the conversation itself, it would be construed as an attack.
 
Just attach copies of all those articles to your CV
 
Alex, I'll take confirmation bias for $1,000.

I sure hope the Daily Double is about practicing outside one's scope!
 
Sure, I get the ad hominem point in terms of debate--attacking is problematic, and I agree that reactions are the responsibility of the people reacting. I'm just not sure where the "attack" occurred that you're referring to?

I do want to note that making process comments (about choice of words, patterns/trends, and communication styles) in conversations isn't an ad hominem. Under that argument, any time anyone reflected on the dynamics of the conversation itself, it would be construed as an attack.

I was conflating "putting down" with "attacking". Maybe I got that wrong. It seemed both were shifting the subject from the action to the person/actor.
 
I enjoy people being bitter because they had to do an extra 4-5 years of schooling. :D

Bottom line: It's malpractice if you treat people & don't know what you're doing, and you try to do it anyhow (or compromise the patient's care b/c you don't know what you're doing). Even more so if some school hires you to teach others what you don't know how to do.

That 4-5 years can really come in handy when you're practicing clinical work, assessment, interventions, and research.

Maybe everyone's feeling chatty, but I'm wondering why this thread has gone on as long as it has. An online clinical doctorate is a waste of time and money (IMO) because the experiential learning is compromised. Sure... we can lower our standards, but doing so compromises one's integrity, reliability, and validity (need a non-clinical example? Just look at the political atmosphere in the U.S. to confirm the potential dastardly outcomes of lowering standards). o_O
 
Bottom line: It's malpractice if you treat people & don't know what you're doing, and you try to do it anyhow (or compromise the patient's care b/c you don't know what you're doing). Even more so if some school hires you to teach others what you don't know how to do.

I've posted research backing up my claims... you've said some words that don't mean anything.
 
Maybe everyone's feeling chatty, but I'm wondering why this thread has gone on as long as it has. An online clinical doctorate is a waste of time and money (IMO) because the experiential learning is compromised. Sure... we can lower our standards, but doing so compromises one's integrity, reliability, and validity (need a non-clinical example? Just look at the political atmosphere in the U.S. to confirm the potential dastardly outcomes of lowering standards). o_O

Maybe the thread is here because there is free speech and people are allowed to disagree? Maybe not in your world.

The facts state that you wasted 4-5 years of life in schooling, and in many aspects, are no better in assessment than a random person off the street.
 
Shontz, & Brink, 1959; Robiner, 1978). For example, in one of these studies (Goldberg, 1959), clinical psychologists were not more accurate than their own secretaries. Finally, when given MMPI protocols, psychologists and graduate students were more accurate than lay judges (Aronson & Akamatsu, 1981; Goldberg, 1968; Oskamp, 1962). For example, Aronson and Akamatsu (1981) compared the ability of graduate and undergraduate students to perform Q-sorts to describe the personality characteristics of patients with psychiatric conditions on the basis of MMPI protocols. Students’ level of training differed in that graduate students had taken coursework in the MMPI and had some experience administering and/or interpreting the instrument, whereas undergraduates had only attended two lectures on the MMPI. Criterion ratings were based on family and patient interviews. Correlations between judges’ ratings and criterion ratings were .44 and .24 for graduate and undergraduate students’ ratings, respectively. Graduate student ratings were significantly more accurate than undergraduate ratings.

I did look up the original 1959 paper you cherry-picked,* and this line was priceless: "To increase the judges' involvement in this diagnostic task, a bottle of Scotch was offered to the judge who performed most accurately."

Maybe the problem is we're not sufficiently incentivized with booze. :laugh:

* The study compared psychologists (n=4), trainees (n=10), and non-psychologists (n=8) on their "accuracy" in diagnosing cortical damage when given nothing more than the results of a single test (Bender Visual-Motor Gestalt Test). The experimenters pulled completed protocols from 30 patients (15 with brain damage and 15 without). The test was administered to the non-psychologists beforehand so that they would know what it's like to take the test. When using the test results alone (i.e., impressions of drawings), the three groups performed comparably, correctly diagnosing about 2/3 of patients with "organic brain damage." The non-psychologists were significantly more confident in their diagnoses than trainees or psychologists. As a final note, the experimenters recruited "one of the country's foremost authorities on the Bender test" and he outperformed them all, with 83% correctly classified from test results alone.
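
Out of curiosity, here's a quick sketch (my own, not the paper's) of how those accuracy figures compare to coin-flipping, given 30 protocols split 15/15. It assumes each judgment can be treated as an independent trial, which the original design may not strictly justify.

```python
# Rough sanity check (mine, not the paper's) on the Goldberg (1959) figures above:
# 30 protocols, split 15 brain-damaged / 15 not, so blind guessing averages 50%.
# "About 2/3" and "83%" accuracy come from the post summary above.
import math

def tail_prob_vs_chance(hits: int, n: int) -> float:
    """P(X >= hits) for X ~ Binomial(n, 0.5): the chance of doing this well by guessing."""
    return sum(math.comb(n, k) for k in range(hits, n + 1)) / 2 ** n

n_protocols = 30
judges = {
    "typical judge (psychologist, trainee, or non-psychologist)": 20,  # ~2/3 of 30
    "recruited Bender expert": 25,                                     # ~83% of 30
}

for label, hits in judges.items():
    p = tail_prob_vs_chance(hits, n_protocols)
    print(f"{label}: {hits}/{n_protocols} correct, chance of guessing this well = {p:.4f}")
```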
 
And suddenly, no recruitment problem.

The real question: how did they ONLY recruit that many folks?

My fantasy is that it started with a friendly bet on a slow afternoon. That's a better premise than "let's prove that trying to diagnose brain injury by looking at people's drawings alone would be a bad use of a psychologist's time."
 
Sorely tempted to submit a replication study to my IRB. Just to see the response.
 
You guys have nothing.

This is sad now.
 
We have doctorates.


In all seriousness, I just don't have the energy to explain why you aren't competent at everything just because you have a master's degree. You don't seem to get it when people have explained it before, so why would this time be any different?
 
We have doctorates.


In all seriousness, I just don't have the energy to explain why you aren't competent at everything just because you have a master's degree. You don't seem to get it when people have explained it before, so why would this time be any different?

You have plenty of energy to make posts that have no research behind them. Plenty of time to make empty statements. Plenty of time to deflect.

You guys have asked me for research to back up my assertions, you then get it, and all you can do is deflect.

I don't know who you think you're trying to fool.
 
I did look up the original 1959 paper you cherry-picked,* and this line was priceless: "To increase the judges' involvement in this diagnostic task, a bottle of Scotch was offered to the judge who performed most accurately."

Maybe the problem is we're not sufficiently incentivized with booze. :laugh:

* The study compared psychologists (n=4), trainees (n=10), and non-psychologists (n=8) on their "accuracy" in diagnosing cortical damage when given nothing more than the results of a single test (Bender Visual-Motor Gestalt Test). The experimenters pulled completed protocols from 30 patients (15 with brain damage and 15 without). The test was administered to the non-psychologists beforehand so that they would know what it's like to take the test. When using the test results alone (i.e., impressions of drawings), the three groups performed comparably, correctly diagnosing about 2/3 of patients with "organic brain damage." The non-psychologists were significantly more confident in their diagnoses than trainees or psychologists. As a final note, the experimenters recruited "one of the country's foremost authorities on the Bender test" and he outperformed them all, with 83% correctly classified from test results alone.

Have you read the book?

Cherry-picked?

I literally quoted the summaries of all the studies that these authors thought were directly related to the question of assessment validity. Before these summaries, they wrote about past research that included studies that were not necessarily related to the validity of assessment.

I quoted their conclusions after they went through the work of deciding which studies could and should be included in the assessment.

Many of those studies are recent.
 
This debate (master's vs. doctorate) is one I see come up often.

To be fair, doctoral graduates aren't considered competent in every domain available practice-wise either. That's what consultation, supervision, and workshops are for, which are available to master's grads and doctorate-holders.

However, having pursued a master's degree at a different university than my doctoral program, I'm concerned that they considered me ready for practice after so little training and clinical work in the former. It required ONE semester of clinical practice. One semester and I graduated. That program was terrible for preparing practitioners.

Having said that, there are great therapists/practitioners at both levels and crappy ones at both levels. Good programs can also graduate bad practitioners. I think overall skill level coming in, the ability to further develop skills, and openness to continued growth/self-reflection are crucial for "good" therapists, regardless of level of education. If you feel like you know everything there is to know about psychotherapy and yourself and can stop learning/growing/self-reflecting (i.e., hubris), to me that's a huge red flag in a practitioner at any stage of practice.

Ultimately, I think quality and quantity of training matter, but other factors do too (ability to form a strong alliance/interpersonal skills, adherence, etc. and the other factors discussed in Wampold's Great Psychotherapy Debate if we're talking therapy outcomes). It isn't necessarily just the type of degree that matters.
 
Have you read the book?

Cherry-picked?

I literally quoted the summaries of all the studies that these authors thought were directly related to the question of assessment validity. Before these summaries, they wrote about past research that included studies that were not necessarily related to the validity of assessment.

I quoted their conclusions after they went through the work of deciding which studies could and should be included in the assessment.

Many of those studies are recent.

No, I didn't read the book. You highlighted in bold a phrase from the book stating that psychologists were no better than their secretaries at assessment. Rather than take this at face value, I pulled up the original published study from 1959. The research question is silly. Or at least it is in 2017. The study has no external validity because literally no one uses tests this way.

This is a roadmap for evaluating all such claims. What I did took maybe 10 minutes (full disclosure: I have access to PsycARTICLES). Go look up the work, really understand it for yourself, and you'll find it harder to make such bold generalizations. Plus, you might get a chuckle about how people used to recruit participants.
 
Oh my dear, such conflict! And on the internet, which is usually such a bastion of civility!

Based on the original post, the OP clearly has enough clinical training and the licenses/degrees required to practice in their area of training. Their stated goals were:
1) "expand my knowledge base"
2) "obtain a collegiate teaching position"
3) "continue in practice"

#1 you can certainly do with an online degree, a B&M degree, taking courses on an ad-hoc basis, doing online MOOC courses, attending workshops, etc. This training goal doesn't require a specific set of initials at the end of it, and can be accomplished a variety of ways. If you want to do this, it's up to you to research whether specific programs give you the type of expertise and knowledge you want.

#3 is basically the same - you're already practicing, so no reason more or different training would preclude you from continuing to practice.

#2 is tricky. I can see three possibilities: you want a teaching position in a clinically-oriented program that is not doctoral (LCSW, MSW, etc), you want a teaching position in a doctoral clinically-oriented program (clinical/counseling/school psychology PhD programs), or you want to teach in a program that isn't tied directly to clinical practice/licensure (say a developmental psychology master's program). The first two scenarios have, I think, been addressed by other posters - if you want to teach in an MSW program and you've got an MSW, getting a PhD in a non-clinical but related field probably won't give you too much of a leg up to be worth the cost. And if you want to teach in a PhD program that is or could be accredited by APA/CPA, you will definitely run into the view that online programs aren't great, because those PhD programs require you to have lots of clinical experiences during training, and it's hard-to-impossible to do that through an online program. Plus, getting a non-clinical PhD isn't likely to boost your odds of being able to teach in a clinical PhD program regardless.

Now, if you want to teach in a program that ISN'T tied to clinical practice (like a developmental master's program), then such additional training could, maybe, help you get a teaching job, but the folks in this forum aren't going to be the best people to help you answer that question, because pretty much everyone in the forum is in a clinical field. So, to be honest, I don't know if programs that do graduate training in psychology but are non-clinical would like you more if you have a PhD from an online school, compared to your current levels of expertise/training or compared to a PhD from a B&M school. Your best bet, if you want to pursue this route, is to look up the types of programs you'd want to work in, look at their faculty listings, and see where their degrees are from and what type of degrees they have. That'll give you a better sense of whether what YOU want to do specifically is possible through an online doctorate.

If relevant, I know that my clinical psych grad program would occasionally have practitioners from the community come in to teach a specific course on an adjunct basis on their area of expertise. In our case, these folks were typically PhDs, but it wouldn't shock me if you could adjunct for a specific course without a PhD. For example, if you have a specialty certification in sex offender treatment, as you listed above, perhaps you could reach out to a nearby university to see if there is interest in their program in having someone be able to provide a course on treatment in forensic settings more generally? That might get you to the same outcome with less financial and time costs to you.

Good luck, OP!
 
No, I didn't read the book.

I have read the book. I am not in disagreement that when it comes to most psychological studies, there will be questions about methodology. We could sit here day and night and hash it out. But I did not cherry-pick the book.

I posted the authors' summaries after they had, in their estimation, separated the studies that were directly related to the validity of psychological assessment from the ones that were not. Cherry-picking would have been selecting some of the research they discussed before the part I quoted, because, in their estimation, those other studies were not directly related to the question of assessment validity but had been used in the past as an argument against psychologists.
 
I have read the book. I have friends who worked with him in research labs at Emory. Your extrapolation that it supports less training/less rigor in psychology as a legitimate thing is a misinterpretation of the thesis of his work (that book or ANY of his work). If that's what you got from his work, you need to read his stuff again. Several times.
 
I have read the book. I have friends who worked with him in research labs at Emory. Your extrapolation that it supports less training/less rigor in psychology as a legitimate thing is a misinterpretation of the thesis of his work (that book or ANY of his work). If that's what you got from his work, you need to read his stuff again. Several times.
na
 
I have read the book. I have friends who worked with him in research labs at Emory. Your extrapolation that it supports less training/less rigor in psychology as a legitimate thing is a misinterpretation of the thesis of his work (that book or ANY of his work). If that's what you got from his work, you need to read his stuff again. Several times.

That is not my interpretation of his work.

His thesis does not support less training/less rigor, but it supports the notion that, at least currently, the extra years are not making any difference in terms of better clinical judgment. And this has always made sense to me... because many of the assessments themselves are pure ****. Only the cognitive/neuro tests give me any real confidence. I love the WISC. I love a lot of the instruments/measures neuropsychologists use. It makes sense why laypeople would come up with the same diagnosis as a psychologist when self-report instruments come into play.
 
Sigh. This is why I just insert haphazard sarcasm when I talk to you instead of responding with any depth.

Ok, from the top.
1. You are conflating research on 'clinical outcomes' with 'clinical judgement'. The latter focuses more on the work of Meehl and actuarial assessment practices outperforming clinician intuition. The former focuses on mechanisms of change or competence discussions. If you want to talk about actuarial methods, then you are going to have a hard time arguing that you have sufficient training to demonstrate competence in assessment in the, what, one (?) class you took in your master's degree? Since my primary area is psychological assessment, I would enjoy laughing at that 'debate'. Saying you see value in well-validated actuarial methods and not clinical judgement (a view supported by the book you are citing) and then saying assessment is meaningless is beyond confusing. If you want to talk about the latter, at least use the correct term and fake like you understand the literature. Either way, stop conflating arguments.

2. You missed the context of what his argument was. It wasn't that 'more training means nothing for outcomes'; it was that people are prone to adopt poorly validated practices based on practicum and clinical supervision. He wants stronger didactic coursework instead of relying on people going into the field with insufficient training and then getting 'supervision' from which they use their 'clinical judgement' to ensure they are being good clinicians. This actually works against your argument and, as best I can tell from what you've said, your career path. What we know (and what he mentions; you can go to page 23 if you're following along in the text) is that specialized training results in people being able to outperform others with non-specialized training. Let me say that again. Specialized training (e.g., more intensive graduate training) makes you better through didactics and coursework, NOT clinical experience. Or, as he says, "Obviously, longitudinal results that show that graduate students become more accurate after didactic training but not after training at a practicum site also suggest that training is of value but that it can be difficult to learn from clinical experience" (p. 24). He gives the example of neuropsychologists. He also gives an example of forensic psychologists.

3. To keep this thread topical, the entire point is that rigor in training programs is needed. This is where you came in to chime on and on about how online programs were going to replace B&M ones and how there are no differences in outcomes. This morphed into 'training doesn't make better clinicians', yet by your own sacred source, the argument for rigor is the most important one. Traditional training programs are better than online programs for this reason.

Anyway, I've wasted enough time with this. Stay gold, Ponyboy.
 
2. You missed the context of what his argument was. It wasn't that 'more training means nothing for outcomes'; it was that people are prone to adopt poorly validated practices based on practicum and clinical supervision. He wants stronger didactic coursework instead of relying on people going into the field with insufficient training and then getting 'supervision' from which they use their 'clinical judgement' to ensure they are being good clinicians.

I get that. But there is clearly a problem when people spend 4 years in a psych BA/BSc, then another 4 to 6 years in a PhD program... doing a combo of coursework, research, and clinical experience... and the conclusion is that there is not enough 'rigor'. At some point you have to start questioning the utility of some of the instruments being taught, and start questioning why some areas of psychology (generalist clinical psychologists) seem more prone to using their intuition. I'm not disagreeing that relying on instruments is almost always better than using clinical intuition, but obviously that is even more the case when the instruments are better (i.e., neuropsych).

Could it be possible that some psychologists are using 'poor practices' because they don't believe in the instruments they use (i.e., self-report measures)? Is it possible that they perceive that they come to the same conclusions whether they use the instruments or not? That they are unwilling to waste their time/money using an instrument when it appears pointless? Is it possible that when neuropsychologists use their instruments, they actually find major utility and discover things that would be impossible with intuition alone?
 
This actually works against your argument and, as best I can tell from what you've said, your career path.
I had 4 assessment-based courses: a psychopathology course, academic/language assessment, cognitive assessment, and behavioral/social assessment, plus practica in all those areas, internship, etc.
 
Could it be possible that some psychologists are using 'poor practices' because they don't believe in the instruments they use (i.e., self-report measures)? Is it possible that they perceive that they come to the same conclusions whether they use the instruments or not?
They could also be poorly trained, so they don't use the instruments properly. Establishing and vigorously supporting only proven training and standards can guard against poor training. There are already too many sub-par programs and too many psychologists (albeit not evenly distributed across the country), so the worst programs should be cut.
 