From a practical standpoint you may be right, but I don't envision great outcomes for people who skip stats coursework in graduate school and rely only on self-study. There is still a component of "you don't know what you don't know" that (ideally) gets hashed out through the process of graduate school. There might be some really gifted students out there who can pick it up easily, but I wouldn't trust their capabilities as a psychologist without some metric demonstrating a minimal level of understanding. If you don't take the initiative to learn basic statistics, regression, multivariate analysis, factor analysis, SEM, and HLM, then how would you learn to evaluate the quality of a paper that discusses things you might want to apply in your clinical practice?
My experience with stats courses, again, was mostly applied, and we used our own datasets. I don't think I would have become as strong a researcher without that foundation. If the classes had been crap, then maybe I would feel differently. I do disagree with your idea that a course can't teach you enough to publish in an area - my experience was a bit of the opposite. Aside from the fact that clinical journals sometimes publish papers with crappy stats, it's not as if those papers are reporting on all of their assumptions, etc., either. I've just seen too much crap, both in published work and as a reviewer, to have that much faith.