@psych844 , it was hard to follow this thread given the ridiculous amount of off-topic criticism, but this seems to be one of the more important points you're getting stuck on:
This is more a problem with the disability system than with psychologists' best practices, IMO. In most cases (with notable exceptions, e.g., forensic or disability claims-specific evals), it is simply not the job of a psychologist to determine with 100% accuracy whether someone is lying about or exaggerating their symptoms. The same is true of medical evaluations in other specialties; it's not at all unique to mental health. The goal is usually to help people, and we can't devote all of our attention to whether someone might be lying at every moment.

However, even in cases where it IS the psychologist's specific job to help determine the validity of symptoms, they sometimes can't even use symptom validity tests (SVTs), or if they do, the SSA doesn't care and doesn't use the results to determine disability status. For example, I know there has been a push over the past few years in the U.S. for the SSA to include SVTs in disability evaluations, because right now THEY don't want them (or don't want to pay for them) in the psychological evaluations they request. This may have changed or been updated recently, so I'd suggest you google it if you're interested. Point being: how the heck are psychologists supposed to prevent disability claims fraud if the government doesn't even want us to use basic tools to help rule out malingering? The burden can't fall entirely on psychologists when the system itself is partially broken. I'm sure other posters on this forum can speak to these issues more accurately than I can, though, so if others would like to offer clarification or dissenting opinions, please do.
Re: objective tests vs. clinical interviewing: Validity is an interesting and integral part of diagnosis (hence the use of, and interest in, SVTs/PVTs). But these are tools that merely add to the clinical picture (perhaps a degree of certainty) in specific situations. These tests and other diagnostic indicators (objective testing, etc.) are never 100% accurate. As erg has repeatedly reminded you, very little in this field ever is, because of the enormous interactional complexity of what we're dealing with. Rather than keep framing this as something to criticize, why not see it as a strength? We study and help treat incredibly complex and difficult problems that will ALWAYS involve a degree of uncertainty. As a new student, this can be hard to understand or accept, but I urge you to see this aspect of our field as a strength (e.g., the clinical interview as a more holistic approach to assessment) rather than a weakness stemming from a lack of "certainty" in "objective" measures. This admiration for "objective" testing is itself flawed and dangerous, because failing to appreciate ANY test's level of uncertainty can easily lead to over-reliance or misuse.
I hope you can learn to apply the scientific skepticism you seem to hold in such high regard in a more productive and proactive way. Right now it looks more like a hindrance to your professional development, as evidenced by the attitude in your posts. I hope you can turn this around, as it will make any potential future studies in this field far more eye-opening and rewarding.