What this myth highlights is a lack of understanding regarding an evaluation of competency. The evaluation is, legally speaking, simply an opinion (though an informed, expert one). Ultimately, the psychologist is not the decision-maker: the judge is. The psychologist does much of the work in terms of assessment, information gathering, and educating the court on the mental health issues involved, but the judge answers the legal question.
Now, in practice, the report provided to the court by the evaluator is generally the prime piece of evidence considered by the judge when making his or her decision. But the judge is not obligated to accept the opinion of the evaluator. Further, the findings of the evaluator are subject to challenge by all relevant parties. The evaluator may be called to testify, and will often be required to discuss issues such as methods, diagnostic considerations, or even qualifications. Even if all of that is found to be adequate, the conclusions may still be questioned. Ultimately, even if the entire report is sound, the judge can still deviate from it. This, of course, is most likely to happen when more than one evaluation was conducted and the evaluators reached differing opinions.
Recent criticism, however, has come from the other direction: that judges are "rubber stamping" evaluations. Many studies of the correlation between evaluators' opinions and judges' competency decisions have found, at times, agreement at or above 90%. Some criticize this as judges abdicating their responsibility as the arbiters of competency, and I'm sure that in some cases, judges would prefer to simply go with what the expert states. I don't really see much of a problem with this, as long as the expert actually knows what they are doing. But, as I cited in the last myth post, I have personally seen examples of psychologists completing competency evaluations who don't know a damned thing about competency.
However, I do think that is changing. The procedures evaluators are using are getting significantly better, both through improved education (including specialization in forensics) and through improved techniques (such as the development of tests and structured interviews that provide extremely useful data when assessing a defendant's competence). This improvement likely accounts for the high rate of agreement between evaluators and judges. From personal experience, most competency evaluations are actually pretty easy calls, one way or the other. The "in-betweeners" can be challenging, but fortunately, they are infrequent.