I recently read an article published in the August 2008 edition of Psychology, Public Policy, and Law, entitled "Clinician Variation in Findings of Competence to Stand Trial" (find abstract here). The authors of the study (Daniel Murrie, Marcus Boccaccini, Patricia Zapf, Janet Warren, and Craig Henderson) note that while most research indicates 20-30% of defendants evaluated for competency to stand trial are found incompetent, there has been little research examining individual patterns among forensic evaluators. That is, do most evaluators tend to cluster within the 20-30% range, or is there a large spread with lots of outliers? As the authors state, this question is extremely important for the integrity of the legal process. While issues related to mental health as it applies to the law can be quite complex, there should be at least a fair amount of reliability in the process of evaluating defendants. The purpose of evaluating a defendant for competency to stand trial is to ensure that the defendant receives a fair trial, and to reduce the possibility of an erroneous legal finding. As such, the concept of competency is a defined one, and there should not be excessive variability in findings. Obviously, in the more difficult and complex cases there may be disagreements, but as the article notes, what we don’t want is for the result of an evaluation to boil down to who is doing the examining. The authors review the research to this point, which indicates that, generally speaking, overall findings regarding competency are reliable. Still, the lack of data on individual variability prompted their study.
In this study, the authors examined the findings of competency evaluations in two states: Virginia (55 clinicians total) and Alabama (5 total), with a different research focus for each group. They discussed the professionals allowed to conduct evaluations (primarily psychiatrists, psychologists, and social workers) and reported other pertinent details as well. For example, both states require completion of a specialized training course before a clinician is allowed to conduct these evaluations. In addition, the authors limited the evaluators examined in both states to those who had completed at least 20 evaluations, in order to prevent small sample sizes from skewing the results, to ensure that the evaluators being reviewed had sufficient experience in this area, and to allow for an examination of individual patterns.
Without going into extensive detail regarding the statistics used, I’ll just point out a few of the findings I found interesting. Within the Virginia sample, there was significant variability in individual clinicians’ findings. The authors note that almost half of the evaluators had rates of incompetence findings between 10% and 30%, which means more than half of the clinicians found defendants incompetent at a rate either below 10% (18 out of 55, or 32.7%) or above 30% (18.2%). The variation was considerable at both extremes: one evaluator who conducted 20 evaluations did not have a single finding of incompetency, while three evaluators had rates higher than 50% (each with at least 20 evaluations conducted).
The article also examines 15 evaluators in Virginia who had each completed at least 100 evaluations. The mean rate for this group was 16.1%, but there was variability within this more experienced group as well. Three of these evaluators had rates below 7%, while three others had rates above 25%. One evaluator with over 300 evaluations completed had found only 4.2% of defendants incompetent.
The Virginia sample also revealed differences in rates based on the evaluator’s profession. Specifically, four social workers were included in the study; their mean rate of finding incompetence was 46.1%. Conversely, nine psychiatrists were in the study: seven had rates below 8%, one had a rate of 20%, and one had a rate of 62.5%. Psychologists fell between these two groups, with a mean rate of 20%.
The authors statistically examined the variance in the sample, in order to assess how much of the difference among these evaluators’ rates of finding incompetence versus competence was due to the evaluators themselves, rather than to other sources of variance. The authors note:
"These other sources of variance might include differences in evaluator training, methods used to conduct competence evaluations, party requesting the evaluation (prosecution or defense), individual differences among the defendants who were evaluated, or other sources of systematic or random error. An ideal study would be designed so that all of these potential sources of error could be estimated."
Statistical analyses indicated a significant amount of variance in these differing rates due to differences among the evaluators, above and beyond the other sources noted above. The proportion of the variance attributable to evaluators was calculated to be 12.1%.
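To make the "proportion of variance attributable to evaluators" idea concrete, here is a minimal sketch on simulated data. This is not the authors' actual analysis (they used a more sophisticated multilevel model); it simply illustrates, with hypothetical evaluators and made-up base rates, how one can decompose the variance in binary findings into a between-evaluator share and a within-evaluator share:

```python
import random

random.seed(42)

# Hypothetical setup: 30 evaluators, each with their own underlying
# rate of finding incompetence. The spread between these rates is the
# "evaluator effect" the study set out to measure.
evaluator_rates = [random.uniform(0.05, 0.45) for _ in range(30)]

# Each evaluator completes 100 evaluations; finding 1 = incompetent.
findings = []  # list of (evaluator_index, finding) pairs
for i, rate in enumerate(evaluator_rates):
    for _ in range(100):
        findings.append((i, 1 if random.random() < rate else 0))

# One-way variance decomposition: how much of the total variance in
# findings lies between evaluators rather than within them?
overall_mean = sum(f for _, f in findings) / len(findings)
total_var = sum((f - overall_mean) ** 2 for _, f in findings) / len(findings)

by_evaluator = {}
for i, f in findings:
    by_evaluator.setdefault(i, []).append(f)

between_var = sum(
    len(fs) * (sum(fs) / len(fs) - overall_mean) ** 2
    for fs in by_evaluator.values()
) / len(findings)

share = between_var / total_var
print(f"Proportion of variance between evaluators: {share:.1%}")
```

The 12.1% figure reported by the authors is analogous to `share` here: the fraction of the outcome variance that tracks who did the evaluating rather than anything about the case itself.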
In another model, evaluator profession was found to be a statistically significant predictor of competence/incompetence findings. Social workers were found to be 3.51 times more likely to find a defendant incompetent than a psychologist; psychologists were 2.04 times more likely to find a defendant incompetent than a psychiatrist.
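Treating the reported figures as odds ratios from a logistic-type model (an assumption on my part; the blog does not specify the model form), a quick calculation shows how they translate into probabilities. The 10% psychiatrist baseline below is purely illustrative, not a number from the study:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

# Hypothetical baseline: a psychiatrist finds 10% of defendants incompetent.
p_psychiatrist = 0.10

# Psychologists: odds 2.04 times the psychiatrist odds.
p_psychologist = prob(2.04 * odds(p_psychiatrist))

# Social workers: odds 3.51 times the psychologist odds.
p_social_worker = prob(3.51 * odds(p_psychologist))

print(f"psychiatrist:  {p_psychiatrist:.1%}")   # 10.0%
print(f"psychologist:  {p_psychologist:.1%}")   # 18.5%
print(f"social worker: {p_social_worker:.1%}")  # 44.3%
```

Notably, under this illustrative baseline the implied rates land close to the profession-level means reported earlier (roughly 20% for psychologists and 46.1% for social workers), which suggests the reported ratios are consistent with those raw rates.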
Another finding concerned the presence of psychosis in the defendant’s presentation: it appeared that at least some evaluators were equating psychosis with incompetence. Due to the specifics of the available data, the authors examined the findings of evaluators who had completed at least 100 evaluations. They found that, with the exception of the two Virginia psychiatrists, all evaluators found defendants with a psychotic disorder incompetent at a rate higher than 25%. On average, the 11 Virginia psychologists found 39.4% of defendants with a psychotic disorder incompetent. In Alabama, both evaluators found defendants with a psychotic disorder incompetent at a rate higher than 50%.
The authors note multiple possibilities that might further explain some of the discrepancies in evaluators’ incompetence rates, including referral sources (e.g., inpatient versus outpatient), system characteristics (for example, if one correctional facility has particularly strong mental health services, its defendants might be found competent more frequently due to better treatment), and professional discipline. The discussion section of the article raises various other issues that bear on the variance question.
Overall, I found this article to be well considered in the questions it raised. On a macro level, it points to the ongoing issue of variance, even within an area of mental health in which practitioners want accuracy. As the authors point out, the Court’s decision regarding a defendant’s competence should not simply be the result of which evaluator the defendant is assigned. On a more individual level, the article reminds clinicians that they ought to be mindful of any unusual patterns that develop in the course of their work, with the first step being self-awareness. That is, if an individual clinician is arriving at a pattern of conclusions outside the norm, then he or she ought to consider why. If the explanation is reasonable, fine. However, a lack of awareness can often lead to, at the very least, a blind spot in one’s clinical work. In this case, if an evaluator has conducted 100 competency evaluations and has found 50 of the defendants incompetent, there is a legitimate question to answer, given that 20-30% is the generally accepted norm. Is the reason based on particular circumstances? It could be; maybe you only evaluate individuals who have a particular (and significantly debilitating) condition. If not, why are your findings outside the norm? That is a reasonable question to ask periodically, regardless of one’s profession.