My friend Dale Larson and his colleague William T. Hoyt have debunked the common notion that grief counseling often makes things worse rather than better. Their new review of the scientific literature appears in the August issue of Professional Psychology: Research and Practice, published by the American Psychological Association (APA). You can read both the APA press release about it and the full text of the article.
There is a lot of fuzzy thinking in this field, and this study is a nice reminder that evidence matters. The flap is about “TIDE” (no, not the laundry product). In this context, TIDE refers to treatment-induced deterioration effects. In other words, does seeing a grief therapist make things worse?
According to Larson and Hoyt, "A pessimistic view of grief counseling has emerged over the last 7 years, exemplified by R. A. Neimeyer’s (2000, p. 541) oft-cited claim that 'such interventions are typically ineffective, and perhaps even deleterious, at least for persons experiencing a normal bereavement'". Neimeyer reported the alarming finding that, averaging over all studies that provided the necessary TIDE information, "nearly 38% of recipients of grief counseling theoretically would have fared better if assigned to the no-treatment condition". The results for “normal” grievers (as opposed to those who were "traumatically bereaved") were even more alarming, with nearly one in two clients suffering negative effects as a result of treatment.
If this is true, having a cup of herbal tea would be cheaper and safer than seeing your grief counselor. These findings were quickly accepted as gospel and disseminated with lemming-like enthusiasm in the professional literature. Unfortunately, no one bothered to check the data behind these claims until now.
The APA press release on the new Larson and Hoyt article sums it up by saying that "Despite frequent claims to the contrary, there is no empirical or statistical evidence to suggest that grief counseling is harmful to clients, or that clients who are normally bereaved are at special risk if they receive grief counseling.... (Larson and Hoyt)... found that the data on which these negative views are based have never been published and came from a student dissertation that was never peer-reviewed, using a statistical technique attributed to another student’s master’s thesis, also never peer-reviewed."
Assuming that the analysis by Larson and Hoyt is correct, how is it possible that the dubious 2000 findings could have risen so quickly to be a popular mantra about grief support? It is not enough to simply criticize the sloppy initial publication. Why did it take this long for someone to notice problems with the work? The damage caused by the publication and repeated citation of the bogus TIDE claims is a case study of how the ideal of careful peer review broke down in this social science field.
Only recently, a separate post hoc blind review of the statistical methods was done by the APA, long after the widespread negative impact of the alarming findings had taken hold. The conclusion was that the methods were unreliable and that the findings cited by Neimeyer are "seriously flawed" (Larson and Hoyt, p. 349).
So how did this junk get past the original peer reviewers in 2000? Apparently none of those reviewers examined the original dissertation research. Instead, they trusted Neimeyer's summary of the dissertation findings. Once Neimeyer's summary was in the literature no one ever returned to the original empirical report to challenge its validity. This is like a hapless literature student who relies on the CliffsNotes for Moby Dick since it is an easier read than the original. That’s understandable, but I expect more from a peer reviewer.
Larson and Hoyt point out that "One factor that made the TIDE claims resistant to critical review was the practice of citing Neimeyer’s peer-reviewed summary article, rather than Fortner’s dissertation (which contained the data on which the claims were based).... The phrase 'novel statistical procedure,' which to a methodologist may be a red flag, appears in this instance to have lent added legitimacy to the findings among statistically naïve readers. If readers (and even reviewers!) lack the statistical training to evaluate new methodologies, this greatly enhances the chances for invalid findings, or misinterpretations of valid findings, to enter the published literature."
Larson and Hoyt (p. 353) suggest that scholars who don’t bother to read the sources they cite are "participating not in the conventions of scientific scholarship but rather in a sort of grown-up version of the children's game of 'telephone.' In this game, receivers later in the chain of communication never hear directly from the source but only from those just preceding them in the chain. In the grown-up version of the game, just as in the children’s version, there is every likelihood of serious distortion of the original message."
If you want to play telephone with Dale Larson you can probably reach him via his web site.