Always nice to get a little Alan Moore / Juvenal nod into the title if you can.
The Family Justice Council report on the quality of expert psychologists used in care proceedings (as trailed on Channel 4 News) is up.
You can find it at http://www.uclan.ac.uk/news/files/FINALVERSIONFEB2012.pdf
They looked at 126 reports from 3 courts, and used four independent assessors to judge the quality of the reports against the guidance of the CPR, against a piece of American caselaw (which, I have to confess, was unfamiliar to me until today) giving guidance on the construction of expert reports, and against their own views as to the quality of the report. They found, as you may have heard, that:
One fifth of instructed psychologists were not deemed qualified on the basis of their submitted Curriculum Vitae, even on the most basic of applied criteria. Only around one tenth of instructed experts maintained clinical practice external to the provision of expert witness work. Two thirds of the reports reviewed were rated as “poor” or “very poor”, with one third between good and excellent.
Without wishing to be unkind, my preliminary view is that they’d obviously got a particularly strong batch. I have found most psychological reports to be a blend of regurgitation of information already found elsewhere, a statement of the bleeding obvious, recommendations plucked from thin air and, if you’re particularly lucky, a hefty dose of God Complex thrown into the mix. [I would add, however, that if you get a really good psychological report, it sings, and makes the gulf in quality even more visible. I’ve got a few psychologists who are always snowed under and have huge timescales, but who always, without fail, produce a report that adds something worthwhile to the process. Sadly, their numbers are dwarfed by the people who tell you very little, and take 160 pages to do it.]
Here are some of the particular issues that the report considers have been problematic with psychological assessments:
Research has identified a range of criticisms of psychological reports in general. These include occasions where:

- Psychological evidence has been presented as scientific fact when in fact it is speculation and conjecture;
- There has been an absence of psychological theory;
- Evidence has been provided concerning concepts which are not accepted within the field and have not been demonstrated empirically. At times this has had a negative impact on the outcomes of proceedings (e.g. with one of the most heavily criticized concepts being that of “recovered memory”);
- There has been a failure to provide evidence which is outside the knowledge of the typical judge or juror;
- Psychometric evidence has been submitted as scientific fact when it does not meet the criteria for this (e.g. the Daubert criterion). Rather the evidence has represented specialised knowledge at most, with some submitted psychometric evidence based on research and not clinical assessment tools;
- An over-use of psychometrics, not all of which are applicable to the case being assessed. Over-use of jargon and speculation, with poor content and style and a failure to include the data from where inferences are drawn;
- The credibility of the source has not been included, with no attempt made to evaluate the reliability and validity of the methods used to collect data;
- Psychological risk assessments have focused on first and second generation approaches (e.g. unstructured clinical and actuarial) as opposed to the more reliable and valid third generation approaches (structured clinical, with or without actuarial anchoring);
- Allegations have been reported as facts;
- Emotive terms have been applied where these could prejudice a decision.
They found that 29% of the reports provided insufficient facts but moved ahead to a conclusion, and that 22% had significant missing data but still expressed a conclusion.
To illustrate, examples concerning missing data are as follows:
- Reports on more than one child which failed to include the data on all children but still cited an opinion on all the children;
- Reports drawing conclusions which have not been mentioned in the report, as noted by one reviewer: “Indicates in conclusion that any individuals assessing this client should be knowledgeable of Aspergers type characteristics and the impact of this on parenting. This was never mentioned in the report, or assessed, and appeared as the last sentence” [rater comment];
- Reports where opinions are presented where data was completely absent, i.e. “Comments on self-esteem, emotional loneliness, perspective taking, sexual risk, but include no data” [rater comment];
- Reports where the data is completely missed: “Does not include fact section – goes straight to opinion” or “cites psychometrics but no scores” [rater comment];
- Reports citing opinion without conducting a formal assessment: “stated that client presented as being of average intelligence without deficits in comprehension or expression, formal intelligence testing was not undertaken” [rater comment]. Further examples were: “he seemed, at times, to be quite a jumpy person with arousal levels higher than an average baseline. No assessment completed of this” and “did not assess for personality and yet draws opinion on it”;
- Reports referring to the opinion of another as their own opinion: “Refers to someone else’s report in response to an instructed question” [rater comment].
They then considered the conclusions against the main body of the report (a particular bugbear of mine, since if you can’t tell why the conclusions have been reached, how is any professional supposed to explain to their respective client why the expert is with them or against them, and whether they should shift their own position?)
- Specific background missing/unclear: 34.0%
- Limited opinion: 17.0%
- Opinion confused or not clearly explained: 17.0%
- No background, just opinion: 9.4%
- Some opinions, not linked to factors: 9.4%
- Opinions not substantiated: 7.5%
- Questions not answered: 3.8%
- No opinion: 1.9%
Okay, “no opinion” at all has a pretty low score, but that probably still represents, from that pool, five families who waited three or four months for a psychologist to help decisions about their future to be made, and who got nothing more than an expensive Scooby Doo report (shrug of the shoulders, “I-dunno”).
They found that 60% of the reports had missed the requirements of the CPR for an expert report.
They give some examples of the expert straying into areas reserved for the Judge. (I point this out because, in general, I agree with the report, but I think the example given here is quite badly flawed and rather weakens some of the other criticisms – “I am of the view that these children have all suffered significant harm” – the ultimate decision on that is of course for the Judge, but there are many, many times when an opinion from the expert on that question is helpful, and generally it is provided as an answer to one of the questions. That, I think, highlights the difference between reports commissioned under the CPR for civil matters and for children matters – the expert is there to help the Court with specialised expertise, rather than as a ‘gun for hire’ as happens/happened in civil cases.)
But the report isn’t just a woe-is-me hatchet job; it goes on to make some recommendations. They are worth reading in full, but these are the ones that I considered to be the most important:
- That instruction of experts should be restricted to those currently engaged in practice which is not solely limited to the provision of court reports. Only approximately one tenth of the instructed experts were engaged in practice outside of court work. This is not in keeping with the expectation of an “expert” as a senior professional engaged in current practice, suggesting that courts are accessing those whose profession is now solely as an “expert witness”. There should be an expectation that psychologists providing court reports should continue to hold contracts with relevant health, government or educational bodies (e.g. NHS, Private Health, Prison Service, Local Authority etc) or demonstrate continued practice within the areas that they are assessing (e.g. treatment provision). This is a means of ensuring they remain up to date in their practice, are engaging in work other than assessment, and are receiving supervision for their wider work as psychologists. Connected to this, courts should be wary of experts claiming to complete excessive amounts of independent expert work.
- That the instruction is clearly for the expert to conduct all aspects of the work, and not graduate psychologists or assistants. Such individuals are not qualified, with the term “graduate psychologist” used to describe those who have completed approximately one third of the required training (e.g. an undergraduate degree in psychology and nothing more). There was evidence of their over-use by experts, who were relying on them in some instances to review collateral information and interview clients. Courts should only be paying for the expert witness to complete all aspects of the report.
- Care should be taken with the use of psychometrics, and these should not unduly influence final judgments. The current research indicated a wide range of such assessments being used, and not all relevant or up to date. If tests are utilised then experts should be providing courts with sufficient information to allow them to judge their quality. Using the Daubert criteria as a reference for this would assist with the quality of this information (e.g. provision of error rates, evidence of the theory or method the test was based on), and assist courts to judge how it should be admitted as evidence.
- A need for psychologists to provide provisional opinion and alternative opinions.
- The data from which opinions are drawn needs to be clearly indicated to the court.
- The use of tested and/or generally accepted psychological theory to support core findings. Courts are paying for psychological assessments and this should be evidenced to distinguish the opinions from those provided by other disciplines.
(Hallelujah to that last one.)
The report doesn’t really get into the other side of the coin, which is: are we routinely asking psychologists to assess parents when it is not the right sort of assessment? When I started, psychological assessments were confined to cases where there was some unusual feature or behaviour that the professionals simply couldn’t fully understand, and a psychologist was called in to advise on that aspect (I would add that the professionals at that time would generally have been a social worker very skilled and experienced at assessing families, rather than a ‘commissioner of assessments’, and an old-school guardian whose role was to dig into the LA work with the family and see if things ought to have been, or could have been, done differently). Now, a psychological assessment is routinely considered in neglect cases, where common sense tells everyone concerned that the problems are either motivation, lack of comprehension of what is needed to run a family in a non-chaotic way, or exposure as a child to poor parenting and thus no internal models of how to parent.
We go to psychologists when a social work assessment is what is needed. It is one of my main bugbears with both the Family Justice Review and the LSC cost-caps that ISW reports, which are independent, swift, cost-effective and genuinely informative, are sneered at and undermined, with costs slashed to the point of extinction, whereas the bloated psychological reports, often of varied benefit, escape that exercise.
Rant over!