Feature / Room for improvement

31 October 2014 Bevin Manoy

Data quality continues to present a challenge to mental health trusts. Both the costing of the newly established cluster currency and the activity data that underpins it need improving before they can be used consistently as the basis for a national payment and pricing system for the sector.

Healthcare intelligence service CHKS reviewed 25 trusts in 2013/14 to help identify areas where the trusts can improve their costing processes and the underpinning data. We asked for volunteer trusts, excluding the nine trusts that had been reviewed in 2012/13 as part of the previous year’s payment by results data assurance framework.

In particular, we looked at the processes in place to support the 2012/13 reference cost submission, from board level down to the appropriateness of cost allocations used to determine cluster costs.

We also reviewed the accuracy of the data that underpins the care clustering decision and the key data items that can affect the cost and price of a cluster. In total, we looked at 1,852 care cluster events, split evenly across the three superclusters – non-psychotic, psychosis and organic – reviewing the accuracy of the care cluster and its start and end dates.

 

Costing requirements

All of the trusts reviewed were able to produce a cluster cost for each of the care clusters for their organisation. Most of the trusts complied with the cluster costing guidance and understood the requirements. Trusts with more accurate cluster costs tended to take a more granular approach – typically the bottom-up care pathways and packages project (CPPP) approach rather than a straightforward top-down approach. As with acute hospital trusts (subject to a separate audit, see Healthcare Finance June 2014, page 15), mental health trusts with good costing information and effective systems all had good support from senior staff in checking cost calculations, robust sign-off processes in place and a project plan for completing the submission.
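
The distinction matters for accuracy. As a purely illustrative sketch – every figure, cluster label and intervention name below is invented, not drawn from the audit – the two approaches can be contrasted in a few lines of Python:

# Hypothetical illustration of top-down apportionment versus a bottom-up
# (CPPP-style) build-up of cluster costs. All names and figures are invented.

total_service_cost = 4_200_000.0                    # hypothetical ledger total
occupied_days = {"C4": 18_000, "C11": 9_500, "C14": 4_000}

# Top-down: share the total across clusters pro rata to occupied days.
# Every cluster gets the same cost per day, so the result cannot reflect
# differences in the care actually delivered.
total_days = sum(occupied_days.values())
top_down_per_day = {c: total_service_cost / total_days for c in occupied_days}

# Bottom-up: cost each cluster from the interventions its service users
# receive (unit cost x volume), then divide by occupied days.
unit_costs = {"care_coordination": 45.0, "psychology": 120.0, "ot": 60.0}
contacts = {  # hypothetical annual contact volumes per cluster
    "C4":  {"care_coordination": 12_000, "psychology": 1_500, "ot": 800},
    "C11": {"care_coordination": 9_000,  "psychology": 4_200, "ot": 2_100},
    "C14": {"care_coordination": 6_500,  "psychology": 900,   "ot": 3_400},
}
bottom_up_per_day = {
    c: sum(unit_costs[i] * n for i, n in mix.items()) / occupied_days[c]
    for c, mix in contacts.items()
}

for c in occupied_days:
    print(f"{c}: top-down £{top_down_per_day[c]:.2f}/day, "
          f"bottom-up £{bottom_up_per_day[c]:.2f}/day")

In practice a bottom-up build-up would be reconciled back to the ledger total; the point is simply that it can differentiate cost per cluster day, where naive top-down apportionment cannot.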

These trusts had a consistent approach for costing care clusters across the organisation and, while not always documented, there was evidence in working papers of how they were costing care clusters. But we found weaknesses in mental health trusts’ approach to auditing costing processes or systems and benchmarking cost data.

The National Benchmarker (www.nationalbenchmarker.co.uk) is a useful resource for identifying cost variance in care clusters; more trusts should make use of this. Encouragingly, more than half the trusts were starting to use cost pools in line with the non-mandatory HFMA clinical costing standards. These cost pools provide a consistent way of aggregating costs and should support more detailed benchmarking in future.
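
As a minimal sketch of the idea – the pool names, ledger lines and figures below are hypothetical, not the list defined in the HFMA standards – aggregating ledger lines into cost pools might look like this:

# Minimal sketch: aggregating ledger lines into cost pools so costs can be
# compared consistently across trusts. All names and figures are invented.
from collections import defaultdict

ledger = [  # (ledger line, annual cost, assigned cost pool)
    ("community nursing pay", 1_800_000.0, "community contacts"),
    ("psychology pay",          620_000.0, "community contacts"),
    ("ward nursing pay",      2_400_000.0, "ward care"),
    ("estates and facilities",  500_000.0, "overheads"),
]

pools = defaultdict(float)
for line, cost, pool in ledger:
    pools[pool] += cost

for pool, cost in sorted(pools.items()):
    print(f"{pool}: £{cost:,.0f}")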

Engaged and informed board members drive organisations to cost better. This also helps engage clinicians in improving activity data and cost information. However, engagement in cluster costing is not yet well developed, with most costing carried out by the finance teams and little or no information shared outside the team.

However, there were some good examples of clinical engagement. One trust validated the underlying data using a comprehensive yearly process of sense-checking, verification and face-to-face meetings with clinicians and managers at team level. This was done primarily for the purpose of assuring the information for reference costs rather than for routine reporting.

One of the main barriers to clinical engagement in costing was the lack of granular cluster costing data. Most trusts did not use patient-level information and costing systems (PLICS) to calculate costs, but used the top-down apportionment approach. This meant data was aggregated at trust level, so only one cost per cluster could be determined for the whole trust.
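
To illustrate the granularity point – the team names, clusters and costs below are invented – patient-level rows can be aggregated by team as well as by cluster, whereas trust-level apportionment can only ever yield the single figure per cluster:

# Purely illustrative: patient-level (PLICS-style) rows can be aggregated by
# team as well as by cluster. Teams, clusters and costs are hypothetical.
from collections import defaultdict

plics_rows = [  # (team, cluster, episode cost)
    ("North CMHT", "C4",  2_100.0),
    ("North CMHT", "C4",  1_950.0),
    ("South CMHT", "C4",  3_400.0),
    ("South CMHT", "C11", 5_200.0),
]

by_cluster = defaultdict(float)           # all a top-down approach can offer
by_team_and_cluster = defaultdict(float)  # the granularity clinicians need
for team, cluster, cost in plics_rows:
    by_cluster[cluster] += cost
    by_team_and_cluster[(team, cluster)] += cost

print(dict(by_cluster))
print(dict(by_team_and_cluster))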

Without being able to calculate cluster costs at team or service level, it is challenging to present meaningful cost information to clinicians delivering services. Trusts cannot then expect clinicians to engage in validating and challenging cost data and the assumptions underpinning it.

 

Cluster data quality

Getting the data right is critical to accurate costs. Whether trusts are using PLICS or aggregating data top-down, they need patients accurately assigned to clusters and to know that the length of time spent in a cluster is correct. To check the accuracy of clusters, we worked with clinicians at each trust and reviewed each patient record to see whether the evidence it contained supported the clinical decision to allocate the service user to a care cluster.

We also checked that the dates the service user started and ended the care cluster were correct. Validating this information gives mental health trusts assurance that the data they use to calculate care cluster costs is accurate. The accuracy of clustering new service users was better than that of cluster transitions, where a service user is moved from one cluster to another as the result of a review.
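
A minimal sketch of the kind of date check involved – the data and field layout are hypothetical – flags episodes whose dates are inverted or which overlap the preceding cluster episode:

# Minimal sketch: validating that one service user's cluster episodes have
# sensible, non-overlapping start and end dates. All data is hypothetical.
from datetime import date

episodes = [  # (cluster, start date, end date)
    ("C4",  date(2013, 4, 1),  date(2013, 9, 30)),
    ("C11", date(2013, 10, 1), date(2014, 3, 31)),
]

def validate(episodes):
    issues = []
    ordered = sorted(episodes, key=lambda e: e[1])
    for cluster, start, end in ordered:
        if end < start:
            issues.append(f"{cluster}: end date before start date")
    for (_, _, prev_end), (nxt, nxt_start, _) in zip(ordered, ordered[1:]):
        if nxt_start <= prev_end:
            issues.append(f"{nxt}: starts before the previous cluster ends")
    return issues

print(validate(episodes) or "no date errors found")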

Among the best performing trusts, the patient records were clearly written and contained evidence to support the cluster decision. The patient’s mental state assessment was clearly documented. The notes gave detailed information about the service user’s presentation and the interventions they were receiving. This made it clear why the clinician had clustered the patient.

 

Wrong allocation

Overall, we found 16% of service users had been allocated to the wrong cluster (see table 1). The two main causes of errors were the same as those found in the trusts that were reviewed in 2012/13:

• The patient record was not an accurate reflection of the patient’s mental state and presentation. It often lacked a record of a thorough mental state examination, or was poorly written and not comprehensive. In these cases the clinician may have made the correct cluster decision based on their knowledge of the patient or the mental state examination, but the record-keeping was poor and did not justify the mental health clustering tool (MHCT) scoring and clustering decision.

• Clinicians were not following the MHCT guidance, causing them to cluster patients incorrectly.

Clinicians often allocated service users to the wrong care cluster because, contrary to the MHCT guidance, they did not review mental state assessments carried out in the two weeks leading up to clustering; instead, they clustered only on the presentation they assessed at the time. We found that training on transition protocols was limited, and many trusts had focused on getting clustering correct on admission or on clustering existing service users for the first time. When we reviewed the accuracy of service users moving to a new cluster, having already been admitted to the service, we found that 21.5% had been incorrectly stepped down or up. In these cases, clinicians did not follow the transition protocols in the MHCT.

The most common error was caused by mental health clinicians re-clustering a service user into a less resource-intensive cluster because the service user’s presentation had improved in the past few weeks or months.

 

Table 1: Summary of cluster data errors

                                     New care          Transition care    Combined
                                     cluster errors %  cluster errors %   figure %
2012/13 (9 trusts)                   40.0              N/A                40.0
2013/14 (25 trusts)                  25.7              37.4               31.2

Breakdown of 2013/14 errors
Supercluster or care cluster wrong   11.2              21.5               16.1
Days in cluster wrong                7.9               4.0                6.1
Unsafe to audit                      6.5               11.9               9.1


Common error

As greater reliance is placed on the data that underpins clustering for contracting, costing and currency development, the accuracy of the dates on which a service user starts and finishes a cluster becomes increasingly important. Most trusts had errors that led to inaccurate recording of the date of entry to a cluster, the date of change of cluster or the date of discharge from the service. We found 7.9% of new care clusters and 4% of transitions had the wrong start or end date – considerably better than the 27% error rate we found in 2012/13.

Eight trusts had no errors in the start or end dates. These trusts had good processes in place for managers to check that staff were clustering service users in a timely manner, including performance management tools that showed when service users had started care clusters and how long they had spent in them.

Some trusts were still unable, or unwilling, to record the time spent in initial assessment separately from time in a care cluster. Costing teams had to apply local business rules to the data to estimate initial assessment costs, such as counting the first two contacts and the first two inpatient bed days as initial assessments. While this provides an adequate estimate of the overall cost of assessments, it does not provide the granularity needed to differentiate the variable costs of assessments between care clusters.
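
As a sketch of how such a local rule might be applied – the rule parameters and dates below are hypothetical:

# Illustrative sketch of a local business rule of the kind described above:
# the first two contacts and first two inpatient bed days are treated as
# initial assessment activity. All dates and parameters are hypothetical.

contacts = ["2013-04-02", "2013-04-09", "2013-04-20", "2013-05-01"]
bed_days = ["2013-06-10", "2013-06-11", "2013-06-12"]

ASSESSMENT_CONTACTS = 2   # per the hypothetical local rule
ASSESSMENT_BED_DAYS = 2

assessment_events = contacts[:ASSESSMENT_CONTACTS] + bed_days[:ASSESSMENT_BED_DAYS]
cluster_events = contacts[ASSESSMENT_CONTACTS:] + bed_days[ASSESSMENT_BED_DAYS:]

print(f"{len(assessment_events)} events costed to initial assessment, "
      f"{len(cluster_events)} events costed to the care cluster")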

Our findings show that there are issues within cluster costing and the activity data that underpins it. This means the costing data submitted nationally may not be robust enough to be used consistently as the basis for a national payment and pricing system. An effective payment system will depend on the care clusters accurately reflecting needs. Care clusters must link patients to packages of care so that care cluster allocation meaningfully reflects patient needs and the interventions they receive. These then need to be costed accurately so that local or national tariffs can be determined reliably. 

 

Bevin Manoy is associate director, CHKS Coding and Financial Assurance

 

Good costing but focus on clustering

Tees, Esk and Wear Valleys NHS Foundation Trust was one of the 25 trusts to volunteer for the data assurance audit, writes Steve Brown. The overall assessment of its reference costs submission was ‘good’, with only immaterial errors in its quantum. The trust is also one of just a few in the sample using the more granular bottom-up care pathways and packages project (CPPP) relative value unit approach to costing.

Despite the overall thumbs-up for costing, three of the four main recommendations called on the trust to document its costing approach and to improve and support the scrutiny process. There was also a higher than average error rate (38%) within its small sample of 68 patient cluster events (compared with just over 31% across all 25 trusts), prompting a further recommendation to improve support for staff using the mental health clustering tool. The main cause of error was a ‘failure to consider the detail of cluster rules when a cluster was chosen’.

Drew Kendall, trust associate director of finance, says that costing and clustering are still developing within the mental health sector. He adds that the costing audit process, which had inevitably been based on acute sector audits and was also in its early days, had helped to highlight areas the trust had already been keen to develop.

‘We see four key challenges in taking clustering and costing forward,’ he says. ‘The issue of data quality is an ongoing one for the whole mental health sector. Our data is good, but it could be better. And we need to encourage better conversations internally on the back of the data to support improvements in service and efficiency. We also need to develop greater links to the developing quality metrics.’ Mr Kendall believes the fourth challenge is for the whole sector to improve ‘stability on activity plans’.

He says cost comparisons across organisations will only be reliable if trusts are actively managing their caseload. If service users are being left on the caseload when they could or should have been discharged, this will distort costs per cluster day as the activity count will be higher.

The trust has already responded to the recommendations. Support is now provided for clinicians undertaking clustering, with trainers for the three localities, and the specific costing recommendations have been delivered. The trust was praised for its ‘good’ clinical engagement, and its clinically led payment by results programme was highlighted.