Time for a Measured Response?

03 Dec 2012

Last month I was able to attend a conference co-hosted by the Australian Healthcare and Hospitals Association, the Australian Council on Healthcare Standards (ACHS) and Women’s and Children’s Healthcare Australia. An excellent meeting it was, too, with some outstanding presentations on the theme, ‘Measurement: redefining health’s boundaries?’

Afterwards, though, when there was time to reflect on the breadth and burden of measurement and reporting in our health system, it was hard not to wonder if there can be too much of a good thing.

The aims are laudable: to improve the quality of health care, to monitor the performance of individuals, departments, hospitals and health systems, and to assist consumers in making informed choices.

All this requires accurate, validated and useful measures of performance.

Even the term ‘performance’ means different things to different people.

Ideally, it would relate to an integrated measure of patient-centred outcomes and experience of the health system.

This information can be time-consuming and expensive to collect routinely.

Collecting a statistically valid sample of important end points is challenging, and more likely to occur as a limited ‘one off’ audit than as part of routine practice.

The temptation, then, is to make the easily measurable important, without questioning its value as a ‘performance’ measure, rather than focussing on making a more limited range of validated measures important.

Almost every organisation currently involved in health care has, quite rightly, a safety and quality agenda, but this has resulted in considerable duplication and overlap.

The Australian Commission on Safety and Quality in Health Care and the National Health Performance Authority (NHPA) have obvious national roles, as do accreditation bodies such as ACHS, with their suite of performance indicators for which there is broad input from the medical colleges and societies.

However, those who fund health care, such as the Independent Hospital Pricing Authority and private health insurers, are also getting into this space.

Then there are State and Territory safety and quality frameworks and reporting requirements, and specialty specific registries against which to benchmark performance for individual hospital departments.

For individuals, the performance frameworks developed by the Royal Australasian College of Physicians and the Royal Australasian College of Surgeons are well thought out, addressing the spectrum of work undertaken by 21st century physicians and surgeons.

The vast majority of this work is excellent.

But the work of compliance, which seems to be ever increasing, is equally vast.

One of the most pleasing aspects of the conference was the recognition from health administrators that measurement and reporting are not ends in themselves, but need to result in improved quality and effectiveness, or to stimulate innovation.

Just as pleasing was the recognition that some of the most important aspects of a patient’s experience of health care delivery are difficult or impossible to measure.

It would be great to see this insight from senior health administrators and academics flowing into the safety, quality and measurement sub-industry, so that limited resources can be focussed on clinically meaningful service improvement.

Websites such as MyHospitals (www.myhospitals.gov.au), coordinated by the Australian Institute of Health and Welfare under contract to the NHPA, have been designed to put more information on both public and private hospitals into the public domain in an accessible and easy-to-use format.

As is common with aggregated information, the site suffers from a significant time lag. For example, currently available elective surgery and emergency department waiting time data are for the 12 months to June 2011.

In these days of instant electronic communication many other opportunities exist for people to share their health care experiences, such as through Facebook, Twitter and myriad other websites.

Some, such as www.patientopinion.org.au, are moderated and make it clear that the reports are from individuals recounting their experiences, and are sent to the relevant service providers. But others do not appear to be moderated, and there is a resultant ‘free for all’ without the discipline of verification.

Nevertheless, many people appear to trust online reviews: a recent Nielsen survey found that 71 per cent of Australians do.

In the hotel industry there are reports of both ‘generated’ positive reviews and negative reviews of competitors on websites such as Expedia and TripAdvisor.

I am not aware of any similar examples in health care, but the possibility highlights the need to treat self-reported information with caution.

The bottom line here is that there is a huge amount of information on safety and performance in health care, much of it of high quality.

But there is an opportunity cost in all this measurement, and a review of the many programs, the extent of duplication, and their cost effectiveness seems overdue.