Patient Experience of Care: Measuring and Incentivizing High Performance

There is growing recognition within the medical-industrial complex that the patient is a key element of the enterprise, and that patient satisfaction, patient experience, patient engagement, patient activation, and patient-centeredness all matter.  Some research shows that patient activation can be measured, and that greater activation yields better patient outcomes.

Patient-centeredness and patient engagement are two of the key metrics to be used by the federales in describing Accountable Care Organizations (ACOs), if the internecine battles within government are resolved soon enough to actually release draft ACO regulations in time to allow for sufficient advance planning for the January 2012 go-live date.  (See the response to CMS regarding the RFI on these metrics submitted on behalf of the Society for Participatory Medicine.)  These measures go into the Meaningful Use hopper as well, as Meaningful Use Stage 2 metrics are being reviewed.

In recent years, the federales have been measuring patient experience using the CAHPS surveys, and — coming soon to a provider bank account near you — there will be Medicare dollars tied to the scores on these questionnaires, not just dollars tied to the act of reporting scores.

As this emphasis on patient experience is unfolding, the Leapfrog Group is adding its voice to the chorus.  (The Leapfrog Group “promotes improvements in the safety of health care by giving consumers data to make more informed hospital choices.”)  I spoke recently with CEO Leah Binder (link will take you to a podcast interview with her from the HealthBlawg archives) and Hospital Survey Director Matt Austin about the new patient experience measures they are adding to their 2011 hospital survey.  In keeping with past practice, they will be asking hospitals to report three existing CAHPS measures (rather than asking folks to collect and report new measures).  The three were selected as being representative of a hospital’s broader performance with respect to patient experience, and also because hospital performance on these measures is all over the map.  (If everyone excels on a given measure, you can’t really use it to differentiate among providers; consider what happens to hospitals participating in projects like the Premier/CMS demos: Over time, more and more observed — and incentivized — behavior creeps towards the top quintile.  That’s a good thing, but you can’t keep rewarding folks for hanging out in the top rank.  If everyone is above average, as the children are in Lake Wobegon, nobody deserves props; it’s time to focus on improving another metric.)
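The selection logic described above — favoring measures where hospital performance is "all over the map" — amounts to preferring measures with high dispersion across hospitals.  Here is a minimal sketch of that idea; the scores, hospital counts, and the coefficient-of-variation criterion are all hypothetical illustrations, not Leapfrog's actual methodology:

```python
from statistics import mean, stdev

# Hypothetical "top-box" CAHPS scores (percent of patients answering
# "Always"/"Yes") for five hospitals -- invented data for illustration.
scores_by_measure = {
    "pain_management": [55, 78, 62, 90, 48],        # performance varies widely
    "discharge_information": [81, 83, 80, 84, 82],  # everyone clusters together
}

def coefficient_of_variation(scores):
    """Dispersion relative to the mean; higher = better differentiator."""
    return stdev(scores) / mean(scores)

# A measure on which nearly every hospital scores the same cannot
# distinguish hospitals, no matter how high the scores are.
for measure, scores in scores_by_measure.items():
    print(f"{measure}: CV = {coefficient_of_variation(scores):.3f}")
```

On this toy data, pain management spreads far more than discharge information, so it would be the more useful differentiator — exactly the "top quintile creep" problem in reverse: once everyone clusters at the top, a measure stops earning its keep.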

The three measures are:

  • Pain Management: “Patients who reported that their pain was ‘Always’ well controlled.”
  • Communication about Medicines: “Patients who reported that staff ‘Always’ explained about medicines before giving it to them.”
  • Discharge Information: “Patients at each hospital who reported that YES, they were given information about what to do during their recovery at home.”

Once the survey data are reported by hospitals (April–June), the Leapfrog Group will score hospitals’ performance, but only aggregate scores on these new measures will be reported publicly in year 1.  (Hospital-specific data on these measures will be reported back only to the hospital reporting the data.)

This seems to me to be a reasonable step forward.  In an era when many health care providers are focused on improving patient experience by providing WiFi service in waiting areas and hiring managers from the hotel industry, it is important to focus on core areas of hospital performance.  The question remains: is the CAHPS survey the best way of getting a valid assessment of this performance?   The next step may be to pilot a survey that queries patients at random times during a hospitalization via cell phone about measures like pain management.  The theory is that on-the-spot assessments are likely to be more accurate than CAHPS survey responses provided after the fact.

One key principle of the whole culture of measuring and incentivizing is that we ought to be focusing on outcome measures rather than process measures.  Why?  Because we want to improve outcomes, and it is not always clear which processes yield which outcomes.  The CAHPS measures settled on by the Leapfrog Group seem to be a mix of process and outcome measures.  In a system with literally thousands of measures in use, the challenge is to focus on a handful that will be truly predictive of, or that truly define, high-quality health care.

By continuing to identify measures that prove to be valid differentiators among hospitals, the Leapfrog Group provides a valuable service to health care consumers.  Its focus on patient experience is a welcome addition to the analytics.

David Harlow is a health care lawyer and consultant. He serves as the Chair of the Society for Participatory Medicine’s Public Policy Committee. His “home blog” is HealthBlawg, where a version of this post first appeared.  You should follow him on Twitter.
