
Patient-Centered Medical Homes: A Response to the Latest Research

By John Gauthier | Wednesday, March 26, 2014
On February 26, JAMA published the results of a large pilot study of patient-centered medical homes (PCMH). The study found no cost reduction and only modest improvement in quality over usual care. We asked Don Goldmann, MD, IHI’s Chief Medical and Scientific Officer, for his take on the study and its findings, and what they tell us about the promise of patient-centered medical homes. Dr. Goldmann and one of the study’s authors will be among the featured guests discussing what sort of research scrutiny PCMHs need on the May 22, 2014, WIHI.

Q: What is your assessment of the pilot study and the evaluation?

DG: The authors include some of the preeminent health services researchers in the country. They are very experienced evaluators who have done considerable previous work to try to understand the impact of patient-centered medical homes. It is important to note that this study is almost exclusively a quantitative evaluation. The quantitative analysis was sound, incorporating features such as propensity scoring and sensitivity testing to adjust for some of the perils of a matched comparison group study such as this. The study took advantage of a natural experiment, and the design was pragmatic in nature. 

But apart from a survey with a relatively low response rate, this evaluation did not include a deep qualitative look at why improvement was so limited. What might we have learned from understanding the context in which participants attempted to improve? What barriers and external pressures did they encounter? While the aggregate results were disappointing, did any of the practices excel, and if so, how did they succeed? The authors of the study certainly are well versed in these methods, but apparently were not charged with performing a qualitative investigation, let alone real-time evaluation and feedback that would have allowed for learning and adjustment along the way.
In my opinion (which is shared by some of my colleagues, such as Gareth Parry here at IHI and William Shrank, who was until recently at CMMI), an evaluation approach that emphasizes iterative learning is critical to the success of demonstration projects such as this. My understanding is that a companion qualitative study, using a so-called “positive deviance” framework, was performed, and may answer some of the questions we all are asking about context, whether some practices performed well, and if so, how they did it.

Dr. Don Goldmann, Chief Medical and Scientific Officer at the Institute for Healthcare Improvement

Q: What about the study’s recommendations?

DG: The commentary is balanced and fair. Its main recommendation, that focusing on high-cost patients with multiple comorbidities and using recent technology affordances might be the logical next step for PCMHs, is reasonable. However, these assumptions clearly will need to be tested, as logical as they may be. One of the first things I learned from Don Berwick was not to pull a solution out of my pocket and deploy it at scale. But I think we can all agree that to really make progress, we will have to work across the continuum of care to improve coordination and support for patients and families. Unnecessary emergency department visits and hospitalizations are not going to be prevented by either a hospital or a primary care physician working in isolation. This was a major learning during IHI’s STAAR initiative to reduce readmissions. And we can all agree that we are much more likely to develop comprehensive solutions if we pay close attention to the circumstances and preferences of patients and families and track patient-reported outcomes.

Q: What’s your take on the measures used in the study? 

DG: As is almost always the case in studies such as this, a limited number of quality measures were included (mainly HEDIS measures). This remains troubling because patient care is so much more complex and rich than is reflected in these measures. However, the tendency has been to “brute force” performance on these core measures, so if anything, one might have expected better progress in these areas. Of course, we cannot know completely to what extent the transformation intended in the intervention practices “bled” into the comparison practices, diluting the apparent impact of the PCMH, but I doubt if this was a major factor since neither intervention practices nor the comparators achieved meaningful improvement overall. As I noted, I would have liked to see more attention paid to patient-reported outcomes, including functional status and quality of life. And while it’s interesting to know how the program impacted the reimbursement of physicians, it would have been equally interesting to see how it affected per capita costs of care and costs borne by the patient.

Q: What surprised you most about the study’s findings?

DG: What is stunning to me is not the p values and the general lack of statistical difference between the groups, but the relatively modest improvement overall in the non-ceilinged measures (that is, measures that were not already at a high performance level at the start of the study). I would have expected that the intervention practices would have had much more improvement, given the support they received: P4P, a Breakthrough Series Collaborative, coaches from the IPIP [Improving Performance in Practice] (arguably the best national example of extension service coaching, with a terrific track record in the early days of RWJF funding), registry function, etc. What strikes me is how such comprehensive assistance and incentives failed to accelerate improvement in virtually any measure. And of course, there was no improvement in ED use, ambulatory-sensitive hospitalizations, etc., in three full years. Some will argue that three years is insufficient to accomplish transformation and improvement, but frankly, I would have expected improvement in the process measures tracked in this study, given the amount of technical assistance participating practices allegedly received.

Q: What else might explain the study’s failure to detect significant improvement? 

DG: To really understand the findings about the PCMH, I’d like to know more about a number of issues:

  • Although the Breakthrough Series (BTS) Collaborative is referenced, we have no idea if the spirit or letter of the BTS process was followed. In my experience, projects frequently say they used this method, but when you dig deeply, commitment to the process, coaching, use of data for improvement, and the feedback and learning system were “light touch.” I would like to see papers such as this one provide such details, even in an electronic appendix. I have the same concerns about how IPIP was applied. We know that technical assistance using “extension service” principles can be effective, but it’s not a box you check – it’s hard work by trained people.
  • Perhaps the greatest peril to the program’s success in improving care was the apparently unrelenting focus on NCQA accreditation as a patient-centered medical home. As noted by the authors, focusing on NCQA accreditation can be a distraction and encourage a checklist mentality. We have seen this before in other work designed to transform primary care. If the financial incentives were closely tied to meeting NCQA standards, rather than to concrete improvements in patient care and outcomes, the money was focused on the wrong endpoint. I am not saying that the NCQA standards are not important; they were developed with great care by some of the nation’s leading experts. But they must not become ends in and of themselves. Empanelling patients, utilizing real-time data to guide improvement, establishing effective inter-professional teams, supporting care beyond the primary care office in concert with care managers and community health workers and services, and ensuring superb communication among members of the care team and with patients and families: these and other components of an effective medical home are tough to implement and sustain. That’s the real work, not checking the boxes.
  • Patient-reported outcomes were not collected, and it is possible that these improved even though the other measures did not; this would be important to know. They should be an integral part of primary care practice, and it’s disappointing how infrequently they are determined and recorded.
  • Even in the short period since this study was performed, advanced medical home features, including use of HIT for self-management, “touches” apart from face-to-face visits, and home monitoring, have evolved. That said, evidence that these features have improved care is scant.
  • I could not find any data in the paper on heterogeneity. It’s a major disappointment not to see what we call “small multiples,” or time-ordered data for all of the primary care practices, displayed in a way that shows clearly who (if anyone) hit a home run and who lagged behind. Aggregate data and group averages can hide key observations that would allow us to explore what worked where, why, and how. As I noted, I do believe that we will have at least some of this information from the companion evaluation performed by a different research group.
Q: So what should we do about PCMH as a result of this study’s findings?

DG: This is an important study that should lead to evaluations that are geared to identifying where exemplars are making progress and understanding what distinguishes those practices from the rest. It should generate a call for real-time, shared learning. I agree with the commentary that more attention to those patients at high risk might be more effective. Future efforts should have a greater emphasis on shared decision making, self-management support in the home and community, and other practices that we think are important in keeping people out of the ED and hospital, improving outcomes, and controlling costs. And it would have been valuable to know how underserved and minority populations fared in the course of this program, since it is quite likely that special efforts are needed for these patients. But the lack of improvement seen in this study, despite considerable financial incentives, is sobering no matter how you look at it.

Q: A study like this can be very discouraging to hard-working staff around the country who are in the midst of redesigning primary care practices to better support patients with their health as well as their health care. News headlines are quick to suggest, “See, PCMH isn’t the solution, either.” What’s the counter, or more nuanced, message improvers might convey instead?

DG: I’ve already mentioned a number of factors to consider for those who are interested in designing and implementing effective medical homes. And we definitely should not consider this study – even allowing for its limitations – in isolation from other work. IHI will be hosting a WIHI in collaboration with JAMA in May, during which we will discuss other programs that have demonstrated more encouraging results, including programs serving primarily Medicaid populations. That said, we need to remember that addressing the Triple Aim will require more than a highly functioning medical home. It will require an appreciation of a person’s entire journey – health promotion, disease prevention, and treatment – in the circumstances and contexts in which they live and work. This will mean stronger partnerships among providers, health care systems, payers/insurers, employers, and, perhaps most importantly, communities. I often say that we need to address six levels when we think about achieving the Triple Aim: individual people, their communities, clinical providers, microsystems (wards, clinics), larger health care delivery systems (including so-called “macro” and “meso” systems), and policy and payment.

