
Data Use in Large-Scale Improvement Projects: The Experience of Project Fives Alive!

By Sodzi Sodzi-Tettey | Friday, April 17, 2015

In March 2015, I presented aspects of Project Fives Alive!’s work at the Consortium of Universities for Global Health meeting in Boston. A curious colleague asked how, given our reliance on routine national data systems, we could vouch for the quality of our data. 

The decision whether to rely on routine national data sets or create special parallel data sets is one that will continue to plague implementers of large-scale projects for a long time to come. In reality, the two options represent different sides of the same coin, with advantages and pitfalls in equal measure. 

Fortunately, Ghana’s Project Fives Alive! (PFA!) has experienced both sides of the data coin. For almost eight years, PFA! has rapidly scaled up a national Maternal, Newborn and Child Health (MNCH) project using a quality improvement (QI) approach. From 25 sub-districts in 2008, the Project has cumulatively worked in 544 sub-districts as of 2015. PFA! has also cumulatively worked in 202 regional and district hospitals, representing almost 80% of public hospitals in Ghana. In all these sites, the Project has worked with more than 700 multidisciplinary QI teams formed by managers to test local solutions for improving the quality and reliability of care for children under five. 

The impact? A 31% reduction in under-5 mortality, a 37% reduction in post-neonatal infant mortality, and a 35% reduction in under-5 malaria case fatality across 134 hospitals as of November 2014. Presenting such impressive results at large scale often raises legitimate questions from well-meaning colleagues about the credibility and validity of the data sources.  

Perhaps it is best to start from the very beginning in 2008 when, from three districts in Northern Ghana, the Project and its evaluation teams originally collected parallel data sets to validate its interventions, including some processes tracked with non-routine data. Two main issues emerged. First, at full scale we would have neither the time nor the resources to continue collecting data in this tedious manner, especially given the Project’s strategy of continuously monitoring data over time. Second, and perhaps more seriously, the Project ran into its first sceptic – a national officer who could not understand why and how the process improvements allegedly being recorded and reported by the Project did not find expression in the routine data system. Of course, we know there is often a lag in such work between improvements in lower-level processes and their expression in system-level outcomes. 

In 2009, upon rapidly scaling up high-impact interventions to all 38 districts in the three regions of the North, Project Fives Alive! commenced direct use of the routine data system – the District Health Information Management System (DHIMS). All the right reasons were adduced for this decision – alignment, promoting the use of local data for decision making, strengthening national data systems, and so on. It was, however, not as smooth as it sounds. Internally and in many other fora, legitimate questions arose about the timeliness, completeness, and accuracy of the data reported in the routine system. 

Having decided to strengthen the national data system rather than create a parallel one, the Project moved to the logical next step of writing and implementing a protocol on improving the quality of the routine data system. To improve accuracy, for example, our monitoring and evaluation officers, working in partnership with trained health information officers, compared source data in facilities to data reported into DHIMS and worked to close the gaps. Faced with even more facilities at national scale, we had no option other than to team up with other national projects equally interested in relying on and improving the quality of the routine data system. Under the leadership of the Monitoring and Evaluation unit of the Ghana Health Service, therefore, PFA!, MalariaCare, and the National Malaria Control Program have, since 2013, rolled out an adapted protocol for continuously improving the timeliness, completeness, and accuracy of the routine data system. There is data to show how data quality gaps are continuously being closed by national, regional, and district health information officers and biostatisticians. 
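The accuracy check described above – comparing source registers in facilities against what was reported into DHIMS and flagging gaps to close – can be illustrated with a minimal sketch. This is not the Project’s actual protocol; the facility names, counts, and 5% tolerance below are all hypothetical, chosen only to show the comparison logic:

```python
# Hypothetical monthly tallies: facility source registers vs. what was
# reported into the routine system (DHIMS). All figures are invented.
source_registers = {"Facility A": 120, "Facility B": 95, "Facility C": 60}
dhims_reports = {"Facility A": 120, "Facility B": 88, "Facility C": 64}

def accuracy_gaps(source, reported, tolerance=0.05):
    """Return facilities whose reported value deviates from the source
    register by more than the given relative tolerance."""
    gaps = {}
    for facility, true_count in source.items():
        reported_count = reported.get(facility, 0)
        if true_count and abs(reported_count - true_count) / true_count > tolerance:
            gaps[facility] = (true_count, reported_count)
    return gaps

print(accuracy_gaps(source_registers, dhims_reports))
# {'Facility B': (95, 88), 'Facility C': (60, 64)}
```

Note that the check flags both under-reporting (Facility B) and over-reporting (Facility C); in practice, each flagged gap would be investigated and corrected by the health information officers rather than simply discarded.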

Even so, questions about data reliability remain, leading to one senior researcher in Ghana commending PFA!’s reliance on DHIMS, but also suggesting that such large-scale projects determine and state clearly the margin of error on the data being reported. 

Perhaps improvement scientists could use one more suggestion. I am familiar with improvement scientists being so focused on improving processes and outcomes in various care pathways that they sometimes neglect critical questions about data validity – questions that will come back to haunt them once breakthrough results are reported. We QI practitioners often say that running the next Plan-Do-Study-Act (PDSA) improvement cycle is not a research experiment: collect just enough data to tell whether the changes being tested are leading to improvement. In my experience, this works for exceedingly small-scale projects. For large-scale improvement work, however, or once a project plans to publish its work in peer-reviewed journals, critical questions about sample size, extent of randomization, indicator definitions, and the like emerge very strongly. At this point, what might have started as an exciting adaptive process starts being measured against the standards of rigorous traditional research. My suggestion: prepare for the day of reckoning from day one! 

This leaves us the option of independent evaluation of one’s work. Within Project Fives Alive!, this has been done through periodic surveys and in-depth analyses conducted by the University of North Carolina at Chapel Hill and the University of Ghana’s Institute of Statistical, Social and Economic Research (ISSER). Finally, if one is as lucky as PFA!, the beginning and end of a large-scale project may coincide with an independent national survey like the Ghana Demographic and Health Survey conducted by the government of Ghana. 

In April 2015, seven years after the country’s last Demographic and Health Survey (DHS 2008), which coincided with the start of PFA!, Ghana released the latest DHS results, coinciding with the end of the Project. Given the Project’s overall aim of helping accelerate Ghana’s efforts to achieve Millennium Development Goal Four (MDG 4), we keenly awaited these results. The new DHS shows under-5 mortality in Ghana falling from 80 to 60 per 1,000 live births, child mortality (deaths between ages 1 and 5, per 1,000 children surviving to age 12 months) falling from 31 to 19, infant mortality falling from 50 to 41 per 1,000 live births, and neonatal mortality falling from 33 to 29 per 1,000 live births. We could not help noticing that while the DHS was recording a 25% drop in under-5 mortality, we ourselves were reporting a 28% drop in under-5 deaths. We greatly rejoiced, knowing that PFA! had contributed, in some modest part, to these improvements. 
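As a quick check, the percentage reductions implied by the quoted DHS rates can be computed directly from the before-and-after figures (a minimal sketch; rounding to whole percentages is my own convention):

```python
# DHS rates quoted above: (2008, 2014), deaths per 1,000 live births
# (child mortality: per 1,000 children surviving to age 12 months).
dhs = {
    "under-5 mortality": (80, 60),
    "child mortality": (31, 19),
    "infant mortality": (50, 41),
    "neonatal mortality": (33, 29),
}

def percent_reduction(before, after):
    """Relative decline between two rates, as a rounded whole percentage."""
    return round(100 * (before - after) / before)

for indicator, (before, after) in dhs.items():
    print(f"{indicator}: {before} -> {after} "
          f"({percent_reduction(before, after)}% reduction)")
# under-5 mortality: 80 -> 60 (25% reduction)
```

The 80-to-60 decline is indeed the 25% drop in under-5 mortality cited above; by the same arithmetic, child mortality fell about 39%, infant mortality about 18%, and neonatal mortality about 12%.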

Sodzi Sodzi-Tettey, MD, MPH, is the Institute for Healthcare Improvement’s Senior Technical Director, Africa Region, and Director of Project Fives Alive! in Ghana.


