9b. Criteria for Evaluating Population Health Intervention Evidence
So, once you have found some “evidence” for interventions addressing your population health issue, how can you be certain that it’s good evidence for sound decision making? It has been estimated that less than 20% of published literature is scientifically sound, leaving public health practitioners with the daunting challenge of discriminating between high- and low-quality evidence.1
Three Key Questions
According to Rychetnik and her colleagues, the appraisal of evaluative research used in evidence-based public health should focus on three broad questions:2
- Is the research valid, sound, and applicable to my situation?
- What outcomes can I expect if I implement this research?
- Will my target population be able to use this research?
Let us review each of these questions, including the related implications for the appraisal of population health evidence, in greater detail.
1. Is the research valid, sound, and applicable to my situation?
A simpler way of phrasing this question would be “is the research good enough?”
To date, the assessment of evidence-based health information has depended mostly on what is called the levels, or hierarchy, of evidence, which traditionally has been defined by the type of study design used to conduct the research. Study designs are graded by their potential to eliminate bias, that is, the influence of factors other than the intervention on the results. A hierarchy of study designs was first suggested by Campbell and Stanley in 1963,3 and changing assumptions about the utility of various research designs have guided the development of subsequent hierarchical categories.
The “pyramid of evidence” portrayed in Figure 1 reflects current thinking about the preferred hierarchy of research designs in the field of medicine. Studies that are further up on the pyramid offer the potential of reduced bias, thereby enabling conclusions about cause-and-effect to be made with increased confidence.
Figure 1: The Pyramid of Evidence for Research Designs
Croswell, J. M., & Kramer, B. S. (2009, March). Clinical trial design and evidence based outcomes in the study of liver diseases. Retrieved from https://www.researchgate.net/figure/The-pyramid-of-evidence-represents-a-general-hierarchy-of-preferred-clinical-study_fig1_24028875
As you can see from the pyramid, randomized controlled trials (RCTs), in which individuals are randomly assigned to groups that receive a new treatment/intervention, an existing treatment/intervention, or a placebo/control, are viewed as the “gold standard” of health-related research. However, this is an area of controversy in the field of population health. Specifically, the debate about the feasibility and appropriateness of RCTs for population health research has revolved largely around four themes:4,5
- the difficulty of conducting RCTs for complex programmatic population health interventions, many of which require more flexible, community-driven approaches; it’s seldom feasible to randomly assign neighbourhoods or communities as units of analysis;
- the difficulty of conducting RCTs for policy research (i.e., randomly assigning jurisdictions to be the recipients or non-recipients of population-level policies);
- the difficulty of interpreting results, especially negative findings; and
- the tendency to downgrade the contribution of non-RCTs, including observational studies, which are often the only practical design for assessing the impact of community-based public health interventions.
Effective population health research necessitates a more pragmatic approach to the use of research designs and methods that are appropriate and efficient for the questions/issues under consideration. In practice, this requires the use of multiple, complementary methods including qualitative research and observational designs. Moreover, it’s of equal, if not greater, importance to assess the implementation and sustainability of population health interventions in light of social and political factors that enable or inhibit change.5
2. What outcomes can I expect if I implement this research?
Evaluating the adequacy of evidence about a population health intervention needs to include an examination of the range of outcomes considered. The evaluation criteria should help to determine whether the measured outcomes address:
- the interests of the people who might be involved in deciding on or delivering the intervention and (importantly) those affected by it;
- unanticipated as well as anticipated effects of the intervention (beneficial or otherwise);
- the efficacy of the intervention, as well as its effectiveness, and the distributional equity of the impact (i.e., ensuring that the benefits of an intervention are distributed equally among a priority population).
Identification of the appropriate range of outcomes that should be included in a piece of evaluative research is one part of a pre-evaluation procedure known as evaluability assessment.6 Evaluability assessment was developed in the program evaluation field more than two decades ago and has since been popularized within health promotion. Evaluability assessment requires consensus about the successful outcomes of an intervention from important stakeholders, including agreement on the types of evidence deemed to be adequate to reach a conclusion on the value of an intervention.
Unintended effects may detract from the intended effects to such an extent that assessment of the success of the intervention warrants revision. Evaluative research that records only the intended outcomes of an intervention may fail to detect other positive or negative consequences.
Efficiency and equity questions have only recently been emphasized in evidence-based medicine, thanks in part to the advent of health equity impact assessment tools.7 The appraisal of evidence on effective public health interventions must determine whether efficiency and equity have been addressed, and if so, how well.
3. Will my target population be able to use this research?
Perhaps a better way of phrasing this question is whether the evidence is applicable, or “generalizable,” to your population of interest.
Three pieces of information are critical to answer this question:
- the nature of the intervention (key components, how it was implemented or delivered);
- the intervention context (characteristics of those receiving the intervention, including those who demonstrated a benefit and those who did not) and the broader social, cultural, and political environment where it was implemented;
- potential interactions between an intervention and its context (e.g., how the characteristics of a community where an intervention was implemented could affect its impact).
Interactions between interventions and contextual factors can have two implications. First, they are likely to affect the generalizability of the intervention (i.e., what works in Community A may not work in Community B). Second, interactions greatly complicate systematic reviews, which pool the results of different studies. Criteria for assessing evidence on public health interventions should, therefore, ascertain whether contextual interactions have been sought, understood, and explained.8 Where strong interactions are known to exist between an intervention and its context, it can be preferable to explore and explain their separate effects rather than pooling the findings.
References
1. Rychetnik, L., & Wise, M. (2004). Advocating evidence-based health promotion: Reflections and a way forward. Health Promotion International, 19(2), 247–257.
2. Rychetnik, L., Frommer, M., Hawe, P., & Shiell, A. (2002). Criteria for evaluating evidence on public health interventions. Journal of Epidemiology & Community Health, 56(2), 119–127.
3. Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Chicago: Rand McNally & Company.
4. Goodstadt, M. S., Hyndman, B., McQueen, D. V., Potvin, L., Rootman, I., & Springett, J. (2001). Evaluation in health promotion: Synthesis and recommendations. In Rootman, I., Goodstadt, M. S., Hyndman, B., McQueen, D. V., Potvin, L., et al. (Eds.), Evaluation in health promotion: Principles and perspectives (pp. 517–534). Copenhagen: World Health Organization.
5. Sanson-Fisher, R. W., Bonevski, B., Green, L. W., & D’Este, C. (2007). Limitations of the randomized controlled trial in evaluating population-based health interventions. American Journal of Preventive Medicine, 33(2), 155–161.
6. Thurston, W. E., Graham, J., & Hatfield, J. (2003). Evaluability assessment: A catalyst for program change and improvement. Evaluation and the Health Professions, 26(2), 206–221.
7. Povall, S. L., Haigh, F. A., Abrahams, D., & Scott-Samuel, A. (2014). Health equity impact assessment. Health Promotion International, 29(4), 621–633.
8. Minary, L., Alla, F., Cambon, L., Kivits, J., & Potvin, L. (2018). Addressing complexity in population health intervention research: The context/intervention interface. Journal of Epidemiology and Community Health, 0, 1–5. doi: 10.1136/jech-2017-209921.