Operations Research

Operations Research (OR) is an approach used to improve service delivery or to strengthen other aspects of programs. Although OR can include diagnostic or evaluative studies, the most common use of OR is the intervention study, consisting of five steps:

  1. Identifying problems related to service delivery;
  2. Identifying possible strategies to address these problems;
  3. Testing these strategies under quasi-experimental conditions;
  4. Disseminating the findings to program managers and policymakers; and
  5. Using the information to improve service delivery programs (Fisher et al., 1991).


This approach is particularly useful for testing new and potentially controversial service delivery strategies. The implementing organization can experiment with the new approach on a limited scale, without having to adopt it throughout the organization. If the strategy (intervention) proves ineffective or creates unwanted political backlash, the organization can discontinue it and pursue alternative approaches at relatively little political cost. If the intervention proves effective and acceptable to the population in question, the organization can use these results to justify the adoption or expansion of the intervention within the organization. Moreover, the results of a successful OR project may prompt other organizations to adopt the same intervention in their own programs.

The evaluation of OR should address both process and impact. In the past there was relatively little evaluation of OR, in part because most OR projects are designed to evaluate an intervention. Should one then “evaluate an evaluation?” To the extent evaluation of OR projects occurred, it tended to measure outputs and, with a few notable exceptions (Solo et al., 1998), there was little systematic assessment of impact: the extent to which the OR study resulted in changes in service delivery procedures or policy.

In 1992-93, an OR working group, convened under The EVALUATION Project, proposed a set of indicators to evaluate OR studies (later published in Bertrand and Brown, 1997). This work paved the way for the development of a more complete set of indicators under the FRONTIERS Program; evaluators tested these indicators in various countries between 1999 and 2001.

The indicators presented in this database measure both how well a study is carried out (“process”) and the extent to which a study results in changes in service delivery procedures or policy (“impact”). In addition, the set includes indicators of context, which describe factors that facilitate or hinder the conduct of OR and the utilization of results; they are useful in explaining what has (or has not) happened but, in contrast to the indicators of process or impact, they are not scored.

For those interested in more systematically tracking “what happens” as a result of OR studies, this list of indicators should prove beneficial. For others, the exercise may seem too academic and the list of indicators too extensive. Although the full list of indicators developed to evaluate OR projects is presented here, users are encouraged to select the subset of indicators most relevant to their own needs.
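As a purely illustrative sketch (the indicator names, categories, and 0-2 scoring scale below are assumptions, not the actual FRONTIERS indicators), one way to organize such a checklist is to keep scored process and impact indicators separate from unscored, descriptive context indicators:

```python
# Hypothetical sketch of an OR evaluation checklist: process and impact
# indicators receive scores, while context indicators are recorded as
# descriptive notes only and are never scored.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ScoredIndicator:
    name: str
    category: str                # "process" or "impact"
    score: Optional[int] = None  # illustrative 0-2 scale; None until assessed


@dataclass
class ContextIndicator:
    name: str
    note: str = ""               # descriptive only


@dataclass
class ORStudyChecklist:
    study: str
    scored: List[ScoredIndicator] = field(default_factory=list)
    context: List[ContextIndicator] = field(default_factory=list)

    def total(self, category: str) -> int:
        """Sum the scores assigned to one category ("process" or "impact")."""
        return sum(i.score or 0 for i in self.scored if i.category == category)


# Example usage with made-up indicators and scores:
checklist = ORStudyChecklist(
    study="Example OR study",
    scored=[
        ScoredIndicator("Study design matched the research question", "process", 2),
        ScoredIndicator("Findings reflected in service delivery guidelines", "impact", 1),
    ],
    context=[
        ContextIndicator("Change in ministry leadership during the study",
                         "new program director appointed midway"),
    ],
)
print(checklist.total("process"), checklist.total("impact"))
```

Evaluators selecting only a subset of indicators could simply populate fewer entries; the context notes remain available to explain the scores without contributing to them.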

Methodological Challenges of Evaluating OR

  • “Impact” is generally defined as change attributable to the project, but OR is generally only one of many influences in decision-making.

OR has been a great catalyst in the field of family planning, and it now plays an increasingly large role in other areas of reproductive health (RH). Nevertheless, an OR study alone rarely results in a major change in service delivery or policy, and demonstrating cause and effect is virtually impossible when evaluating the impact of an OR study on the service delivery environment. To meet the conditions of plausible attribution, the change in service delivery or policy must:

  • Be instigated by persons familiar with the OR results;
  • Take place after the OR study; and
  • Be consistent with the results and recommendations of the OR study.

  • Decision-making is a complex and not necessarily rational process.

Studies on the role of research findings in decision-making have shown that many other competing factors influence decision-making (Trostle, Bronfman, and Langer, 1999; Anderson et al., 1999; Iskandar and Indrawati, 1996). Program managers and other key decision-makers will only consider implementing recommendations from research they consider to be of high quality, conducted by reputable researchers, consistent with organizational values and needs as well as with social and political context, and able to provide an adequate solution to a recognized problem with available resources. Other less concrete factors such as personal relationships with researchers (Trostle, Bronfman and Langer, 1999) or job security also affect decisions. Context is not easy to measure; yet evaluators must consider it because of its important role in the translation of research recommendations into program and policy change.

  • The term “policy change” covers a large range of actions that differ substantially in their potential impact.

Policy includes formal government declarations, laws, and statutes, which those working in policy refer to as “Policy with a capital P.” In addition, policy can refer to the regulations, guidelines, norms, and standards of a given organization (which some label “policy with a small p”). Within the same country, policies can be enacted at different levels of the program and through different processes. Thus, the methodological challenge for evaluating the effects of OR projects on policy is to establish a working definition of the type of policy change that will be considered relevant in making this judgment. One possible criterion for defining a “policy change” is that the change in regulations, guidelines, norms, or standards be implemented system-wide within the organization conducting the research.
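A minimal sketch of how this working definition might be applied in practice; the function name and the two inputs are illustrative assumptions rather than part of any established instrument:

```python
# Hypothetical sketch of the working definition suggested above: a change
# counts as a relevant "policy change" only if it alters regulations,
# guidelines, norms, or standards AND is implemented system-wide within
# the organization that conducted the research.

POLICY_INSTRUMENTS = {"regulation", "guideline", "norm", "standard"}


def is_relevant_policy_change(instrument: str, system_wide: bool) -> bool:
    """Apply the working definition to one observed change."""
    return instrument in POLICY_INSTRUMENTS and system_wide


# Example: a revised clinical guideline adopted in only two pilot districts
# would not qualify under this criterion.
print(is_relevant_policy_change("guideline", system_wide=False))  # False
print(is_relevant_policy_change("norm", system_wide=True))        # True
```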

  • Usually evaluators cannot measure impact until two to three years after the intervention is completed; however, in the course of this delay, other factors may intervene.

While there is no golden rule for how long to wait to evaluate the impact of an OR study, at least two or three years are usually needed to allow adequate time for an organization to adopt and institutionalize changes based on the research. An alternative is to wait for all OR studies in a given program to end, and then evaluate them as a group. However, a time lapse of much more than three years may allow too many other changes to take place that might further complicate an evaluation. Due to high staff turnover, evaluators may find it difficult or impossible to contact and interview important informants. Many contextual changes may occur in this time and may further complicate the question of attribution.

  • The responses of key informants are by definition subjective.

The indicators presented here rely on three primary data sources: key informant interviews, project documents, and site visits to observe innovations adopted as a result of an OR project. While key informants attempt to be objective, their answers by definition reflect their own perspectives. To minimize this bias, the evaluator should interview several individuals about a given study to increase the credibility of the information. Where disagreement occurs, evaluators may seek more information from other sources but ultimately must use their best judgment, because there is no systematic way to “weight” the opinions of two key informants.
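As an illustrative sketch only (the questions and responses are invented), one simple way to organize answers from several informants is to flag items where responses diverge, so the evaluator knows where to seek corroboration from project documents or site visits rather than attempting to weight opinions:

```python
# Hypothetical sketch: tally key informant responses per question and flag
# items where informants disagree, prompting follow-up with other sources.

from collections import Counter

# Responses keyed by question, one answer per informant (made-up data).
responses = {
    "Did the organization revise its guidelines after the study?": ["yes", "yes", "unsure"],
    "Was the intervention expanded to other districts?": ["yes", "no", "no"],
}

for question, answers in responses.items():
    counts = Counter(answers)
    top_answer, top_count = counts.most_common(1)[0]
    if top_count == len(answers):
        status = f"consensus: {top_answer}"
    else:
        status = f"disagreement {dict(counts)} -> corroborate with documents or a site visit"
    print(f"{question}\n  {status}")
```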

  • The checklist of indicators does not adequately measure or reflect the importance of the dissemination of results.

OR is conducted with a purpose: to use the results to improve programs. Thus, a necessary (though not sufficient) condition is that appropriate audiences learn about the results and use them in designing their own programs. For example, in 1997, a small Guatemalan NGO tested the use of a standard days method (SDM) necklace to help Mayan couples correctly practice the rhythm method. Use failure rates were low, and most couples (who had wanted to use rhythm) found the method very satisfactory. The researchers widely disseminated the results of this study to other groups working with both Mayan and non-Mayan (or ladino) populations, with the result that the Ministry of Health subsequently included SDM in the range of contraceptives it provides. In short, dissemination is a crucial part of the OR process; without effective dissemination, OR cannot influence service delivery or policy as it was designed to do.

The OR Conceptual Framework

The figure below, from Turning research into practice: suggested actions from case studies of sexual and reproductive health research (WHO, 2006), presents a simplified pathway from research to the institutionalization of evidence-based practice. The success of this pathway depends on ensuring that appropriate media are used to disseminate findings to the target audiences. Where relevant, and once the appropriate policy-making steps have been initiated, the findings are introduced into practice guidelines. The findings can thus be used to inform sexual and RH policies or program development and strengthening; they can equally serve to advocate for the implementation of best practices. In some instances, depending on the nature of the study, primary findings can be used to develop and test interventions. Successful interventions are subsequently promoted through training; such interventions can be further integrated into health systems through an adaptation and adoption process and scaled up for wider application. At the sexual and RH system level, pertinent issues, problems, and needs emerge as part of a dynamic process for ensuring the efficiency, effectiveness, and quality of services. These feed back into global or national research questions and priorities.


OR Framework

___________

References:

Anderson, M., J. Cosby, B. Swan, H. Moore, and M. Broekhoven. 1999. “The Use of Research in Local Health Service Agencies.” Social Science and Medicine 49: 1007-1019.

Bertrand, J.T. and L. Brown. 1997. The Working Group on the Evaluation of Family Planning Operations Research. Final Report. The EVALUATION Project. University of North Carolina, Chapel Hill: Carolina Population Center.

Fisher, A., J. Laing, J. Stoeckel, and J. Townsend. 1991. Handbook for Family Planning Operations Research Design. New York, NY: The Population Council.

Iskandar, M. and S. Indrawati. 1996. Indonesia: Utilization of Completed Operations Research Studies. Final report. Jakarta: The Population Council.

Solo, J., A. Cerulli, R. Miller, I. Askew, and E. Pearlman. 1998. Strengthening the Utilization of Family Planning Operations Research: Findings from Case Studies in Africa. New York, NY: The Population Council.

Trostle, J., M. Bronfman, and A. Langer. 1999. “How Do Researchers Influence Decision-Makers? Case Studies of Mexican Policies.” Health Policy and Planning 14(2): 103-114.