Planning a program evaluation
matching methodology to program status
As federal agencies have increasingly specified the methodology expected in the program evaluations they fund, the long-standing debate about what constitutes scientifically based research has been resurrected. In fact, there are no simple answers to questions about how well programs work, nor is there a single analytic approach suited to evaluating the wide variety of possible programs and their complexities. Evaluators need to be familiar with a range of analytic methods, and it is often necessary to use several methods simultaneously, including both quantitative and qualitative approaches. Some evaluation approaches are particularly helpful in the early developmental stages of a program, whereas others are better suited to situations in which the program has become more routinized and broadly implemented. One of the key points we stress in this chapter is that the evaluation design should use the most rigorous method possible to address the questions posed and should be appropriately matched to the program's developmental status. Warnings not to evaluate a developing program with an experimental design have been sounded for some time [Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage; Lipsey, M. (2005). Improving evaluation of anticrime programs. Washington, DC: National Academies Press], but this is the first time, to our knowledge, that the phases of program development have been specified in detail and linked to evaluation designs. To guide design decisions, we present a framework that builds on the goal structure devised by the Department of Education's (DoE) Institute of Education Sciences (IES). It is important to note, however, that designing a program evaluation is often a complicated process, and the ultimate design rarely comes straight out of a textbook. Rather, design decisions require responsiveness and judgment particular to each setting, given the practical constraints of time and resources.
Our intent is therefore not to be prescriptive, but rather to provide guidance, tools, and resources to help novice evaluators develop evaluation designs that are responsive to the needs of the client and appropriate to the developmental status of the program.
Hamilton, J., & Feldman, J. (2014). Planning a program evaluation: Matching methodology to program status. In J. Elen (Ed.), Handbook of research on educational communications and technology (pp. 249-256). Dordrecht: Springer.