7.15: Evaluation designs
As we think about when to collect data, we also need to consider the research design that will help us eliminate plausible rival explanations.
Consider the following designs as you further refine your evaluation plan.
- AFTER ONLY (post program): Evaluation is done after the program is completed; for example, a post-program survey or end-of-session questionnaire. It is a common design but the least reliable of these, because we do not know what things looked like before the program.
- RETROSPECTIVE (post program): Participants are asked to recall or reflect on their situation, knowledge, attitudes, or behaviors prior to the program. This design is commonly used in education and outreach programs, but memory can be faulty.
- BEFORE-AFTER (before and after program): Program recipients or situations are examined before the program and then again after it; for example, pre-post tests or before-and-after observations of behavior. This design is commonly used in educational program evaluation, and differences between Time 1 and Time 2 are often attributed to the program. But many things other than the program can happen over its course and affect the observed change (a simple pre-post analysis sketch follows this list).
- DURING (additional data “during” the program): Collecting information at multiple times during the course of a program is a way to identify associations between program events and outcomes. Data can be collected on program activities and services as well as on participant progress. This design does not appear to be commonly used in community-based evaluation, probably because of the time and resources its data collection requires.
- TIME SERIES (multiple points before and after the program): A time series design involves a series of measurements at intervals before the program begins and after it ends. It strengthens the simple before-after design by documenting the pre- and post-program patterns and the stability of any change. Take care that external factors did not coincide with the program and influence the observed change.
- CASE STUDY: A case study design uses multiple sources of information and multiple methods to develop an in-depth, comprehensive understanding of the program. Its strength lies in its depth and in its exploration of the reasons for observed effects.
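To make the before-after design concrete, here is a minimal analysis sketch in Python. All scores are hypothetical, and the paired t-test is only one common way to test pre-post change; the sketch assumes the scipy library is available.

```python
# A minimal sketch of analyzing before-after (pre-post) data with a
# paired t-test. All scores are hypothetical; real data would come from
# your pre- and post-program instruments.
from scipy import stats

# Hypothetical knowledge scores (0-100) for the same ten participants,
# measured before (Time 1) and after (Time 2) the program.
pre = [52, 61, 48, 70, 55, 63, 58, 49, 66, 60]
post = [64, 68, 55, 78, 60, 70, 65, 57, 74, 69]

# Paired t-test: are within-person changes larger than chance variation?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean change: {mean_change:.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Caution: a significant difference shows change, not causation. Other
# events during the program period could explain the change; that is
# the rival explanation a comparison group (below) helps rule out.
```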
Using comparisons
All of the above one-group designs can be strengthened by adding a comparison: another group (or groups), individual(s), or site(s). Comparison groups are groups that are not selected at random but are drawn from the same population. (When they are selected at random, they are called control groups.) The purpose of a comparison group is to add assurance that the program (the intervention), and not something else, caused the observed effects. It is essential that the comparison group be very similar to the program group.
Consider the following possibilities as comparisons (a sketch of one common comparison analysis follows the list):
- Between program participants (individuals, groups, organizations) and nonparticipants
- Between groups of individuals or participants experiencing different levels of program intensity
- Between locales where the program operates and sites without program intervention (e.g., streambed restoration, community revitalization)
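As one illustration of how a comparison group adds assurance, here is a minimal difference-in-differences sketch in Python. All numbers are hypothetical mean outcome scores, and the calculation assumes the comparison group is genuinely similar to the program group.

```python
# A minimal difference-in-differences sketch: compare the change in the
# program group to the change in a similar comparison group. All numbers
# are hypothetical mean outcome scores.

# Mean scores before (Time 1) and after (Time 2) the program.
program_pre, program_post = 54.0, 67.0        # program participants
comparison_pre, comparison_post = 53.0, 58.0  # similar nonparticipants

program_change = program_post - program_pre           # 13.0 points
comparison_change = comparison_post - comparison_pre  #  5.0 points

# The comparison group's change estimates what would have happened
# without the program; the difference between the two changes is the
# portion of the change attributable to the program, assuming the
# groups are truly comparable.
program_effect = program_change - comparison_change   #  8.0 points
print(f"Estimated program effect: {program_effect:.1f} points")
```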