As we saw in the previous sections, a logic model shows how resources, activities, participation, and outcomes are linked. Simple models often depict a single chain of relationships: A leads to B, which leads to C. In this section, we will see that multiple paths and directional flows may depict programs more realistically. This series of connections has been called a chain of objectives (Suchman, 1967), contingency relationships or outcome hierarchies (Funnell, 2000), a program hierarchy (Bennett, 1976; Rockwell and Bennett, 1998), a means-end hierarchy (Patton, 1997), a chain of outcomes (United Way of America, 1996), and a heuristic of program objectives (Mayeske, 1994).
We often say that we expect our programs to “cause” the desired change or “produce” the desired results. In fact, many factors affect how our programs develop and unfold, sometimes working with and sometimes working against them. In education and outreach programs, much depends on the participants (target recipients) and their characteristics (including attitudes, motivation, knowledge and learning styles, skills, history), as well as the context within which the recipients live and work. It may be more appropriate to think about our programs as offering opportunities and possibilities (Pawson and Tilley, 1997) rather than “causing” a result.
This series of connections depicts the program’s theory of action (Patton, 1997) or theory of change (Weiss, 1998).
“Theory” may sound too academic for some, but here it really just refers to conventional wisdom.
“A theory of change is a description of how and why a set of activities–be they part of a highly focused program or a comprehensive initiative–are expected to lead to early, intermediate, and longer-term outcomes over a specified period.”
Anderson, 2000, slide 15
We are not talking about “grand theory” but about your expectations and beliefs, either explicit or implicit, about how and why a program works. They may not be widely accepted or even right. They are your hypotheses about what you expect to happen.
These are not absolute truths or direct cause-effect relationships. In the words of M. Q. Patton: “our aim is more modest: reasonable estimations of the likelihood that particular activities have contributed in concrete ways to observed effects–emphasis on the word reasonable. Not definitive conclusions. Not absolute proof” (p. 217).
Webster’s definition of change: to make different; alter; modify.
Programming is about making something different–hopefully better. We can think of programs as working to open new opportunities, change the options that are available, improve decision making, and build capacity. As we think about change, however, we want to remember that:
- Positive program outcomes may result in stability, not change.
- Not all change is good; sometimes change upsets natural, positive relationships or further disempowers the powerless. We must be constantly vigilant for issues of equity and potential negative consequences of our program efforts.
- Conflicts may arise between individual and public benefit.
Definitions of change, and of what counts as positive achievement, may differ with one’s perspective; participants, staff members, and funders may each see it differently.
Most programs are based on a theory of change, whether explicit or implicit. Programs are usually designed and implemented based on some rationale, some purpose, some reason for being. Exceptions might be totally spontaneous endeavors or totally inductive approaches that emerge and take shape without any preconceived purpose or expected value. In most cases, however, we have some a priori notion of purpose and expectations.
The idea of causation is central to the logic model. The logic model depicts the program’s assumed causal connections. Yet, cause-effect relationships are problematic in our world of education and outreach programming. Experience shows us that:
- In almost all cases, programs have only a partial influence over results. External factors beyond the program’s control influence the flow of events. This applies particularly to longer-term outcomes.
- The myriad factors that affect the development and implementation of community initiatives make it difficult to tease out the various causal connections. Participants have their own characteristics and are embedded in a web of influences that affect participant outcomes (family relationships, experiences, economy, culture, etc.). The external environment affects and is affected by the program. These many and various factors may come into play before, during, and after program implementation in an almost constant dynamic of influences.
- Seldom is there just one cause. More likely, multiple cause-effect chains interact.
- Short project time lines make it difficult to document the assumed causal connections.
- Measuring causal relationships and controlling for contextual factors through experimental or quasi-experimental designs is expensive and often not feasible.
- Data collected through various methods – quantitative and qualitative – often show different (and sometimes contradictory) causal associations. Seldom do we “prove” that a particular outcome is the result of a particular intervention.
- Causal relationships are rarely as simple and clear as the mosquito example above or as the “if-then” relationships suggest. Rather, multiple and interacting relationships affect change, often functioning as feedback loops with the possibility of delays (see Rogers, 2000; Funnell, 2000; and Williams, 2002).
Systems theory suggests a dynamic and circular approach to understanding causal relationships rather than a one-dimensional, linear approach. Logic models can be created to depict these more iterative causal mechanisms and relationships through the addition of feedback loops, two-way arrows, narrative explanations, or a matrix. Limitations are imposed by the necessity of communicating on paper in a two-dimensional space.
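For readers who think in data structures, an iterative logic model of this kind can be sketched as a small directed graph, where a feedback loop appears naturally as a cycle. This is only a conceptual illustration under assumed, hypothetical node names; a real program would substitute its own resources, activities, and outcomes.

```python
# A minimal sketch of a logic model as a directed graph (adjacency list).
# Node names are hypothetical placeholders, not prescribed categories.
logic_model = {
    "inputs":               ["activities"],
    "activities":           ["participation"],
    "participation":        ["short-term outcomes"],
    "short-term outcomes":  ["medium-term outcomes"],
    "medium-term outcomes": ["long-term outcomes"],
    # A feedback loop: long-term outcomes inform future inputs.
    "long-term outcomes":   ["inputs"],
}

def has_feedback_loop(graph, start):
    """Depth-first search: does any path from `start` lead back to it?"""
    stack, seen = list(graph.get(start, [])), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

print(has_feedback_loop(logic_model, "inputs"))  # True: circular, not a one-way chain
```

Dropping the last entry of the dictionary turns the model back into the simple linear chain (A leads to B leads to C) described earlier in this section.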
Remember, the logic model is a “model” – not reality. It depicts assumed causal connections, not proven cause-effect relationships. Sometimes, even simple models are very useful. They can help clarify expected linkages, tease out underlying assumptions, focus attention on principles to test, educate funders and policy makers, and move a program into action and learning.