Glossary
This glossary defines terms and concepts used in Enhancing Program Performance with Logic Models. All terms are listed in alphabetical order below.
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
A
Accountability: Responsibility for the effective and efficient performance of programs. Measures of accountability focus on (1) the benefits accruing from the program as valued by customers and supporters, and (2) how resources are invested and the results attained.
Anonymity: An attempt to keep the participants unknown to the people who use the evaluation and, if possible, to the investigators themselves.
Assets: Strengths, opportunities, or other valuable qualities and resources.
Assumptions: The beliefs we have about the program, the participants, and the way we expect the program to operate; the principles that guide our work. Faulty assumptions may be the reason we don’t achieve the expected outcomes.
B
Baseline: Information about the situation or condition prior to a program or intervention.
Benchmarks: Performance data used either as a baseline against which to compare future performance or as a marker of progress toward a goal.
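As a small, hypothetical illustration of how benchmark data are used, the sketch below compares current performance against a baseline value and a performance target; all names and figures are invented.

```python
# Hypothetical sketch: comparing current performance to a baseline benchmark
# and to a performance target. All numbers are illustrative only.

baseline = 120      # e.g., participants served the year before the program change
current = 150       # participants served this year
target = 180        # performance target for the end of the funding period

change_from_baseline = (current - baseline) / baseline * 100
progress_toward_target = (current - baseline) / (target - baseline) * 100

print(f"Change from baseline: {change_from_baseline:.1f}%")      # 25.0%
print(f"Progress toward target: {progress_toward_target:.1f}%")  # 50.0%
```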
C
Cluster Evaluation: A type of evaluation that seeks to determine the impacts of a collection of related projects on society as a whole. Cluster evaluation looks across a group of projects to identify issues and problems that affect an entire area of a program. The approach was designed and used by the W. K. Kellogg Foundation to determine the effectiveness of its grant making.
Confidentiality: An attempt to remove any elements that might indicate the subject’s identity.
Context Evaluation: A type of evaluation that examines how the project functions within the economic, social, and political environment of its community and project setting.
Cost-Benefit Analysis: Process to estimate the overall cost and benefit of a program or of components within a program. Seeks to answer the question “Is this program or product worth its costs?” or “Which of the options has the highest benefit-cost ratio?” This is only possible when all values can be converted into monetary terms.
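For readers who want to see the arithmetic, here is a minimal sketch of a benefit-cost ratio calculation; the program options and dollar figures are hypothetical, and a real analysis would first have to monetize every benefit and cost.

```python
# Minimal sketch of a benefit-cost ratio; all figures are hypothetical.
def benefit_cost_ratio(total_benefits: float, total_costs: float) -> float:
    """Return the ratio of monetized benefits to costs (values > 1 favor the program)."""
    return total_benefits / total_costs

# Option A: a workshop series; Option B: an online course (invented numbers)
option_a = benefit_cost_ratio(total_benefits=250_000, total_costs=100_000)  # 2.5
option_b = benefit_cost_ratio(total_benefits=180_000, total_costs=60_000)   # 3.0

print(f"Option A: {option_a:.2f}, Option B: {option_b:.2f}")
```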
D
Developmental Evaluation: Evaluation in which the evaluator is part of a collaborative team that monitors what is happening in a program, both processes and outcomes, in an evolving environment of constant feedback and change.
E
Effectiveness: Degree to which the program yields desired/desirable results.
Efficiency: Comparison of outcomes to costs.
Empowerment Evaluation: Use of evaluation concepts, techniques, and findings to foster improvement and self-determination. Program participants maintain control of the evaluation process; outside evaluators work to build the evaluation capacity of participants and help them use evaluation findings to advocate for their program.
Environment (external factors): The surrounding environment in which the program exists and which influences the implementation and success of the initiative, including politics, climate, socio-economic factors, market forces, etc.
Evaluation: Systematic inquiry to inform decision making and improve programs. Systematic implies that the evaluation is a thoughtful process of asking critical questions, collecting appropriate information, and then analyzing and interpreting the information for a specific use and purpose.
F
Formative Evaluation: Conducted during the development and implementation of a program, this evaluation's primary purpose is to provide information for program improvement.
G
H
I
Impact: The social, economic, and/or environmental effects or consequences of the program. Impacts tend to be long-term achievements. They may be positive, negative, or neutral; intended or unintended.
Impact Evaluation: A type of evaluation that determines the net causal effects of the program beyond its immediate results. Impact evaluation often involves a comparison of what appeared after the program with what would have appeared without the program.
Impact Indicator: Expression or indication of impact; evidence that the impact has been or is being achieved.
Implementation Evaluation: Evaluation activities that document the evolution of a project and provide indications of what happens within a project and why. Project directors use information to adjust current activities. Implementation evaluation requires close monitoring of program delivery.
Indicator: Expression of what is/will be measured or described; evidence that signals achievement. Answers the question “How will I know it?”
Inputs: Resources that go into a program, including staff time, materials, money, equipment, facilities, and volunteer time.
J
K
L
M
Measure/Measurement: Representation of quantity or capacity. In the past, these terms carried a quantitative implication of precision and, in the field of education, were synonymous with testing and instrumentation. Today, the term “measure” is used broadly to include quantitative and qualitative information to understand the phenomena under investigation.
Mixed Methods: The use of both qualitative and quantitative methods to study phenomena. These two sets of methods can be used simultaneously or at different stages of the same study.
Monitoring: Ongoing assessment of the extent to which a program is operating in a manner consistent with its design. Often involves site visits by experts for compliance-focused reviews of program operations.
N
O
Outcome Evaluation: A type of evaluation to determine what results from a program and its consequences for people.
Outcome Monitoring: The regular or periodic reporting of program outcomes in ways that stakeholders can use to understand and judge results. Outcome monitoring exists as part of program design and provides frequent and public feedback on performance.
Outcomes: Results or changes of the program. Outcomes answer the questions “So what?” and “What difference does the program make in people’s lives?” Outcomes may be intended or unintended, positive or negative. They fall along a continuum from short-term (immediate, initial, proximal) to medium-term (intermediate) to long-term (final, distal) outcomes; long-term outcomes are often synonymous with impact.
Outputs: Activities, services, events, products, and participation generated by a program.
P
Participatory Evaluation: Evaluation in which the evaluator’s perspective carries no more weight than that of other stakeholders, including participants, and the evaluation process and its results are relevant and useful to stakeholders for future actions. Participatory approaches attempt to be practical, useful, and empowering to multiple stakeholders and actively engage all stakeholders in the evaluation process.
Performance Measure: A particular value or characteristic used to measure/examine a result or performance criteria; may be expressed in a qualitative or quantitative way.
Performance Measurement: The regular measurement of results and efficiency of services or programs.
Performance Targets: The expected result or level of achievement; often set as numeric levels of performance.
Personnel Evaluation: Involves an assessment of job-related skills and performance.
Policy Evaluation: Evaluation of policies, plans, and proposals for use by policy makers and/or communities trying to effect policy change.
Probability: The likelihood of an event or relationship occurring, the value of which will range from 0 (never) to 1 (always).
Process Evaluation: A type of evaluation that examines what goes on while a program is in progress. It assesses what the program actually is and how it is delivered.
Product Evaluation: The evaluation of functional artifacts.
Program: An educational program is a series of organized learning activities and resources aimed at helping people make improvements in their lives.
Program Evaluation: The systematic process of asking critical questions, collecting appropriate information, analyzing, interpreting, and using the information in order to improve programs and be accountable for positive, equitable results and resources invested.
Q
Qualitative Analysis: The use of systematic techniques to understand, reduce, organize, and draw conclusions from qualitative data.
Qualitative Data: Data that is thick in detail and description, usually in a text or narrative format.
Qualitative Methodology: Methods that examine phenomena in depth and detail without predetermined categories or hypotheses. Emphasis is on understanding the phenomena as they exist. Often associated with naturalistic inquiry and an inductive, social-anthropological world view. Qualitative methods usually consist of three kinds of data collection: observation, open-ended interviewing, and document review.
Quantitative Analysis: The use of statistical techniques to understand quantitative data and to identify relationships between and among variables.
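As one small, hypothetical example of such a technique, the sketch below computes a Pearson correlation between two program variables using Python's standard library (statistics.correlation requires Python 3.10 or later); the variable names and data are invented for illustration.

```python
# Hypothetical sketch: examining the relationship between two program variables
# with a Pearson correlation coefficient. Data are invented for illustration.
from statistics import correlation

hours_attended = [2, 4, 4, 6, 8, 10, 12]   # hours of participation per person
knowledge_gain = [1, 3, 2, 4, 5, 6, 7]     # pre/post knowledge-gain scores

r = correlation(hours_attended, knowledge_gain)
print(f"Pearson r = {r:.2f}")  # a value near +1 suggests a strong positive relationship
```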
Quantitative Data: Data in a numerical format.
Quantitative Methodology: Methods that seek the facts or causes of phenomena that can be expressed numerically and analyzed statistically. Interest is in generalizability. Often associated with a positivist, deductive, natural-science world view. Quantitative methods consist of standardized, structured data collection, including surveys, closed-ended interviews, and tests.
R
Random Number: A number whose value is not dependent upon the value of any other number; can result from a random number generator program and/or a random numbers table.
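A short illustration of drawing random numbers with a generator, for example to select a random sample of participant IDs, is sketched below using Python's built-in random module; the sampling frame is hypothetical.

```python
# Illustrative sketch: drawing random numbers and a random sample with Python's
# built-in generator. Seeding makes the draw reproducible; participant IDs are invented.
import random

random.seed(42)                     # optional: reproducible results
print(random.random())              # a random float in [0.0, 1.0)
print(random.randint(1, 100))       # a random integer between 1 and 100

participant_ids = list(range(1, 201))          # hypothetical sampling frame
sample = random.sample(participant_ids, k=20)  # 20 IDs drawn without replacement
print(sample)
```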
Reliability: The consistency of a measure over repeated use. A measure is said to be reliable if repeated measurements produce the same result.
Reporting: Presentation, formal or informal, of evaluation data or other information to communicate processes, roles, and results.
Response Rate: The percentage of those asked or sampled who actually respond and provide information.
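The calculation itself is simple arithmetic, as in this hypothetical sketch (the survey counts are invented).

```python
# Hypothetical sketch of a response-rate calculation; numbers are invented.
surveys_sent = 400
surveys_returned = 268

response_rate = surveys_returned / surveys_sent * 100
print(f"Response rate: {response_rate:.1f}%")  # 67.0%
```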
S
Self-evaluation: Self-assessment of program processes and/or outcomes by those conducting or involved in the program.
Situation: The context and need that give rise to a program or initiative; logic models are built in response to an existing situation.
Situational Analysis: A systematic process for assessing needs (discrepancy or gap between what exists and a desired state) and assets (qualities or strengths) as a foundation for program priority setting.
Stakeholder: Person or group of people with a vested interest (a stake) in a program or evaluation, including clients, customers, beneficiaries, elected officials, support groups, program staff, funders, and collaborators.
Stakeholder Evaluation: Evaluation in which stakeholders participate in the design, conduct, analysis, and/or interpretation of the evaluation.
Statistical Significance: Indicates the probability that a result is not due to chance alone. The level of significance determines the degree of certainty, or confidence, with which chance can be ruled out. Statistical significance does not equate to practical importance or value.
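As a concrete, hypothetical illustration of testing whether an observed difference is likely due to chance, the sketch below compares scores for a program group and a comparison group with a two-sample t-test; it assumes the third-party SciPy library is installed, and the scores are invented.

```python
# Hypothetical sketch: is the difference between two groups' scores likely due to chance?
# Assumes SciPy is installed; scores are invented for illustration.
from scipy import stats

program_group = [78, 85, 82, 90, 88, 84, 91, 86]
comparison_group = [75, 80, 77, 82, 79, 81, 76, 78]

t_stat, p_value = stats.ttest_ind(program_group, comparison_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A small p-value (commonly p < 0.05) lets us rule out chance with some confidence,
# but it says nothing about whether the difference matters in practical terms.
print("statistically significant" if p_value < 0.05 else "not statistically significant")
```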
Statistics: Numbers or values that help to describe the characteristics of a selected group; technically, statistics describe a sample of a population.
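A brief, hypothetical sketch of statistics describing a sample, using Python's standard library; the satisfaction ratings are invented.

```python
# Hypothetical sketch: descriptive statistics for a sample of satisfaction scores.
import statistics

sample_scores = [3, 4, 4, 5, 2, 4, 5, 3, 4, 5]   # invented satisfaction ratings (1-5)

print("n      =", len(sample_scores))
print("mean   =", statistics.mean(sample_scores))
print("median =", statistics.median(sample_scores))
print("stdev  =", round(statistics.stdev(sample_scores), 2))
```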
Summative Evaluation: Evaluation conducted after completion of a program (or a phase of the program) to determine program effectiveness and worth.
T
Theory-Based Evaluation: Evaluation that begins with identifying the underlying theory about how a program works and uses this theory to build in points for data collection to explain why and how effects occur.
U
Utilization-Focused Evaluation: A type of evaluation that focuses its design and implementation on use by the intended audience. The evaluator, rather than acting as an independent judge, becomes a facilitator of evaluative decision making by intended users.
V
Validity: The extent to which a measure actually captures the concept of interest.