Frequently Asked Questions About Evaluation

About Program Evaluation

What is program evaluation?

There are several different definitions of evaluation, but generally speaking evaluation is the use of social science research methods to systematically assess the merit, worth, or value of a program. The actual process of evaluation includes a range of activities such as setting goals, identifying indicators to measure program implementation and outcomes, and drawing conclusions about the merit of the program.

I hear the terms “formative” and “summative” a lot. What do they mean?

Evaluations tend to fall into two categories: formative or summative. Formative evaluation typically occurs during the development or beginning stages of a program for the purpose of improving the program. Summative evaluation typically occurs once the program is stabilized for the purpose of informing a judgment about whether to continue, expand, or discontinue the program.

What is the difference between research and evaluation?

Applied research and evaluation are quite similar and often use many of the same methods to collect information. The clearest distinction between the two forms of social inquiry is that evaluation is client-driven, while applied research is researcher-driven. The two also differ in intended use: applied research answers research questions that aim to contribute knowledge to the field, while evaluation answers evaluation questions that aim to provide information about the merit, worth, or value of something. Evaluators work closely with stakeholders (those who have a stake in the object of the evaluation or research, such as program decision-makers, implementers, and recipients) to design the evaluation. Applied researchers ground their research in social science theory, while inclusion of social science theory is not a necessary component of evaluation. In short, although research and evaluation produce knowledge in similar ways, they differ predominantly in purpose and use.

What is a program theory and how can it be helpful during an evaluation?

A program theory is the logic behind your program. The program’s theory, often displayed in a visual form such as a logic model, explicitly connects your program’s activities to its intended outcomes and goals. Describing a program theory also reveals the implicit and explicit assumptions underlying the program. A clear program theory helps you describe your program to others, including potential funding sources, and helps guide the evaluation of your program. It is a good idea to think about your program’s theory at the beginning stages of the program and/or evaluation.

Do evaluators have to follow a certain set of standards or principles? / Who oversees your work and holds you responsible?

We are accountable, first and foremost, to you, the client. In our profession, our reputation hinges on the quality and integrity of the work we conduct for our clients. We also follow the Guiding Principles for Evaluators published by the American Evaluation Association. These principles ensure that evaluators conduct systematic, honest, and competent evaluations while respecting participants and contributing to general and public welfare. You can read more about the AEA Guiding Principles for Evaluators here: http://www.eval.org/p/cm/ld/fid=51
Additionally, we also follow the Program Evaluation Standards by the Joint Committee on Standards for Educational Evaluation. These standards include: 1) utility, 2) feasibility, 3) propriety, and 4) accuracy. You can read more about the Program Evaluation Standards here: http://www.jcsee.org/program-evaluation-standards-statements

I’ve heard that I should follow the CDC framework for evaluation. What does that mean?

The Centers for Disease Control and Prevention (CDC) recommends a framework for practical, effective program evaluation. While the CDC works with health-related program evaluations, the framework is used with many different types of program evaluations, including evaluations of education programs. The framework incorporates six key steps: 1) engaging and working with program stakeholders, 2) describing the program, 3) focusing the evaluation design, 4) collecting credible evidence, 5) justifying the conclusions, and 6) ensuring the use of the findings. The framework also emphasizes the Program Evaluation Standards listed above. Thorough, high-quality program evaluations will take you through all of these essential steps.

How do I know what type of evaluation I need?

If you are unsure of the type of evaluation you need, Cobblestone Applied Research and Evaluation, Inc. will work with you to answer that question. Different clients have different needs and no two evaluations are exactly the same. Commonly, evaluation needs fit into the following categories: needs assessment (assessing what needs exist), a process or implementation evaluation (assessing how a program is being implemented), an outcome evaluation (assessing program outcomes), or an impact evaluation (assessing the impact of the program, which includes detecting, interpreting, and analyzing program effects). Cobblestone Applied Research and Evaluation, Inc. can also help you to express your program theory.

Why should I evaluate my program?

Evaluating your program can allow you to objectively measure how well the program is performing. The process can help you to set timelines and goals for your program activities, develop instruments, keep program stakeholders informed of program activities and accomplishments, and seek future funding.

Why do I need an evaluator?

Evaluators are trained and experienced in designing, managing, and executing evaluation studies in a timely and cost-effective manner. Additionally, hiring an evaluator who is external to your organization can increase the objectivity of the evaluation findings, lending the findings more credibility.

What should I look for in an evaluator?

You want an evaluator who is experienced in conducting evaluations and has a good track record. Look at their previous work and ask for references. Check to see whether the evaluator has formal training in evaluation. Is the evaluator a member of relevant professional associations like the American Evaluation Association? Can the evaluator articulate their model or philosophy for conducting evaluation? Does their approach to evaluation fit with your evaluation needs? Finally, do you get along with the evaluator? It is important to work with an evaluator who has technical expertise as well as “soft skills” that facilitate a strong, positive working relationship. Having a positive working relationship with your evaluator makes for a more enjoyable and productive experience.

What kinds of programs can be evaluated?

There are no restrictions on the type of programs that can be evaluated.

I have heard that I need a content expert in my field to accurately evaluate my program.  Is this true?

Having a content expert involved in the evaluation is useful, but not necessary. Commonly, the evaluator provides the evaluation expertise while the client provides the expertise about the program and its content. The Cobblestone Applied Research and Evaluation, Inc. team conducts evaluations to the best of our abilities, utilizing the content expertise of our team. In a situation where we feel unable to complete evaluation tasks based on our training, we seek outside consultants to advise our evaluation processes. For example, in a recent evaluation of a chemistry curriculum we hired a chemistry professor to craft the content assessment, ensuring that mastery of the exam would accurately reflect a deep understanding of chemistry.

About The Evaluation Process

What should I do to get ready for an evaluation / prepare for the first meeting with my evaluator?

The first conversation that you have with your evaluator will likely focus on your program’s purpose, goals, participants, and activities. You should expect to talk about how you believe specific program activities might be related to program goals. You should also expect to have an initial conversation about the potential key evaluation questions that will focus the evaluation.

Before the evaluation begins, there are a few things that you should agree upon with your evaluator. First, think about the specific services that you would like to receive as part of the evaluation. For example, will the evaluator help you to complete federal reports? Will the evaluator collect and analyze data or will your program staff be expected to assist with collecting data? After you’ve discussed the range of evaluator responsibilities, you will establish an evaluation plan, scope of work, and schedule of invoices. It is important to note that evaluations of university-based programs must be approved by the university’s Institutional Review Board (IRB), which will ensure the protection of human subjects who participate in the program evaluation.

If you anticipate working with institutional data (e.g., student grades, retention or graduation rates), talk to your evaluator early on in the evaluation about how to secure and access that data.

What should I expect from my working relationship with my evaluator?

Once the evaluation has started, it can serve an important role in keeping the program on track. It is important to establish regular communication appointments with your evaluator early in the evaluation process. These regular meetings (e.g., monthly phone calls) will help to keep the evaluator updated on program activities while making sure that you are well aware of current evaluation activities. You can expect to work with your evaluator to establish data collection and reporting routines to ensure that program staff and administrators stay on track. These routines will also help to make sure that program activity timelines are met and program goals are regularly reviewed.

What type of design should we use to evaluate our program?

There are many different approaches and different designs that can be used to evaluate a program. This wide range of options includes a variety of experimental, quasi-experimental, and nonexperimental designs. The design used to evaluate your program, however, should be determined by the types of evaluation questions you are trying to answer. An evaluator can help you to pick the most appropriate design and measures to best answer your key evaluation questions.

What type of data should we collect?

The type of data that you collect will depend on the evaluation design and key evaluation questions. Some evaluations include quantitative data (e.g., closed-ended survey responses, test scores, or grades), some include qualitative data (e.g., interviews and focus groups), while other evaluations take a mixed methods approach and collect both types of data sources. Thorough evaluations use multiple sources of data to “triangulate” data as a way of establishing reliability in evaluation findings. Evaluators will typically work with you, program staff, and content experts when developing data collection instruments and protocols.

What should I expect to gain from the process of evaluating my programs?

You should expect to receive objective, data-based information about how to improve your program throughout the evaluation process. By the end of the evaluation process, you should expect to have data-based information that can help you to determine how successful your program has been. You should also expect to work with your evaluator to communicate important findings to your program stakeholders.

What does cultural competency mean and how would it be incorporated into our evaluation?

All programs are embedded in multiple levels of culture: program values, beliefs, languages, and shared experiences. As described by the American Evaluation Association, “a culturally competent evaluator is prepared to engage with diverse segments of communities to include cultural and contextual dimensions important to the evaluation. Culturally competent evaluators respect the cultures represented in the evaluation.” Evaluations that value cultural competency will respect the cultures surrounding your program.

How involved do I need to be in the evaluation?

Level of involvement varies according to the evaluation approach used and your preference. At the start of the evaluation, you and your evaluator will agree upon the level of involvement that seems appropriate for that particular evaluation. Involvement can range from very little to a lot, depending on time and interest.

When is the best time to start an evaluation?

We prefer to start an evaluation at the initial planning stages of program development so that we can work with you to create an evaluation plan that can be used for program improvement and assess changes over time. It is rarely too late, however, to conduct an evaluation. The advantage of starting an evaluation in the early stages of a program is the ability to construct program goals that are feasible and measurable and to initiate an ongoing system for tracking program outcomes. An evaluator can work with you to create a program theory that illustrates how specific program activities are linked to specific, measurable program goals.

What if the evaluation finds negative results?

This is a common fear associated with having your program evaluated. Rest assured that evaluations provide information about what your program does well, as well as information about where your program could improve. In all of our years of conducting evaluations, we have never come across a program that has no redeeming qualities. Most evaluations are geared toward improving your program and helping your program accomplish its goals. Evaluations can sometimes feel like they are being done to you, but Cobblestone Applied Research and Evaluation, Inc. will include you in the evaluation process so that there are no surprises at the conclusion of the evaluation.

Do you share the findings of my program with other people?

The results of evaluations are only shared with the consent of the client. Prior to starting the evaluation, we draft a dissemination plan with you that outlines how the results of the evaluation will be shared. This dissemination plan includes the types of reports included in the evaluation (e.g., annual reports for funding agencies or internal reports), the frequency of reports, process of approving reports, and other reporting activities desired. Commonly, we deliver an evaluation report and executive summary to the client. We can also share findings and make presentations to stakeholder groups as needed. Sometimes clients want to pursue writing publishable articles or presenting at professional conferences, both of which we have experience doing with our clients. Additional ways to publicize evaluation findings include press releases, memos, and posting on websites.