CreatingAnEvaluation

//Source: unknown//

//Success is remaining open to continuing feedback and adjusting accordingly. Evaluation gives you the continuing feedback you need for success.//

 * **Program evaluation can:**

1. Understand, verify or increase the impact of products or services on customers or clients.
2. Improve delivery mechanisms to be more efficient and less costly. Evaluations can identify program strengths and weaknesses to improve the program.
3. Verify that you're doing what you think you're doing.
4. Help management think through what their program is all about, including its goals, how it meets its goals and how it will know whether it has met them.
5. Produce data or verify results that can be used for public relations and promoting services in the community.
6. Produce valid comparisons between programs to decide which should be retained, e.g., in the face of pending budget cuts.
7. Fully examine and describe effective programs for duplication elsewhere.

It often helps to think of your programs in terms of inputs, process, outputs and outcomes.
 * Inputs are the various resources needed to run the program, e.g., money, facilities, customers, clients, program staff, etc.
 * The process is how the program is carried out, e.g., customers are served, clients are counseled, children are cared for, art is created, association members are supported, etc.
 * The outputs are the units of service, e.g., number of customers served, number of clients counseled, children cared for, artistic pieces produced, or members in the association.
 * Outcomes are the impacts on the customers or clients receiving services, e.g., increased mental health, safe and secure development, richer artistic appreciation and perspectives in life, increased effectiveness among members, etc.

Do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on customers, etc.? You may want other information or a combination of these. Ultimately, it's up to you.

But the more focused you are about what you want the evaluation to examine, the more efficient your evaluation can be, the shorter the time it will take and, ultimately, the less it will cost (whether in your own time, the time of your employees and/or the time of a consultant).

There are trade-offs, too, in the breadth and depth of the information you get: the more breadth you want, usually the less depth you get.


 * **Key Considerations:**


 * **Consider the following key questions when designing a program evaluation:**

1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide as a result of the evaluation?
2. Who are the audiences for the information from the evaluation, e.g., customers, bankers, funders, board, management, staff, clients, etc.?
3. What kinds of information are needed to make the decisions you need to make and/or enlighten your intended audiences, e.g., information to really understand the process of the product or program (its inputs, activities and outputs), the customers or clients who experience the product or program, strengths and weaknesses of the product or program, benefits to customers or clients (outcomes), how the product or program failed and why, etc.?
4. From what sources should the information be collected, e.g., employees, customers, clients, groups of customers or clients and employees together, program documentation, etc.?
5. How can that information be collected in a reasonable fashion, e.g., questionnaires, interviews, examining documentation, observing customers or employees, conducting focus groups among customers or employees, etc.?
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?

What do I want my evaluation to tell me?


 * **Think of the evaluation as a set of questions and the methods for answering them.**

You will notice that these steps **do not begin with “What data do you want to collect?”** We urge clients to set that discussion aside until well into the design discussion – Step 4. Thinking about data collection first can limit you in several ways: by calling to mind data you have been asked to report in the past rather than information that will answer current questions, by leading you to collect what is readily available rather than what is important, or by locking you into preconceptions of what constitutes “data.”

A final consideration is **who should be involved in developing this evaluation design**. The short answer is representatives of all the major stakeholder groups. Program and organizational leaders, especially, must be involved from the beginning of the design and must understand what specific activities and constraints the evaluation will entail. If they are not involved you risk not having their support for implementation of evaluation activities or their not finding your questions important or your evidence persuasive.

1. **Determine the parameters of what I want to evaluate** – A common mistake is thinking too small. What is the larger effort about which I want to answer questions?

2. **Identify the important questions** – What do I want to know? What do other important audiences want to know? Get input from important audiences into the questions you will ask. At the same time, discuss expectations and obligations for using the answers.

//Articulate the logic behind the research project.//


 * **Think about four design areas when identifying questions:**


 * 1) **The program model you are developing** - Do our program activities and relationships reflect the values and practices we are promoting in the content of the program? How will practices spread, or “scale up,” beyond our initial client sites? Does our model support these later stages, or are we primarily an “early adopter” program?
 * 2) **Program implementation** - Does the audience we anticipated for the program participate in it? If not, why not? Is our work well coordinated with the work of other initiatives? What kind of readiness do we need in order to implement? Is it realistic to expect this kind of readiness and support?
 * 3) **Short-term program outcomes** - Do participants leave our program with the changes in knowledge, skills or attitudes we hope for? Who does and who doesn’t? Do participants’ on-the-job practices change after they participate in this program? What do the changes look like? Do organizations change their written policies as a result of using our program’s data in decision-making?
 * 4) **Long-term impact** - Are the knowledge, skills or attitudes of students changed by the changes in practice of participating teachers? Whose are and whose aren’t? How are subsequent career choices of participants affected by their involvement in this program? Do changes in relationships among the organizations collaborating on this project extend to other areas of interaction?

3. **Develop concrete examples of what success would look like** – This is not data collection; this is what one might see, hear, touch, etc. Create both positive (when the answer is yes) and negative (when the answer is no) indicators.

At this stage you may realize that different stakeholders have different visions of what the program is trying to achieve.

4. **Identify what information you and others would accept as answers to your questions and evidence of whether or not success has been achieved.** - Do not limit yourself to thinking of data as only numbers. Data can include information obtained in interviews, structured observations, or work group documentation. Data are whatever people both inside and outside the program would consider evidence of the presence or absence of your indicators. Look for unobtrusive and existing evidence, things that can be collected or observed without extraordinary procedures. You will likely want to collect some evidence that is not going to occur naturally in the course of carrying out your program. Look for procedures for collecting this evidence that may simultaneously improve the program.

5. **Determine the most appropriate methods and staffing for collecting, analyzing and considering this evidence.**

(Original page by Mary Frangie)