At the conclusion of an evaluation, we generally present our findings to key stakeholders. This usually takes the form of a presentation, with the aid of PowerPoint or Prezi, that summarizes findings at the level of an executive summary. In the presentation materials, we generally include: the purpose and process of the evaluation; evaluation methods and sources of data; key evaluation questions and the answers to them; highlights of findings with key charts, quotes, or other data; and a review of recommendations and/or next steps. We leave time for questions and answers either during or at the end of the presentation. To us, the contents of the presentation and how we communicate them to clients seem obvious: describe what we did, answer the questions, discuss anything significant, and so on. However, we want to make sure that our expectations of what to present and discuss match our audience's expectations. So, the question is: how do we know that they match?
Recently, we had an opportunity to present findings to one of our clients at two separate meetings. The first presentation was to a Steering Committee of primary stakeholders, including board members, the Program Director, and other individuals responsible for implementing the program. The second presentation was to a group of program participants. We often don't have the opportunity to present findings to program participants, so it was a chance for us to learn whether our expectations of what they wanted to hear about the evaluation matched theirs. To do this, we created a quick, very simple feedback form for the second presentation that took 1-5 minutes to complete. The form had three main questions, rated on a 4-point scale from Strongly Agree to Strongly Disagree, covering three primary areas: Accuracy (i.e., did the evaluation reflect their own experience as participants?); Fairness (i.e., did the evaluation adequately consider all data?); and Usefulness (i.e., did the presentation provide them with useful information?). We also added a general open-ended question to solicit additional comments about the presentation. The forms were printed on half-sheets and distributed during the presentation to collect instant feedback.
In analyzing the responses, we found that participants generally agreed the presentation provided them with useful information that was both accurate and fair. However, this was not always the case, so the open-ended comments were particularly interesting, showing us where we could have improved the type or level of information we provided to better meet participants' needs. Reflecting on this process, I am grateful that we had the opportunity to collect some quick feedback. In the future, I would definitely want to incorporate this additional step when presenting to larger groups (e.g., more than 50 people), where the in-depth discussion of evaluation results that is possible in smaller groups, such as Advisory Boards, is impractical. I believe the additional effort was worthwhile for three key reasons: 1) soliciting feedback on the presentation sends the implicit message that you value participants' input and reinforces their investment in participating; 2) collecting this information in a quick and easy format reduces the burden on them, also reinforcing the value of their time; and 3) reviewing their feedback provides valuable information about how to improve future presentations that we could not otherwise collect as inexpensively or as easily. For these reasons, evaluation teams should consider incorporating a feedback process as simple as this one into their presentations to facilitate their own learning and hence improve their practice.