Summer is a Time For… Professional Development!

All year long we look forward to summer: the longer days, the warmer evenings, the more casual clothes. Summer is also a time when many of our clients have a break from their normal class schedules and take vacations.

For our clients, summer is a key time to offer professional development (PD) opportunities to many of the participants in their programs (e.g., high school teachers, college students) and allow them to focus on acquiring new skills that they can take back to the classroom in the fall. These can include training on a new pedagogy, research project, or curriculum. We appreciate the opportunity to attend these PD sessions, as we always learn so much about the field of education and the challenges and opportunities that teachers and their students encounter each day.

During the summer, our evaluation work focuses on observing PD sessions, conducting focus groups, administering surveys, and developing new instruments for the upcoming year. Summer is a time for us to put many of our skills into practice. However, as the field of evaluation and applied research continues to grow, we must grow with it, expanding our own toolkit through both formal and informal PD.

While we don’t always have time for formal courses of study like the programs we evaluate, our informal professional development includes reading books, articles, and other online resources, as well as training each other on specific skills. For example, this summer I am re-reading John Hattie’s books Visible Learning and Visible Learning for Teachers; both are excellent resources for understanding which factors relate to student achievement. Our staff are also training each other on analysis techniques and software programs that will help us develop reports.

For more formal PD, I encourage our interns and staff to take courses or workshops when they are available. For example, Claremont Graduate University (CGU) offers excellent PD Workshops in Evaluation from August 25-30. In addition to the Introduction to Educational Evaluation workshop that I am co-presenting with Tiffany Berry, the other workshops cover a variety of relevant topics for both professional evaluators and graduate students. Not only do these workshops provide technical training, but they are also a good time to network with other professionals and catch up with old friends.

There will be other opportunities for PD at conferences and other venues after summer has ended. But summer is a special time: the slower pace means fewer meetings and more time for us to write reports as well as plan and prepare for the next school year. What kind of professional development are you planning to do this summer?

Sense of Belonging: Its Importance in Student Success and Relevance to Evaluation Practice

Although underrepresented minority (URM) student enrollment in higher education institutions has increased over the past decade, significant disparities remain in the retention and graduation rates of these students in Science, Technology, Engineering, and Mathematics (STEM) fields (e.g., Higher Education Research Institute [HERI], 2010). However, a growing body of research in the social sciences and education provides insight into the factors that enhance URM students’ academic success. Specifically, research from a number of disciplines points to the crucial role of identity and identity-related constructs (e.g., sense of belonging) in students’ academic persistence (e.g., Osborne & Jones, 2011). Although some evaluators may believe their role is solely to measure program success according to program theory, we believe it is also our responsibility to use relevant research and our own disciplinary expertise to help programs best address their goals and make adjustments when necessary.

A couple of months ago I wrote a blog post about my personal experience as a social psychologist working in evaluation (“The Intersection of Social Psychology and Evaluation”). At the end of the post, I encouraged evaluators to find ways in which their background and expertise could be used to influence and improve their practice. I have attempted to follow my own advice. The following describes an example of how my research background has influenced the way I view my work and how I approach evaluation practice.

As someone who studies group and intergroup processes and identity, I am constantly observing examples of how identity and identity-related constructs come into play in our evaluation work. It is my observation that many STEM higher education programs provide URM students with activities (e.g., STEM field internships) that attempt to create a sense of belonging or increase students’ level of identification with STEM (i.e., the extent to which the student defines the self through a role or performance in STEM). Additionally, some processes that occur during program implementation (e.g., cohort tracking) also seem to increase students’ sense of belonging. When programs foster a sense of belonging (and increase identification with STEM), participants tend to show high levels of engagement and commitment to pursuing a STEM degree. Indeed, Chemers, Zurbriggen, Syed, Goza, and Bearman (2011) found that identity as a scientist (and self-efficacy) mediated the relationship between science support activities (i.e., research experience, instrumental mentoring) and commitment to a career in science.

I believe that evaluators can use research on identity and identity-related constructs to help clients understand how and why STEM program activities (e.g., STEM field internships) lead to intended outcomes (e.g., retention, persistence, and graduation). Specifically, evaluators can assess whether program activities and processes that occur during program implementation create, increase, or decrease students’ identification with STEM and sense of belonging. For example, evaluators can assess students’ initial levels of belongingness and identification using established scales at the start of a program (or as a pretest before specific program activities) and again upon completion of the program. Clients can use this information to develop, modify, and improve program activities and, ultimately, the success of their program.
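
As a rough illustration of this pre/post approach, the sketch below compares belonging scores collected before and after a program using a paired t-test. The file name, column names, and scoring are hypothetical placeholders rather than part of any particular instrument; they would be replaced by scores from whichever established scale is used.

```python
# Minimal sketch of a pre/post comparison of sense-of-belonging scores.
# "belonging_survey.csv" and its column names are hypothetical; swap in
# scale scores from your own established instrument.
import pandas as pd
from scipy import stats

df = pd.read_csv("belonging_survey.csv")  # one row per student: pre and post scores

# Mean change on the belonging scale from pre to post
change = df["belonging_post"] - df["belonging_pre"]
print(f"Mean change: {change.mean():.2f} (SD = {change.std(ddof=1):.2f})")

# Paired t-test on matched pre/post responses from the same students
t_stat, p_value = stats.ttest_rel(df["belonging_post"], df["belonging_pre"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```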

References

Chemers, M. M., Zurbriggen, E. L., Syed, M., Goza, B. K., & Bearman, S. (2011). The role of efficacy and identity in science career commitment among underrepresented minority students. Journal of Social Issues, 67, 469-491.

Higher Education Research Institute (2010). Degrees of success: Bachelor’s degree completion rates among initial STEM majors. Retrieved June 20, 2013, from http://www.heri.ucla.edu/publications-main.php

Osborne, J. W., & Jones, B. D. (2011). Identification with academics and motivation to achieve in school: How the structure of the self influences academic outcomes. Educational Psychology Review, 23, 131-158.

Teacher Fidelity to Program Implementation

At Cobblestone, we are wrapping up the first year of a K-12 curriculum study. Over the years, we’ve learned that an important part of conducting a curriculum efficacy study is measuring how participating teachers implement the curriculum or treatment program. This allows us to better determine whether any differences between treatment and control groups are a result of the curriculum. We hope to encourage other evaluators to measure implementation fidelity by providing information about uses for implementation data, reasons for reduced implementation fidelity, and tips for encouraging and measuring implementation.

Uses of Implementation Data

One of the main reasons to collect implementation data is to provide more context for outcome results. For example, implementation context may help explain a finding of no differences between treatment and control groups in student achievement (a brief sketch of this kind of analysis follows the list below). Other uses for implementation data include the opportunity to determine:

  • If the program could reasonably be implemented in the classroom as designed
  • If fidelity to all of the program components was necessary to see results
  • If the program is appropriate for all classrooms/teachers/students
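
As a rough sketch of how fidelity data can contextualize outcome results, the example below splits treatment classrooms into lower- and higher-fidelity groups and compares their average achievement gains. The file name, column names, and the 0.75 cut point are hypothetical assumptions used only for illustration.

```python
# Minimal sketch: using implementation fidelity as context for outcomes.
# "classroom_outcomes.csv", its columns, and the 0.75 cut point are
# hypothetical; adapt them to your own study data.
import pandas as pd

df = pd.read_csv("classroom_outcomes.csv")  # one row per treatment classroom

# Group classrooms by how faithfully the program was implemented
df["fidelity_group"] = pd.cut(
    df["fidelity_index"], bins=[0.0, 0.75, 1.0], labels=["lower", "higher"]
)

# Compare average achievement gains across fidelity groups
summary = df.groupby("fidelity_group", observed=True)["achievement_gain"].agg(
    ["count", "mean", "std"]
)
print(summary)
```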

Reasons for Reduced Implementation Fidelity

Reasons for reduced fidelity among our participating teachers generally fall into one of the three categories below. While we are confident that other reasons exist for a lack of implementation, these three categories explain the majority of the situations we have encountered in K-12 curriculum studies.

Implementation Requirements Too Demanding

We have found that a primary reason for reduced implementation fidelity is that teachers do not have enough time to implement the program as prescribed by the publisher or program authors. Implementation guidelines are often considered too demanding in light of competing interests or activities within the school, district, and/or state. Teachers must balance their existing obligations and meet the needs of their students while trying to incorporate a new curriculum. This can be a difficult challenge, especially during the first year of using a new program.

Belief that the Program is Not Appropriate

Teachers may choose to modify, supplement, suspend, rearrange, and/or omit portions of the program to meet the perceived needs of their students. We generally find that teachers feel a program is not appropriate when they believe their students lack certain skills, need additional practice with concepts, or have little interest in the program content.

Inability to Implement

Lastly, we find that implementation fidelity decreases when teachers have a perceived or actual inability to implement a new program. A perceived inability usually occurs when a teacher feels unprepared to implement the program and subsequently reverts to activities or lessons with which they feel more comfortable. Less frequently, teachers may lack the content knowledge or pedagogical skill to implement a treatment program without additional training that is beyond the scope of the study.

Tips for Managing Implementation Issues

Before the Study

If possible, the research group/evaluation team should be involved when the program developers are establishing their guidelines for implementing the program. Developing the implementation guidelines is also the time to address the biggest problem in implementation fidelity: teachers running out of time. We have found that teachers are more likely to implement programs with fidelity when the required components are kept to the absolute minimum.

Implementation may also be affected by the amount and type of training that is provided to teachers. For example, more training would most likely be needed if the program requires a change in pedagogy (e.g., strong emphasis on inquiry-based instruction). The training should be standardized to ensure that all participants receive the same instruction.

During the Study

During the study, implementation issues are more easily identified when teachers report frequently on their implementation. At Cobblestone, we use weekly or monthly implementation logs as the primary method of tracking implementation. While persuading teachers to complete logs can be a difficult task, we have found that response rates improve when the logs are brief (taking only a few minutes to complete) and only the essential components of implementation are reported.
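
As one way to make these logs immediately useful, the sketch below rolls weekly log entries up into a simple per-teacher fidelity index and flags teachers who may need follow-up. The log file name, column names, and the 0.75 threshold are hypothetical and would need to match the items actually collected in the logs.

```python
# Minimal sketch: rolling weekly implementation logs up into a per-teacher
# fidelity index. "implementation_logs.csv", its columns, and the 0.75
# follow-up threshold are hypothetical placeholders.
import pandas as pd

logs = pd.read_csv("implementation_logs.csv")  # one row per teacher per week

# Total lessons completed vs. assigned per teacher across all logged weeks
totals = logs.groupby("teacher_id")[["lessons_completed", "lessons_assigned"]].sum()
totals["fidelity_index"] = totals["lessons_completed"] / totals["lessons_assigned"]

print(totals["fidelity_index"].describe())

# Flag teachers whose overall fidelity falls below the follow-up threshold
needs_follow_up = totals.index[totals["fidelity_index"] < 0.75].tolist()
print("Teachers to follow up with:", needs_follow_up)
```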

These additional insights have proved valuable to our clients, especially in updating or modifying their curricula. We believe that identifying and managing implementation issues allows us to provide our clients with more meaningful and useful results.