University of Wisconsin–Madison

Evaluation 2017: From Learning to Action, Part 1 of 2

By Sarah Carroll

Part 1: Maximizing Usefulness in Evaluation

At the American Evaluation Association Conference in Washington, D.C. in early November 2017, roughly 4,300 other attendees and I had the mixed fortune of being able to choose among 800 concurrent sessions and a handful of half- and full-day workshops!

Two of the workshops that I attended were particularly relevant to our work in HR Competencies program development.

Michael Quinn Patton, an independent organizational development and program evaluation consultant, and author of Utilization-Focused Evaluation and Essentials of Utilization-Focused Evaluation, facilitated a half-day workshop on “Utilization-Focused Evaluation.”

Patton emphasized the importance of how we use evaluation. “Evaluation is the process of knowing whether and to what degree you’re doing good [work],” he said. “We are phenomenal reality distorters. We have selective perception. For example, how do you know if you have bad breath? You have to have someone tell you, and yet it’s taboo to discuss!”

Utilization-Focused Evaluation came out of research in the mid-1970s, when evaluators began asking what made evaluations useful. To enhance usefulness in evaluation, Patton offered the following:

  1. Assess and build organizational readiness for evaluating programs. “You don’t just jump into evaluation. You have to get people in an evaluation frame of mind!” He compared this to farming: “If you throw seeds without first tilling the soil, you waste your seeds!”
  2. Determine ‘usefulness’ at the beginning, not the end, of a project. He elaborated: “This is counterintuitive, but if you don’t know what you’re going to do with the results before you do an evaluation, you won’t know what to do when you get the results.”
    1. Recall the collapse of the bridge between St. Paul and Minneapolis, which killed thirteen people. An evaluator found cracks during inspection nine months before the collapse. This is an example of data/evaluation that wasn’t used!
    2. Recall Roméo Dallaire’s warning about Rwanda. It also went unheeded, and roughly 800,000 people were killed in the genocide.
  3. Determine what affects decision-making. What data do you have? From whom? Create an environment of sharing. Begin to think about what everyone involved knows. Go slow to go fast. Think about relevant metaphors. If working in Minnesota, the land of 10,000 lakes, make it about fishing.
  4. Make the evaluation process systematic and credible. All major decisions are evaluations; what distinguishes evaluators is that they make the process of evaluating systematic and credible.
  5. Start with situational analysis regarding the organization’s readiness to commit to undertaking evaluation. The “activity menu” should include:
    1. Conduct a baseline assessment of the organization’s current evaluation use (note: this surfaces baggage that people bring from past experiences)
    2. Learn what your participants associate with the word ‘evaluation’
    3. Create a positive vision
    4. Consider incentives for and barriers to evaluation.
    5. Get explicit: are you ready to examine the extent to which you’re doing good? This involves and requires relationships.

Patton noted four critical evaluation standards:

  1. Utility—Evaluation must be relevant, useful, and used! The standard is on the evaluator. He cited Atul Gawande, author of Complications, who posited that it’s never just one thing but a constellation of things that leads to errors. In the Minneapolis-St. Paul bridge collapse scenario, Patton said that if he had found the cracks, it would have been his ethical responsibility to follow up. “Your job isn’t done when you turn in your report. You have a responsibility to know how the report is used and to ensure feasibility.”
  2. Feasibility—Evaluation must be realistic, prudent, diplomatic, and frugal. Reframe the concept of a ‘report’: your goal is not the report, but the use of findings. There are all kinds of ways to communicate findings, including presentations, workshops, and conversations, and reports are expensive to produce. Think process. Start with situational assessment before proposing any design.
  3. Propriety—Ensure the evaluation is ethical, legal, and respectful.
  4. Accuracy—Ensure that the evaluation is technically adequate.

Patton added that the order of these standards is important. “Usefulness is always first,” he said. “This is a common conflict! Remember, you have to till the soil.” He added a fifth standard, accountability—a meta-evaluation, or the evaluation of the evaluation.

Utilization-Focused Evaluation is a decision-making framework for enhancing the utility and actual use of evaluations. “It is different from dissemination, different from the report, and doesn’t happen naturally!” Patton exclaimed. “You have to set expectations at the beginning.”

Patton elaborated that it is helpful to use a “success case” method: find an organization doing great work, and get data on what they do. Ask them whom they believe they’ve helped—and interview those folks.

Utilization-Focused Evaluation is evaluation that becomes the engine of the entire program. The evaluation itself begins the intervention process and kicks off the thinking about excellence. This is what we call “process use” versus “findings use.” Utilization-Focused Evaluation captures the impact of engaging in the evaluation itself: when you identify indicators, you change thinking about how things get done.

Patton suggested that the evaluation process should drive an organization’s reflection process and evaluation culture. In HR at UW, for example, we ask, “What makes a good HR business partner?”

Utilization-Focused Evaluation is organizational development work! “You have to keep asking: what does it take to make it useful?” Patton said repeatedly.

If you are interested in learning more, please feel free to read this summary of Utilization-Focused Evaluation or reach out to Sarah Carroll.
