University of Wisconsin–Madison

Evaluation 2017: From Learning to Action, Part 2 of 2

By Sarah Carroll

Part 2: Maximizing Usefulness in Evaluation

At the American Evaluation Association Conference in Washington, D.C. in early November 2017, roughly 4,300 other attendees and I had the mixed fortune of choosing among 800 concurrent sessions!

Two of the sessions I attended were particularly relevant to our work in HR Competencies program development, and both were offered outside the 800 concurrent sessions and required preregistration. It was a relief to know I had at least chosen those well.

“Utilization-Focused Evaluation” was a half-day session with Michael Quinn Patton, an independent organizational development and program evaluation consultant, and author of Utilization-Focused Evaluation and Essentials of Utilization-Focused Evaluation.

Recall last month’s review of “Interactive Evaluation Practice” with Jean A. King. Her evaluation practice highlights the importance of how we involve people in evaluation.

Michael Quinn Patton’s emphasis, by contrast, is on how we use evaluation. “Evaluation is the process of knowing whether and to what degree you’re doing good,” he said. “We are phenomenal reality distorters. We have selective perception. For example, how do you know if you have bad breath? You have to have someone tell you, and yet it’s taboo to discuss!”

Utilization-Focused Evaluation came out of research in the mid-1970s, when evaluators began asking what made evaluations useful. To enhance usefulness in evaluation, Patton offered the following:

  1. Assess and build program and organizational readiness for Utilization-Focused Evaluation. “You don’t just jump into evaluation. You have to get people into an evaluation frame of mind!” He compared this to farming: “If you throw seeds without first tilling the soil, you waste your seeds!”
  2. Determine ‘usefulness’ at the beginning, not the end, of a project. He elaborated: “This is counterintuitive, but if you don’t know what you’re going to do with the results before you do an evaluation, you won’t know what to do when you get the results.”
    1. Recall the collapse of the bridge between St. Paul and Minneapolis, which killed thirteen people. An evaluator found cracks during inspection nine months before the collapse. This is an example of data/evaluation that wasn’t used!
    2. Recall Romeo Dallaire’s warning about Rwanda. This, too, went unheeded, and 800,000 people were killed in the genocide.
  3. Determine what affects decision-making. What data do you have? Create an environment of sharing. Begin to think about what we all know. Go slow to go fast. Think about relevant metaphors. If working in Minnesota, the land of 10,000 lakes, make it about fishing.
  4. All major decisions are evaluations. Evaluators make this process systematic and credible.
  5. Start with a situational analysis: readiness to commit to undertaking an evaluation. The “activity menu” should include:
    1. A baseline assessment of the organization’s evaluation use (this surfaces the baggage people bring from past experiences).
    2. Participants’ associations with the word ‘evaluation.’
    3. A positive vision for evaluation.
    4. Incentives for and barriers to evaluation.
    5. Getting explicit: are you ready to examine the extent to which you’re doing good? This involves and requires relationships.

There are five critical evaluation standards:

  1. Utility—must be relevant, useful, and used! This standard falls on the evaluator. He cited Atul Gawande, author of Complications, who posited that it’s never just one thing but a constellation of things that leads to errors. In the Minneapolis-St. Paul bridge collapse scenario, Patton said that if he had found the cracks, it would have been his ethical responsibility to follow up. “Your job isn’t done when you turn in your report. You have a responsibility to know how the report is used and to ensure feasibility.”
  2. Feasibility—must be realistic, prudent, diplomatic, and frugal. Reframe the concept of a ‘report’: your goal is not the report but the use of findings. There are all kinds of ways to communicate findings, including presentations, workshops, and conversations. Reports are expensive to produce. Think process. Start with a situational assessment before proposing any design.
  3. Propriety—make sure it is ethical, legal, and respectful.
  4. Accuracy—ensure that it is technically adequate. Patton added that the order of these standards is important. “Usefulness is always first,” he said. “This is a common conflict! Remember, you have to till the soil.”
  5. Accountability—a meta-evaluation. This is an evaluation of the evaluation.

Utilization-Focused Evaluation is a decision-making framework for enhancing the utility and actual use of evaluations. “It is different from dissemination, different from the report, and doesn’t happen naturally!” Patton exclaimed. “You have to set expectations at the beginning.”

Patton elaborated that it is helpful to utilize a “success case” method: find an agency or organization doing great work, and get data on what they do. Ask them whom they believe they’ve helped—and interview those folks.

Utilization-Focused Evaluation is evaluation that becomes the engine of the entire program. The evaluation itself begins the intervention process. It kicks off the thinking about excellence. This is what we call “process use” versus “findings use.” It reflects the impact of engagement in the evaluation. When you identify indicators, you change thinking about how things get done.

Patton suggested that the evaluation process should drive an organization’s reflection process and evaluation culture. In HR at UW, for example, we ask, “What makes a good HR business partner?” Utilization-Focused Evaluation is organizational development work! “You have to keep asking: What does it take to make it useful?” Patton said, repeatedly.

If you are interested in learning more, please feel free to read this summary of Utilization-Focused Evaluation or reach out to Sarah Carroll.
