Carnegie Hacks

Decreasing your Carnegie learning curve
so you can focus on what really matters


Carnegie Hacks are insights and tools to save you time and energy, so you can spend them advancing CE on your campus instead. We post them every week, right here.

#35. Telling OUTCOMES apart from IMPACTS

Updated: Sep 7, 2018

In July and August, we are discussing the 4 Carnegie foundational indicator questions about impact. To date, we have explored the additions to the 2020 application items, which now request data about outcomes alongside the long-standing call for impact data.


Last week, we tackled the Carnegie advice about outcomes; this week, we turn to the advice they provided about impacts.


While the official Carnegie advice about outcomes we discussed last week stood well on its own, the advice about impacts refers back to outcomes so often that it is easiest to follow when set directly against the outcomes advice. So, instead of giving you another table containing the impacts advice alone, we've provided it in a format that facilitates side-by-side comparison.




Reflections on changes to the impact questions:


After many cycles of asking for impact data, why add a second type of data (outcomes) to this question? Does Carnegie suddenly care just as much about outcomes as it does about impacts?


Perhaps.


Perhaps, after years of great impact data coming in, the years of wondering what outcomes created those incredible impacts finally got the better of them; they couldn't stand the suspense any longer, so they had to ask.


But there's another pervasive issue that could have catalyzed this change: imprecision.

While many previous applicants fully embraced the impacts question, others tried to use whatever data they had access to and just claimed it was impact. This put reviewers in the position of speculating whether applicants a) simply hadn't measured impact and were still trying to get credit for it, or b) had measured impact but misunderstood the question and elected to share the wrong kind of data. The new request to hear about both outcomes and impacts will help reviewers eliminate this guesswork and more easily identify the applicants falling into that first category.


A third possibility for the reformulation of this set of questions is this: perhaps an unintended consequence of these well-known and widely feared questions has been an increasing emphasis on measuring impacts, with a corresponding de-emphasis on measuring outcomes. While impact measurement tells you whether anything changed in the big picture, outcomes measurement tells you why that big-picture change did or didn't happen. Outcomes measurement informs internal program decision-making and improvement, while impact measurements leave us scratching our heads if we don't have outcomes measurements to provide context, insights, and explanations.


Put another way, outcomes assessment tells us how well our entire endeavor works; impacts assessment tells us what our entire endeavor leaves behind. They go hand in hand. While both are important, outcomes measurements are far more practical: they are more useful to applicants and probably more insightful to reviewers as well. In fact, outcomes data may be what reviewers have wanted since the beginning. The change to this question may simply represent a new tactic for soliciting the information they have always needed but never quite been able to put their fingers on.


Join us next month as we dive deeper into the constituent-level expectations for the 4 impact questions. First up, Student Impact (and Outcomes)! See you here in August.


Heather Mack Consulting, LLC

carnegie2020@hmackconsulting.com

Getting Carnegie Classified™️ and Heather Mack Consulting, LLC operate independently from and are not affiliated with the Brown University Swearer Center for Public Service or the Carnegie Classification of Institutions of Higher Education.