Empowering Evaluation: Looking beyond the numbers
By: Elizabeth Hoody
As a former grant-maker, I was glad to see that 14 Points talked about the challenge of evaluating anti-domestic violence work in a post by Payal Hathi last May. In my own experience working at a women’s rights foundation, I saw just how difficult it can be to capture the impact of women’s rights organizing in numerical terms. While numbers can tell an important story (such as how many young women receive sexual and reproductive health education), they often leave out what for me is most compelling about a group’s work. This might be the reflections of an individual young woman who now feels that she can talk to her partner about contraceptives, or the story of a group of girls who decided to form their own anti-trafficking student organization after participating in a prevention workshop. So while there are many valid and pressing questions about how to “get the numbers right” in program evaluation, my bigger concern these days is how to evaluate impact beyond the numbers — and then how to aggregate and share this type of evaluation in a way that donors, policymakers, and peer organizations can easily understand.
In the past year, I have come across several creative examples of evaluation strategies that do attempt to move beyond the numbers. Many of these strategies attempt to articulate, verbally or visually, the systemic impact of an organization’s work. One example is a map that was released by the Global Fund for Women just this past week, which captures the Fund’s impact around the world using bright spots. In the Global Fund’s words, this map “explores where a relationship between Global Fund for Women and grantee groups is more likely to yield a higher movement building impact.” While the map does rely on a series of numerical indicators, the visual analysis tells a bigger story about the collective impact of Global Fund for Women grants on women’s rights movements around the world.
The second example is the Gender At Work Framework, which “helps organizations see their work from new perspectives by combining best practices in organizational development with feminist thought.” One tool that the framework uses is a graph on which civil society organizations can plot the different types of social change they are addressing through their work. The graph places the continuum of “individual vs. systemic” change on the vertical axis and “formal vs. informal” change on the horizontal axis, resulting in four quadrants of change:
- Women’s access to resources (quadrant I)
- Women’s and men’s consciousness (quadrant II)
- Informal cultural norms and exclusionary practices (quadrant III)
- Formal institutions, laws, and policies (quadrant IV)
Women’s organizations use the tool to visually represent the changes they are trying to effect through their work. The graph is also a good indicator of which programs will be easier to evaluate quantitatively (such as programs that work for specific policy changes) and which will be difficult to track through traditional metrics (how do you measure changes in men’s and women’s consciousness?). What I like most about the Gender At Work Framework is that it encourages grassroots organizations to define the evaluation framework themselves at the beginning of their planning processes. When this happens, evaluation shifts from being a chore for donors to being an effective way of tracking and reflecting on an organization’s progress towards its goals. In my personal, unquantifiable opinion, the resulting story more often than not contains richer analysis, more honest reflections, and genuine learning.
For the past year, I have been working on the Advisory Committee of FRIDA – The Young Feminist Fund, a newly formed fund that is run for and by young feminist activists. As FRIDA prepares for its first grant-making round, these questions about evaluation are on my mind. How do we tell our story to our first funders, and how do we empower the first FRIDA grantees to take ownership of the evaluation of their work? Can we be accountable and collaborative in evaluation? As a starting point, we are planning to develop the first grant evaluation process in partnership with FRIDA’s first round of grantees. As we begin to hash out exactly what the process will look like, I am grateful for these new tools that push us towards more creative, dynamic ways of articulating our goals, visions, and impacts.
Cross-Posted from: The Public Policy Blog of the Woodrow Wilson School of Public and International Affairs at Princeton University.