Monitoring democratic governance programmes: Evolution over revolution

Graeme Ramshaw, WFD’s Director of Research and Evaluation, describes how our new approach to M&E finds space for flexibility within the confines of the DFID logframe.

Desire for change

The current fashion in monitoring and evaluation, particularly within the governance sector, is to catalogue the inadequacies of our existing tools and approaches. Consistent across these critiques is the claim that existing methods inhibit flexibility and adaptability and reward simple accountancy over complex change. The necessity of meeting targets creates incentives for programmes to select indicators that are easy to measure rather than those that demonstrate the full extent of a programme’s impact. These proxy indicators give an artificial specificity to processes of change that are far more nuanced than a small handful of tangentially relevant numbers could convey.

And for the most part, these criticisms are spot on. Almost ten years after the 2006 White Paper that put the concept of governance at the forefront of DFID’s and other donors’ agendas, we are still using inadequate measures to assess progress. But the problem with many of these critiques is that they offer no useful alternatives to the current mainstream models. Evidence drawn from numerous programmes is distilled down to rather generic recommendations about context, local ownership, and iterative design. Because they rely on these almost universally accepted concepts, however, the critiques often fail to engage with the fundamental rationales behind the dominance of current monitoring and evaluation approaches, or with why donors are so wedded to them.

Tyranny of the Logical Framework?

Logical frameworks have their roots in the social psychology literature of the 60s and 70s (Nakamura and Russell-Einhorn 2015). And they serve two critical functions for institutional donors, particularly governments: they provide clarity on deliverables and on attribution. The importance of these two concepts in the context of public sector budgeting and spending cannot be overstated.

Indeed, the calls for adopting less restrictive approaches to monitoring and evaluation could not have picked a worse time. The austerity pervading the British government and others around the world has heightened the demand for clarity about what donors are getting for their money and how the value for money (VFM) of one project compares to any other possible use of those funds. Results that are difficult to deliver, and for which both success and attribution are unclear, are at a significant disadvantage.

Likewise, DFID’s ring-fenced budget has brought with it increased scrutiny as domestic cuts to services and benefits begin to affect UK citizens at home. Yet, while its funding increases, its staffing relative to that funding declines, putting fewer and fewer people in charge of greater and greater amounts of programming budget. This means less time to engage in the kind of responsive programming that DFID’s own evidence base suggests is the best approach to achieving sustainable reform. Instead, staff must fall back on tools that enable them to track progress in the most time-efficient way, generally through quantifiable indicators.

Any approach adopted by a programme funded by an institutional donor therefore has to find a way to engage with the logical framework, rather than rebel against it. This means finding innovative ways to insert flexibility into the overall rigidity of the model, so that the logical framework serves the programme rather than the other way around. At WFD, we have worked with DFID to test a number of new approaches that merge old and new forms of monitoring and evaluation, which will hopefully allow us to better illustrate the contribution we make to democratic strengthening worldwide.

Contextualising ‘Impact’

To do this, we took the three main elements of the logframe in turn. At Impact level, there’s been an increasing move toward using high-level indices (the Worldwide Governance Indicators, Freedom House, etc.) to measure achievement. This is highly problematic because these indices are designed to capture trends, not incremental change. Over the typical two- to three-year time horizon of WFD’s programmes, the data from these indices are mostly useless. Moreover, for smaller organisations like WFD, it is unrealistic to expect that our relatively small levels of investment are going to tilt broad governance indicators at national level.

While the preference was to dispose of indices altogether, we ultimately settled on a compromise whereby the Impact level of a programme would be measured by two indicators, one of which would be index-based. However, programmes are encouraged to look beyond the traditional global indices and explore other options that might be better suited to the programme objective. For instance, in our corporate-level logframe, WFD is using the Varieties of Democracy project’s Liberal Democracy Index because its components most closely align with our core objectives.

The second indicator relies on qualitative stories of change written using a consistent methodology across programmes. These contextualise the impact that WFD is making, demonstrating how changes at institutional level can benefit individuals and organisations through policy advocacy and implementation. Together, these two indicators present a picture of how the context is trending over the period and how WFD is specifically contributing, whether with or against that trend depending on the context, enabling us to situate our work more effectively.
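To make the pairing concrete, here is a minimal sketch of how a programme might record both indicators side by side. The index values, stories, and trend rule below are hypothetical placeholders for illustration, not WFD’s actual reporting system.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactRecord:
    """One year of Impact-level evidence: an index reading (indicator 1)
    paired with the qualitative stories of change gathered that year
    (indicator 2)."""
    year: int
    index_name: str                  # e.g. the index the programme has chosen
    index_value: float               # hypothetical placeholder value
    stories_of_change: list[str] = field(default_factory=list)

def trend(records: list[ImpactRecord]) -> str:
    """Direction of travel of the chosen index over the reporting period."""
    ordered = sorted(records, key=lambda r: r.year)
    delta = ordered[-1].index_value - ordered[0].index_value
    return "improving" if delta > 0 else "declining" if delta < 0 else "flat"

# Hypothetical example: two years of Impact-level data for one programme.
records = [
    ImpactRecord(2015, "Liberal Democracy Index", 0.42,
                 ["Committee scrutiny led to amendments to the budget bill."]),
    ImpactRecord(2016, "Liberal Democracy Index", 0.44,
                 ["Civil society groups testified to committees for the first time."]),
]
# The programme's contribution is then read with or against this trend.
print(trend(records))  # -> "improving"
```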

Capturing processes as outcomes

At Outcome level, the big issue was indicators that are black/white, pass/fail. While this may work (emphasis on may) for other development projects in terms of service delivery, it is manifestly inappropriate for the institutional development questions we deal with at WFD in relation to parliaments and parties. Measuring democracy, or even changes in democratic processes, is often nuanced and context-specific.

Aggregating this across a varied portfolio of programmes, spanning parliamentary representatives, staff, civil society, and political parties, is a challenge. Setting absolute standards masks achievement at the lower end of the scale and over-incentivises selecting partners or problems that are at or near the intended targets at the baseline. Likewise, allowing too much flexibility leaves the programme as a whole unable to demonstrate its cumulative impact, undermining its relationship with donors.

So, we wanted something that would enable us to measure along a spectrum but at the same time give DFID something it could assess systematically. In response, we’ve developed a hybrid solution that seeks to meld a variety of evaluation approaches. This will produce data that both captures changes at programme level across all WFD interventions and aggregates well enough that DFID can see and assess WFD’s broader achievement as an organisation. We are calling this the outcome matrix approach.

The outcome matrix approach draws on existing achievement rating scale approaches and combines them with elements of the outcome mapping methodology to create a measurement tool that enables each programme to identify its progress markers and assess itself against them over time. DFID used to have its Achievement Rating Scale, whereby it judged a project’s progress against its stated outcomes. It allowed for movement over time within projects and gave some means of comparing across projects as well. The problem was that what each score meant was never clearly defined. We’ve tried to mitigate that by mixing in the outcome mapping approach, whereby projects establish their expectations at the outset, together with their stakeholders.

The outcome matrix approach will encourage teams to use outcome mapping methodologies to think through what they need to see, expect to see, would like to see, and would love to see, and to develop a basket of progress markers for each. This way, each programme works toward targets that are appropriate for it, and we judge it on the merits of its own progress rather than its progress relative to others. These four baskets will then be assigned a score from 1 (lowest) to 4 (highest) to enable aggregation across programmes, akin to the old achievement rating scale.
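Here is a minimal sketch of how such a matrix might be represented. The progress markers are invented, and the scoring rule, which awards the highest consecutive basket whose markers are all met, is our own illustrative assumption rather than WFD’s defined methodology.

```python
# The four baskets, in ascending order of ambition, mapped to scores 1-4.
BASKETS = ["need to see", "expect to see", "like to see", "love to see"]

# Hypothetical outcome matrix for one programme: each basket holds progress
# markers mapped to whether they have been achieved so far.
matrix = {
    "need to see":   {"committee meets regularly": True},
    "expect to see": {"committee requests evidence from ministries": True},
    "like to see":   {"committee amendments adopted in legislation": False},
    "love to see":   {"government implements committee recommendations": False},
}

def programme_score(matrix: dict[str, dict[str, bool]]) -> int:
    """Assumed rule: the score (1-4) is the highest consecutive basket in
    which every progress marker has been met."""
    score = 0
    for basket in BASKETS:
        if matrix[basket] and all(matrix[basket].values()):
            score += 1
        else:
            break
    return max(score, 1)  # assumed floor: every programme reports at least 1

print(programme_score(matrix))  # -> 2: "expect to see" met, "like to see" not yet
```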

Throughout implementation, the outcome matrix will be monitored and updated, if necessary, based on events. At the end of each year, each programme will report progress as a score, providing evidence in their annual report to support their self-assessment. This will be reviewed by central M&E staff and by external evaluators as part of the Annual Review process.
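At the portfolio level, the year-end roll-up could then be as simple as the following; the programme names and scores are invented for illustration.

```python
from collections import Counter
from statistics import mean

# Hypothetical year-end self-assessed scores (1-4) across a portfolio.
self_assessments = {
    "Programme A": 2,
    "Programme B": 3,
    "Programme C": 1,
    "Programme D": 3,
}

# A summary of the kind an Annual Review might draw on: the distribution
# of scores across programmes alongside a simple portfolio average.
distribution = Counter(self_assessments.values())
print(f"Mean portfolio score: {mean(self_assessments.values()):.2f}")
for score in range(1, 5):
    print(f"Score {score}: {distribution[score]} programme(s)")
```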

Defining the contribution

Lastly, at Output level, we wanted to shift the conversation away from counting activities and focus on what parliaments and parties actually gain from interacting with us. This enables us to have a more direct theory of change about how we create the right conditions for democracy to be consolidated and practised. We determined that WFD has three principal approaches to catalysing change through its programmes.

First, WFD provides its partners and direct beneficiaries with relevant, in-depth knowledge and technical expertise on parliamentary democracy drawn from practitioners and politicians across the UK and global political spectrum. Second, WFD links its partners and direct beneficiaries to networks and international platforms that promote inclusive democratic policies and practice. Lastly, but perhaps most importantly, WFD brokers relationships within political spheres, recognising that in many cases, the impediments to democratic change lie not within the structure or rules of institutions but in the relationships between political institutions and how these create informal ‘rules of the game.’
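As a rough illustration of the shift from counting activities to tracking these channels, each output could be tagged with the channel it works through. The labels and example outputs below are our own invention, not WFD’s formal taxonomy.

```python
from collections import Counter
from enum import Enum

class Channel(Enum):
    """Illustrative labels for the three ways WFD catalyses change."""
    EXPERTISE = "knowledge and technical expertise"
    NETWORKS = "networks and international platforms"
    BROKERING = "brokered political relationships"

# Hypothetical programme outputs, tagged by the channel they work through
# rather than simply counted as activities.
outputs = [
    ("committee procedures handbook", Channel.EXPERTISE),
    ("regional parliamentary exchange visit", Channel.NETWORKS),
    ("cross-party dialogue on electoral reform", Channel.BROKERING),
    ("training for committee clerks", Channel.EXPERTISE),
]

# Monitoring can then ask what partners gained through each channel,
# not just how many events were held.
print(Counter(channel for _, channel in outputs))
```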

Adding this layer to our monitoring work takes us beyond merely counting the number of participants or products produced. It makes us think more deeply about exactly how we expect our partners to benefit from our programmes and how they can translate those benefits into their parliamentary and political party work. It also allows us to have much more meaningful conversations around value for money and why, in a crowded international development field, our programmes offer a unique contribution.

Going forwards

There is general recognition that monitoring and evaluation for governance work remains an inexact science. It seems, however, that rather than collaborating to adapt and evolve existing approaches to meet the needs of practitioners in the governance field, different segments are proceeding in opposite directions. On one hand, the critiques mentioned earlier in this piece have called for abandonment of logframes and renewed emphasis on qualitative methodologies. On the other, cadres within DFID have pressed on with ever more quasi-scientific, quantitative tools that seek certainty where, perhaps, there is none.

A better approach may be to agree that, as with so many other things in this area of work, adaptability is the critical factor. Where objectives lend themselves to being measured quantitatively, let us do so. Where objectives lend themselves to being measured qualitatively, let us do so. And where a mixed approach is needed, let us develop one. Let the nature of the programme guide us in how we monitor it, rather than our own preconceptions of the evidence we expect to see. But let us also retain a sense of realism that reminds us that we are all ultimately beholden to the political economy of our donors, whose need for predictability and accountability is ever present. As appealing as a revolution may be, evolution is more likely to take us where we want to go.
