Graeme Ramshaw, WFD’s Director of Research and Evaluation, reflects on the M&E team trip to the American Evaluation Association conference last month.
2015 is the International Year of Evaluation. By year's end, well over 80 evaluation-related conferences and events will have been held across the world. One of the largest of these, the annual American Evaluation Association (AEA) conference, was held in Chicago last month, and WFD was in attendance.
We presented a mixed-methods evaluation of our parliamentary programme in the Democratic Republic of Congo under AEA's new Democracy and Governance stream. The presentation illustrated the need to look beyond conventional methodologies when evaluating performance in complex programmatic and contextual circumstances. It also emphasised the inherent challenge of applying evaluation frameworks retrospectively, in the absence of a clear programme logic and theory of change.
In many ways, WFD's experience echoed core themes of a conference where participants were both celebrating the raised profile the International Year had given evaluation and lamenting the persistent tension between programme implementers and evaluators. The assumption that evaluation exists merely for accountability at the end of the programme cycle clearly remains pervasive among programme managers across a variety of organisations and fields. At the other extreme, a number of conference sessions documented efforts to run impact evaluations conceived before an implementation contract had even been awarded. Clearly, a balance needs to be struck.
At WFD, we are adapting the way we do evaluation to integrate it better with our programme design processes. Beyond reviewing logframes, we are working with programme teams to identify their intended outcomes through a more iterative process, involving their stakeholders as much as possible. By developing strong theories of change at an early stage, we can make our monitoring, and ultimately our evaluation, more meaningful: testing assumptions and interrogating instances where the programme logic has deviated from its expected path.
It is too early to assess the impact of the International Year of Evaluation. But we hope its legacy includes greater recognition among policymakers and implementers that good programme and evaluation design at the outset enables good evaluation at the end of the programme. WFD and our colleagues in Europe look forward to working with our North American counterparts in the AEA Democracy and Governance stream to move this agenda forward and ensure that the advances of 2015 are not forgotten in 2016 and beyond.