Reporting on Democracy Support: Stories vs. Graphs

Commentary

WFD embarked on a full overhaul of its monitoring system and launched a number of new tools in April 2019. Now, more than two quarters into this work, we can share some of the lessons we have learnt.
Authors

Sonja Wiencke

Humans make sense of things based on stories. However, stories are not a good way to make judgements – they can obscure evidence.

Governance and oversight of the democracy- and governance-support sector rely heavily on reporting, specifically reporting in a narrative format (logframes – logical frameworks that detail the objectives, outputs, activities, and so on of a programme – tend to supplement a report rather than form the basis of its analysis). Donors, implementers and auditors all seem to have arrived at an understanding that a well-managed programme involves regular, multi-page documents telling the story of what progress has been made, and providing elaborate explanations of how money was spent and risks were managed.

This isn’t surprising, because humans process information not as snippets of evidence, but as stories. However, that type of reporting, which in some parts of the sector has come to be largely synonymous with “monitoring”, is only useful for certain purposes. It is limiting for the following reasons:

  • Information is available only on a quarterly basis, which makes any timely action in response impossible. If we want to be adaptive and politically savvy about how we manage programmes, we can’t wait for three months to pass to follow up on any interesting monitoring data.
  • Reports cost an enormous amount of staff time, both for the people writing them and (hopefully) for those reading them. This distracts from more valuable parts of programme work, including reflection and analysis.
  • Information that is ‘hidden’ in a ten-page report cannot be analysed, aggregated or compared across programmes or portfolios. A few months ago, had I tried to answer a question such as “how effective is our work on women’s political participation across different programmes?”, I would have had to read in the region of 60 reports to get some rather vague, albeit beautifully narrated, answers.
  • Information in ten-page reports will be quickly lost from organisational memory. Let’s be honest, how many people are going to read all of these reports, let alone remember them?

With a view to escaping these limits of narrative reporting, WFD embarked on a full overhaul of its monitoring system and launched a number of new tools in April 2019. Now, more than two quarters into this work, we can share some of the lessons we have learnt.

One of the main differences from the conventional way of monitoring is that we now store all data about activity delivery and results in a custom-built database, called the Evidence and Impact Hub.* This database collects and expresses largely the same information that you would expect in a report: details about the activities we deliver – including their aims, results, and participants – and evidence of the real-life changes in democratic institutions that we observe. All we did was translate the key parts of reports into a series of online forms. However, our teams now fill in this information on an almost real-time basis. That means that anyone in the organisation can see what a programme is doing, and how well, at any time, and can take appropriate action in response – email a colleague to ask how they achieved an impressive result, follow up with a team that appears to be behind schedule, or find out who does what on women’s political participation.
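To make the idea concrete, here is a minimal, purely illustrative sketch of what logging and querying structured activity records might look like. The `Activity` structure, field names, tags and example records are assumptions made for illustration only; they are not the actual schema of the Evidence and Impact Hub.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record structure: the real Evidence and Impact Hub schema is not public.
@dataclass
class Activity:
    programme: str            # which programme delivered the activity
    title: str                # what was delivered
    delivered_on: date        # entered in near real time, not recalled at quarter-end
    aims: str                 # what the activity was meant to achieve
    participants: int         # who took part
    themes: list[str] = field(default_factory=list)   # e.g. "women's political participation"
    results: list[str] = field(default_factory=list)  # observed changes, added as they emerge

# A toy "database" of logged activities (invented examples).
activities = [
    Activity("Programme A", "Committee scrutiny workshop", date(2019, 5, 14),
             "Strengthen budget oversight", 24, ["financial oversight"]),
    Activity("Programme B", "Women MPs mentoring scheme", date(2019, 6, 3),
             "Support newly elected women parliamentarians", 12,
             ["women's political participation"],
             ["Mentees tabled three amendments"]),
]

def by_theme(records: list[Activity], theme: str) -> list[Activity]:
    """Answer a cross-programme question without reading dozens of reports."""
    return [a for a in records if theme in a.themes]

for a in by_theme(activities, "women's political participation"):
    print(a.programme, "-", a.title, "-", a.results)
```

Once activities and results live in structured records along these lines, a cross-programme question becomes a filter rather than a reading exercise.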

This is admittedly not how humans make sense of information. Because we are developing the entire MEL system in close consultation with all stakeholders, we have gone from a strictly quantitative database to a version that holds both qualitative and quantitative data – we have re-introduced some short paragraphs of storytelling. That way, information needs to be entered only once, and we can display and export it in a variety of ways to fit the needs of London-based management, donors, and learning-oriented MEL people. We have dashboards, automatic summary reports, and all kinds of customised exports on request.

The point of this story is perhaps that stories will always be how humans make sense of things, but stories need an evidence base. The conventional way of reporting consists almost exclusively of stories, with no specific requirements on the evidence forming their foundation. That makes reports harder to assess, compare, analyse, and learn from.

The quantitative data we now have on activities and results is a much more reliable base for a story or report, because every aspect of programme work is represented over time. It does not depend on a human remembering everything accurately at the end of a quarter, and it shows us gaps in our logic or evidence base early enough to fill them.

But the data does need the story to go along with it. Currently, our programme teams don’t look at a graph and immediately use the information in it to change how their programme is running – that still requires us to digest, discuss and make sense of the information together.

However, we have made some progress in tackling some of the problems of reporting in the governance and democracy sector:

  • information is available in time to make decisions on adaptation;
  • teams report spending (usually) significantly less time on the database than they used to spend on reporting;
  • information is visible to anyone in the organisation in a format that makes it easy to understand and analyse – I can find information and tell you about our work on women’s political participation within about five minutes;
  • information is kept so that everyone can query every result and activity ever logged, at any time in the future.

Our solutions may not be perfect or the best fit for everyone, and we’re anxious to keep learning. So, we’d like to invite other stakeholders in the governance sector to share their learning from monitoring – the more we can standardise and align our bases of evidence, the more we can share stories that contribute to sector-wide learning.