Collecting data for the Aid Transparency Index
Here at Publish What You Fund we’ve spent the last four months collecting data for the 2013 Aid Transparency Index. It’s a burdensome task, but it has highlighted to me just how far the aid transparency movement has come since we piloted our Index in 2011.
My first role at Publish What You Fund was collecting data for the 2011 pilot Index, a job which gave me an insight into the practical difficulties of finding and comparing aid information. Back then, we worked with 49 CSO partners to assess the transparency of 58 organisations. What became immediately clear was the lack of information available and how difficult and time-consuming it was to find what was published. Where information existed at all, it was often hard to locate on donor websites, was patchy, and was frequently locked away in PDF documents or other non-machine-readable formats, meaning it couldn’t easily be compared with information found for another donor.
At the time I was doing this work, the IATI standard had just been agreed, and the concepts of data comparability and interoperability were quite new to some donors. I remember conversations with donors who couldn’t understand why they should declare whether a project was a grant or a loan, since all their assistance was grants. (Didn’t I know about all their unique lending mechanisms? And why would you compare a grant to a loan anyway?) As far as they were concerned, it didn’t matter that information was in PDFs; it was still public.
The 2011 Index taught me a great deal about the value of data comparability. At the end of the data collection process, I sat down with a colleague to standardise all the pieces of information we had collected, ensuring we had scored donors fairly across the board. This amounted to checking the availability and consistency of over 5,000 individual data points. We were trying to use specific pieces of information to make a wide-ranging assessment, from an office in London with a high-speed internet connection and with a good understanding of the development landscape, and it was a real struggle. If this was donors making their aid transparent then they weren’t making it easy.
In 2012 we used a web-based data collection platform to store and present the data, but the process was still entirely manual because of the vast range of places and formats in which donors publish aid information.
This year things have begun to change. We’ve designed a tool which automatically collects and tests the quality of the IATI data that donors are publishing. But make no mistake: whilst the amount of information published to IATI has greatly increased since 2011, a vast amount of manual data collection is still needed. What the collection tool really highlights is the power of IATI data and its potential as it becomes more comprehensive. By publishing to IATI, donors are providing their information in a standardised, comparable format. They are providing aid information that is useful and more meaningful, because it can be compared across donors, sectors, countries or all three.
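The point about comparability can be illustrated with a small sketch. The XML below is hypothetical and heavily simplified relative to the real IATI activity schema (the donor names, identifiers and figures are invented), but it shows the idea: once every donor publishes in the same structure, a single generic parser can read and compare all of them, with no per-donor scraping of websites or PDFs.

```python
# A minimal sketch of cross-donor comparison over standardised,
# IATI-style XML. The feeds below are simplified, hypothetical
# examples loosely modelled on the IATI activity format.
import xml.etree.ElementTree as ET

DONOR_FEEDS = {
    "donor-a": """<iati-activities>
      <iati-activity>
        <iati-identifier>A-001</iati-identifier>
        <transaction><value currency="USD">150000</value></transaction>
      </iati-activity>
    </iati-activities>""",
    "donor-b": """<iati-activities>
      <iati-activity>
        <iati-identifier>B-042</iati-identifier>
        <transaction><value currency="USD">90000</value></transaction>
        <transaction><value currency="USD">60000</value></transaction>
      </iati-activity>
    </iati-activities>""",
}

def total_transactions(xml_text):
    """Sum every transaction value in one donor's feed.

    Because the structure is shared, this one function works
    unchanged for any donor publishing to the same format.
    """
    root = ET.fromstring(xml_text)
    return sum(float(v.text) for v in root.iter("value"))

# The same parser handles every donor, so totals are directly comparable.
totals = {donor: total_transactions(xml) for donor, xml in DONOR_FEEDS.items()}
```

Contrast this with the pre-IATI situation described above, where each donor's figures had to be hunted down and transcribed by hand before any comparison was possible.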
In the future, I expect to see more and more IATI data included in the Index, and each year I expect the manual data collection to become less burdensome. This reflects how the wider aid transparency agenda is moving forward: increasingly, donors understand the value of comparable data. The challenge now is to encourage wide-ranging use of the data being produced, whilst keeping up the pressure on donors to constantly improve the breadth and quality of their aid information.