Continuing the discussion from Reflecting on the upgrade process:

Clearly some things (e.g. vaccinations, schoolrooms) are easier to quantify than others (e.g. democracy). That’s not a problem we’re going to solve with XSD.

In the proposed standard:

  • There are (qualitative) comment elements available at the same level as the (quantitative) actual and target elements. There are narrative description elements available at various levels.

  • Disaggregation can be used to represent categorical data like ethnicity.

  • Indicator definitions can also provide nuance and context to what’s being measured.
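To make the first two bullets concrete, here is an illustrative sketch of a result block. Element names follow the draft IATI 2.x activity standard, but the exact structure (in particular the `dimension` element for disaggregation) is an assumption and may differ from the final schema; the indicator and district values are invented for the example.

```xml
<!-- Illustrative sketch only: element names follow the draft IATI 2.x
     activity standard; exact structure may differ in the final schema. -->
<result type="1">
  <title>
    <narrative>Classrooms rehabilitated</narrative>
  </title>
  <indicator measure="1">
    <title>
      <narrative>Number of classrooms rehabilitated to standard</narrative>
    </title>
    <period>
      <period-start iso-date="2014-01-01" />
      <period-end iso-date="2014-12-31" />
      <target value="40" />
      <actual value="35">
        <!-- Qualitative comment at the same level as the quantitative value -->
        <comment>
          <narrative>Five classrooms were delayed by the rainy season.</narrative>
        </comment>
        <!-- Hypothetical disaggregation by district -->
        <dimension name="district" value="North" />
      </actual>
    </period>
  </indicator>
</result>
```

The point is that the quantitative `actual`/`target` values and the qualitative `comment` narratives sit side by side, so neither kind of reporting crowds out the other.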

Beyond that, I’m curious to know what more we could envision doing to support qualitative results. Could you perhaps provide some examples of use cases that the proposed standard doesn’t support?

Here are some stories that these changes make possible, none of which the current standard supports:

  • Someone managing a housing reconstruction project in a post-disaster situation can submit the same IATI report to different stakeholders (donors, home office, host country, etc.) instead of spending their time reformatting their data to accommodate mutually incompatible computer systems.

  • A researcher exploring whether bednet distribution is effective can query IATI-reported data on bednet delivery, across multiple donors/implementers/countries, and compare that with malaria incidence data.

  • A local investigative journalist can compare IATI-reported results (say, on schoolroom rehabilitation) with facts on the ground, and possibly bring corruption to light.

  • The administrator of a large bilateral donor agency can answer to her legislature with precision and detail when they ask what was accomplished with taxpayer money.

I’m not sure what more we could expect from a humble little XML element.

The proposed schema is agnostic about result types: it supports outcome data just as easily as output data. See https://github.com/IATI/IATI-Codelists/blob/version-2.01/xml/ResultType.xml . If you want, you can report not only outputs (e.g. hours of teacher training provided) but outcomes (e.g. improved math scores) and impacts (e.g. greater social mobility). The XSD doesn’t care.
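As a sketch of how little changes between those cases: the same `result` element covers all three, distinguished only by its `type` attribute, which draws on the ResultType codelist linked above (where, as of version 2.01, 1 = Output, 2 = Outcome, 3 = Impact). The title text here is invented for illustration.

```xml
<!-- Sketch: one element shape for outputs, outcomes, and impacts.
     Only @type changes, per the ResultType codelist (1/2/3 in 2.01). -->
<result type="2">
  <title>
    <narrative>Improved math scores among primary students</narrative>
  </title>
  <!-- indicators and periods follow exactly as for an output result -->
</result>
```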

We have to be careful with attribution: does intervention X get all the credit for improvement Y, and is there even a causal relationship at all? That’s what Ph.D. candidates are for. But researchers and evaluators can’t even get started if we are incapable of counting our outputs; in that case they’re left with crude proxies like dollars disbursed, which is the least informative metric of all.
