Per Petya Kangalova's suggestion that we collaborate out in the open…

I’ve been doing some thinking about results data use (perhaps not surprising given my new gig!). At the last TAG, we talked a lot about how the <result> element should be tweaked, but I’m wondering if that same group and others would be interested in hashing out a roadmap for doing results better, as publishers, as tech vendors, and as advocates for seeing the standard used maximally. Perhaps this could be thought of as a follow-on to earlier sessions at previous TAGs, or a space to tackle recent issues related to results.

I’m sure I’ll leave people out, but off the top of my head, I’m remembering Herman van Loon, SJohns, Pelle Aardema, Mike Smith, and Steven Flower have been active in these types of threads.

Maybe there are two discussions here - one more technical, one more practical in terms of guidance, use cases, examples, lay-speak, engaging the M&E community more fully, etc.

Thoughts?

Comments

Bill Anderson

If there is to be another session on results, wouldn’t it be better to focus on the usage of results data beyond the M&E community? How, for example, does one compare all results in a sector in a country? Is it possible to compare results between publishers?

Yohanna Loucheur
bill_anderson:

How, for example, does one compare all results in a sector in a country? Is it possible to compare results between publishers?

While this is an important discussion, it seems to me that it would go beyond the scope of the IATI standard. Issues related to comparison, aggregation, etc. of results relate to the content (e.g. which indicators are used) rather than the container (i.e. the IATI standard). The IATI standard is agnostic about the choice of results indicators; these discussions should be happening in the relevant thematic/sector expert communities.
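To make the container/content distinction concrete, here is a minimal sketch of a 2.x-style `<result>` block. The structure (result, indicator, baseline, period, target, actual) is what the standard fixes; the indicator wording and all values below are invented for illustration, and every publisher chooses their own:

```xml
<!-- Hypothetical example; indicator title, dates and values are
     illustrative, not taken from any real publisher's data. -->
<result type="2"> <!-- type 2 = Outcome -->
  <title>
    <narrative>Improved access to clean water</narrative>
  </title>
  <indicator measure="1" ascending="1"> <!-- measure 1 = Unit -->
    <title>
      <narrative>Number of households with a safe water source</narrative>
    </title>
    <baseline year="2016" value="120"/>
    <period>
      <period-start iso-date="2017-01-01"/>
      <period-end iso-date="2017-12-31"/>
      <target value="500"/>
      <actual value="430"/>
    </period>
  </indicator>
</result>
```

The container tells you where to find a target and an actual; it says nothing about whether two publishers’ "households with a safe water source" indicators are defined the same way.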

On the other hand, perhaps there are issues related to how the IATI standard is used (e.g. how data is published in IATI datasets) that make things even harder when trying to compare or aggregate data across publishers. That could well be worth discussing.

Reid Porter
YohannaLoucheur:

While this is an important discussion, it seems to me that it would go beyond the scope of the IATI standard. Issues related to comparison, aggregation, etc. of results relate to the content (e.g. which indicators are used) rather than the container (i.e. the IATI standard).

My reading of the TAG session guidelines is that we should prioritize content over container. I wasn’t involved in alllll of the results discussions last year or this particular Partos IATI Results workshop, but the conclusion from both seems to be “the result element is good enough to be getting on with.”

I’m trying to think of what can actually be achieved in such a workshop setting. We can definitely share worked examples (sounds like Herman van Loon has already volunteered, and while I don’t know if she’s coming, practical use cases like the one Thea Schepers shared in Copenhagen would be most welcome). We could also share challenges, lessons learned, etc., much like the Partos workshop I referenced above did. But could we actually get to a set of action items for how to “do better,” whatever that looks like given our particular relation to the standard? I’m envisioning something like an action plan that sets reasonable goals for incrementally improving results data by publishers/tech firms, plus documented use cases aimed beyond the IATI-verse to show outsiders the potential benefits. Meh?

If there are other results-related proposals brewing, please jump in. Taryn Davis - anyone coming from Dev Gateway/RDI? leo stolk - seems I left you off the initial list of potentials ;-) Anyone interested in hopping on the phone later this week to hash something out?

Herman van Loon

I would be happy to share some of our IATI results pilot work, which includes the visualization of IATI results data from multiple publishers. The challenge for the TAG discussion, in my opinion, is to avoid methodological M&E discussions and focus more on practical use cases.

Reid Porter

Have you already submitted something? If not, wanna join me in this Google Doc? I stubbed out what I’m thinking and left room for you, Pelle Aardema, and Herman van Loon to do your song and dance :)

Still waiting to hear from a few others but I’ll send them in that direction. Thanks!

Reid Porter

Perfect, thanks Herman van Loon. I think Taryn Davis from Dev Gateway and her Results Data Initiative colleagues are going to weigh in, and then I’ll submit later tonight/early tomorrow. Cheers!
