I'm only seeing Reid's input today.
This is somewhat similar to an idea starting to form in my mind about a different approach to data quality assessment. Rather than assessing elements/attributes individually, we could check whether the data is sufficient/good enough to provide answers to specific questions (i.e. data uses). For one thing, it would highlight the dependencies between data elements, and the importance of attributes (which tend to be overlooked in current assessment approaches). It would also make the impact of quality issues more concrete, especially if the questions are relevant to the publisher itself.
I believe that with a set of 8-10 questions we would cover the most important use cases (and most elements/attributes). For instance:
- How much will the publisher's operational projects disburse in country x in the next 12 months? (this requires planned disbursements broken down by quarter)
- Which national NGOs are involved in the delivery of the publisher's activities? (this requires all the details on the implementing partner)
- Has the publisher implemented projects in support of specific sectors (some work would be needed to create a list of sectors where 5-digit codes are necessary, e.g. primary education, basic health infrastructure)? (this would test the use of 5-digit DAC codes instead of 3-digit codes)
- Has the publisher implemented projects at the district/village level (choosing a relevant admin level, e.g. admin 3)? (this would test geographic data)
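To make this concrete, each question could in principle be turned into a mechanical check against a publisher's IATI XML. Here is a minimal sketch for two of the questions above: the element/attribute names (`sector`, `participating-org`, `@role`, `@vocabulary`) follow IATI 2.x conventions, but the pass/fail rules and the sample fragment are simplified assumptions for illustration only.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative IATI-style activity fragment (not real data).
SAMPLE = """
<iati-activity>
  <sector vocabulary="1" code="11220"/>
  <participating-org role="4" type="22">
    <narrative>Example National NGO</narrative>
  </participating-org>
</iati-activity>
"""

def answers_sector_question(activity):
    """Can we tell which 5-digit DAC sectors the activity supports?

    Requires at least one sector using the DAC 5-digit vocabulary ("1")
    with a full 5-digit code -- a 3-digit code would fail this check.
    """
    return any(
        s.get("vocabulary") == "1" and len(s.get("code", "")) == 5
        for s in activity.findall("sector")
    )

def answers_implementer_question(activity):
    """Can we identify implementing partners (role 4) with a name?"""
    for org in activity.findall("participating-org"):
        if org.get("role") == "4" and org.findtext("narrative"):
            return True
    return False

activity = ET.fromstring(SAMPLE)
print(answers_sector_question(activity))       # True for this sample
print(answers_implementer_question(activity))  # True for this sample
```

The point of framing checks this way is that an activity either supports answering the question or it doesn't, which ties each element/attribute directly to a concrete use.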
I'd be happy to work offline with others to develop a set of questions and see how they work as a basis for a quality assessment.