The blog was written to be accessible to the wider IATI community, so it avoids many of the technical details of how we use specific agile processes. For example, we use daily standups, sprint boards and more to manage our work. These processes are under continual review, and we aim to be as flexible as possible while maintaining a high standard of work.
In terms of pyIATI, there seems to be some confusion about what it is. Rather than a specific tool, the core focus has been, as mentioned previously, on creating a software library to act as a reference implementation of ‘The Standard’. Other tools can then be built on top of the library without having to deal with the complexities and issues introduced in historic Standard versions (each of which greatly increases implementation difficulty). It should also be noted that pyIATI will by no means be feature-complete by the end of our current set of sprints.
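To illustrate the idea of a reference implementation absorbing per-version complexity, here is a minimal sketch. The names and rules are purely hypothetical, invented for this example; they are not pyIATI's actual API. The point is only that version-specific quirks live inside the library, so downstream tools never branch on Standard version themselves:

```python
# Hypothetical sketch only - not pyIATI's real API.
# Each Standard version carries its own quirks; the library owns them all,
# so a tool built on top calls one function and stays version-agnostic.

VERSION_RULES = {
    # Illustrative, made-up validation rules per Standard version.
    "1.05": lambda activity: "iati-identifier" in activity,
    "2.02": lambda activity: "iati-identifier" in activity
                             and "narrative" in activity,
}

def is_valid(activity: dict, version: str) -> bool:
    """Dispatch to the rules for the declared version inside the library."""
    try:
        return VERSION_RULES[version](activity)
    except KeyError:
        raise ValueError(f"Unsupported Standard version: {version}")

# A downstream tool need not know what changed between versions:
activity = {"iati-identifier": "XM-EX-1"}
print(is_valid(activity, "1.05"))  # True
print(is_valid(activity, "2.02"))  # False - newer version requires more
```

The design choice sketched here (a single dispatch point keyed on version) is what lets many tools share one implementation rather than each re-solving the same historic quirks.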
The project, as required in workplan section 3A, is based on input from a range of people. This includes feedback from the TAG; the business analysts in the tech team; our past and present experience of the many codebases we have to maintain; and general feedback about the future of IATI. This input was analysed to produce the user stories mentioned in the blog post. In total we came up with 50 stories, which we ranked using the MoSCoW method, leaving 7 essential features based on 27 user stories. These were added to an internal Trello board and have been used to guide the direction of development.
During development of pyIATI, we’re working to engineer good code through test-driven development, documentation, consistent linting, human-readable variable names, SOLID principles and more. Each of these is frequently missing from existing IATI code for a number of historical reasons. To meet the requirements of the present and future, we need to look at how to provide not for 50 or 500 publishers, but for 5,000 or 50,000.
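As a small illustration of the test-driven style described above, the tests below are written first and drive a minimal implementation. The function name and behaviour are invented for this example, not taken from pyIATI:

```python
# Hypothetical example of test-driven development: the tests were written
# first, then the smallest implementation that passes them.

def normalise_version(version: str) -> str:
    """Return a canonical 'major.minor' Standard version string.

    e.g. '2.2' -> '2.02', while '1.05' is already canonical.
    """
    major, minor = version.split(".")
    return f"{int(major)}.{int(minor):02d}"

# Tests, runnable with pytest, that drove the implementation:
def test_pads_minor_version():
    assert normalise_version("2.2") == "2.02"

def test_leaves_canonical_form_unchanged():
    assert normalise_version("1.05") == "1.05"
```

Writing the tests first pins down the expected behaviour before any code exists, which is one of the practices that keeps a reference library trustworthy over time.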
Take the Dashboard, for example: it is built on an architecture that cannot scale much further without sacrificing key features such as daily updates. Even now it can take over 48 hours to update after new data is published. pyIATI is designed to streamline processes and act as a new architecture that can scale for the future in a way that current tools cannot.
Hopefully this gives a bit more information than could reasonably be conveyed in the blog post. Another post is due in the near future explaining in greater depth why pyIATI is so vitally important to the continued success of IATI. If there are any more questions, we could look to answer them there.