My name is Patrick Moriarty. I’m the director of IRC’s Triple-S project, and this is my first post at waterservicethatlast – although I also blog sporadically on my own site. This blog was started by Stef Smits, head of research in Triple-S, but is now being opened up to other project staff – and, I hope, eventually beyond the project to other IRC (and non-IRC) projects and people struggling with the provision of sustainable water services. In the meantime, expect at least one new post per week from different members of the Triple-S team in Ghana, Uganda and the Netherlands.
Top item on an overloaded agenda at the moment is the upcoming mid-term assessment of our Triple-S (link) project. As we prepare the terms of reference for the exercise, we’ve been engaging with a number of external thinkers to help us create something that can meet two objectives: judging whether we are on track to achieve the goals we set ourselves at the start of the project, while also providing a mirror in which we can look at and assess the relative success (or otherwise) of the methods and activities we are using to achieve them.
As with much of IRC’s work, Triple-S exists in a sometimes productive, often uncomfortable, zone of tension: between, on the one hand, the demands of our donors and our own ideas of ‘good project management’, with all the inherent requirements for clear goals and objectives monitored over time; and, on the other, our own understanding of the rural water sector as a complex adaptive system, in which the one thing we can be sure of is that whatever predictions we make now for five years down the road are bound to be wrong!
To tackle this unavoidable paradox, we have adopted (and been permitted to adopt by our donor) a flexible, outcomes-based approach to project management. While maintaining a broad set of overarching project goals (expressed as outcomes) that focus on the improved delivery of water services, we have a relatively free rein to develop intermediate outcomes annually, informed by frequent (four-monthly) learning and reflection meetings that involve not only project staff but also important members of ‘learning alliances’ (link) of key sector actors and champions. This allows us, in a nutshell, to remain focussed on our overall vision while being flexible as to how to achieve it – exploring multiple possible actions, following up on those that work and dropping those that don’t.
While a very stimulating – and, we strongly believe, appropriate and so far successful – way to work, this throws up huge challenges in terms of monitoring and reporting. And, while we believe that there is really no alternative approach to delivering change within a complex adaptive space – one where we are but one actor amongst many and where the only constant is change – it calls for an unusual degree of flexibility and trust from a donor. This flexible and reflexive approach of prioritising activities and outcomes annually, while holding the final end-of-project targets (more or less) fixed, is testing existing systems of project M&E to their limits.
While we can just about cope with this as a project, it will demand a special set of skills from external evaluators, who will have to sift through three years’ worth of constantly evolving plans – and then help us to make sense of them. They will need to behave like conventional project evaluators in assessing progress towards agreed mid- and long-term outcomes, while being sympathetic to, and able to help us reflect upon, our methodology for delivering change.
In the end, our mid-term assessment will, as always, be a compromise between the competing demands of donors, managers and practitioners – between the desire for an ‘objective’ measure of progress against milestones and for an effective mirror in which to see and assess ourselves and our actions as we work to achieve that same progress. We hope to have finalised the outline terms of reference for the evaluation in the next week or so, and to have identified a team (probably two people) to carry it out for us before mid-March (the evaluation will then take place mainly between mid-March and end May, with final report submission in June). If you are an evaluator with an interest in complexity theory and adaptive/learning-focussed approaches (or know of any such) … we’d love to hear from you!
Once the terms of reference are ready, they’ll be posted here. In the meantime, if this piece of evaluation-related angst has caught your attention, you may be interested to read more about the learning framework that’s been developed for Triple-S – a framework that explicitly tries to provide both for assessment of progress against outcomes and for reflection on the approach to change being used.
Complex adaptive systems are systems in which a large number of independent actors (sometimes called agents) interact with each other according to a set of rules or behaviours that can change over time. Complex adaptive systems are known for displaying emergent properties – patterns that arise from the interaction of the agents but that cannot be predicted in advance. Examples of complex adaptive systems include ecosystems, social systems, companies, ant colonies, countries and villages – they are everywhere!
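For readers who like to see emergence rather than just read about it, here is a purely illustrative aside (nothing to do with water services, and not part of the Triple-S toolkit). Conway’s Game of Life is not itself adaptive – its rules never change – but it is the classic toy demonstration of patterns emerging from simple local interactions that are nowhere stated in the rules themselves:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life, on a set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is already alive and has exactly 2.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A 'blinker': three cells in a row. Each agent follows the same local rule,
# yet the group as a whole oscillates with period two -- an emergent pattern.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Running `step` twice on the blinker returns it to its starting state – a stable, global behaviour that you would struggle to predict by staring at the neighbour-counting rule alone. That, in miniature, is the point about complex systems: the interesting behaviour lives at the level of the whole, not of any one actor.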