We are squeezed to death, between the two sides of that sort of alternative which is commonly called a cleft stick.
(William Cowper, 1782)
In the second of my personal reflections on the Evaluation revisited conference, which took place in May 2010 in Utrecht, The Netherlands, I consider the relationship between complexity and evaluation, particularly the gap between theory and practice, and talk about the cartoonist who was present at the conference.
Complexity and evaluation: in a cleft stick?
In the plenary session, it was proposed that one of the diamond standards of the new approach to evaluative practice is the way in which it takes account of complexity and emergence. Complexity was ‘embraced’ by the conference because it was felt to expand participants’ perspectives, show that all things are interconnected and unpredictable, open new opportunities, and allow evaluation practitioners to focus on boundaries of space and time. The simple, the complicated, the complex and the chaotic (of the Cynefin framework) are with us at the same time, all the time.
But this pairing of complexity and evaluation does have a fundamental tension, putting us in a cleft stick. In a cleft stick? This is an idiomatic phrase, first used figuratively in literature in 1782, which describes a position where advance and retreat are both impossible. If you’re in a cleft stick, you are in a real fix.
If you argue, as Chris Mowles does (see his comment on a blog post by Pete Cranston on Complexity theory, development and IKM Emergent), that complexity theory
offers us a release from our fantasies of prediction and control… [and] the idea of utilizing complexity theory to bring about a desired change is too redolent for me of the instrumentalism I am trying to avoid.
then there is an intrinsic paradox between complexity and evaluation. Now I understand that there are other approaches to complexity than the one espoused by Chris Mowles, and that others argue that monitoring and evaluation can adapt to perspectives of complexity. For example, Ben Ramalingam, Harry Jones and colleagues, in their 2008 paper Exploring the science of complexity: ideas and implications for development and humanitarian efforts, consider that complexity has the following implications for monitoring and evaluation (M&E):
What is needed is greater flexibility in the funding of international aid work, involving less stringent ‘targets’ and requirements from donors. The role of M&E would shift towards valuing learning from unexpected outcomes. This is at the heart of the participatory approach to M&E developed by IDRC, called outcome mapping.
But it does seem to me that if evaluation is all about prediction and control – and based on a linear cause-effect paradigm – then there is a fundamental paradox if complexity is something that can’t be harnessed, utilised or even embraced. The glaring implication of this is that using the perspectives of complexity to improve current approaches to evaluation is going to be tricky – to put it mildly. And what this all says about the sore point of attribution is something that I’m not even going to speculate on here! Now I’m not saying that we shouldn’t evaluate – evaluation is, of course, here to stay – but if we are going to pay any attention to complexity at all, it should at least make us aware of the importance of a broad range of perspectives and of social processes: something which this conference was trying to promote in evaluative practice.
The gap between theory and practice
Another observation I made was the gap between the general assertion that complexity should be part of the ‘diamond’ standard of evaluation and the evaluation approaches and methodologies actually presented in the methods market at the conference itself. In this sense, I felt that the evaluation of IKM being undertaken by Chris Mowles and now Anita Gurumurthy was more innovative than I had realised, because they are taking a complexity perspective, which is reasonably unusual. Both the Initial evaluation review (2008) and the Interim report (2009) can be consulted. However, I do recognise that there needs to be ‘space’ – time, budget, and willingness to be part of an unpredictable process – for such an approach, something that doesn’t usually exist in development interventions, which have to demonstrate value in a short time-frame.
Finally, one of the very nice things about the conference was the presence of the cartoonist, Auke Herrema. I knew of his work from the Smart toolkit for evaluating information projects, products and services (see some of his work under images on the Smart toolkit website), but it was a total delight to see him drawing at the conference. His cartoons were both entertainment and a summary of the proceedings.