Evaluation revisited II: complexity and evaluation in a cleft stick?


We are squeezed to death, between the two sides of that sort of alternative which is commonly called a cleft stick.
(William Cowper, 1782)

In the second of my personal reflections on the Evaluation revisited conference, which took place in May 2010 in Utrecht, The Netherlands, I consider the relationship between complexity and evaluation, particularly the gap between theory and practice, and talk about the cartoonist who was present at the conference.

Complexity and evaluation: in a cleft stick?
In the plenary session, it was argued that one of the diamond standards of the new approach to evaluative practice is the way in which it takes into account complexity and emergence. Complexity was ‘embraced’ by the conference because it was felt to expand participants’ perspectives, to show that all things are interconnected and unpredictable, to open new opportunities, and to allow evaluation practitioners to focus on boundaries of space and time. The simple, complicated, complex and chaotic domains (of the Cynefin framework) are with us at the same time, all the time.

But this pairing of complexity and evaluation does have a fundamental tension, putting us in a cleft stick. In a cleft stick? This idiomatic phrase, first used figuratively in literature in 1782, describes a position where advance and retreat are both impossible. If you’re in a cleft stick, you are in a real fix.

If you argue, as Chris Mowles does (see his comment on a blog post by Pete Cranston on Complexity theory, development and IKM Emergent), that complexity theory

offers us a release from our fantasies of prediction and control… [and] the idea of utilizing complexity theory to bring about a desired change is too redolent for me of the instrumentalism I am trying to avoid.

then there is an intrinsic paradox between complexity and evaluation. Now I understand that there are approaches to complexity other than that espoused by Chris Mowles, and that others argue that monitoring and evaluation can adapt to perspectives of complexity. For example, Ben Ramalingam, Harry Jones and colleagues, in their 2008 paper Exploring the science of complexity: ideas and implications for development and humanitarian efforts, consider that complexity has the following implications for monitoring and evaluation (M&E):

What is needed are higher levels of flexibility in the funding of international aid work, involving less stringent ‘targets’ and requirements from donors. The role of M&E would shift towards valuing learning from unexpected outcomes. This is at the heart of the participatory approach to M&E developed by IDRC, called Outcome Mapping.

But it does seem to me that if evaluation is all about prediction and control – and based on a linear cause-effect paradigm – then there is a fundamental paradox if complexity is something that can’t be harnessed, utilised or even embraced. The blindingly obvious implication of this is that using the perspectives of complexity to improve current approaches to evaluation is going to be tricky – to put it mildly. And what this all says about the sore point of attribution is something that I’m not even going to speculate on here! Now I’m not saying that we shouldn’t evaluate – evaluation is, of course, here to stay – but if we are going to pay any attention to complexity at all, it should at least make us aware of the importance of a broad range of perspectives and of social processes: something which this conference was trying to promote in evaluative practice.

The gap between theory and practice
Another observation I made was the gap between the general assertion that complexity should be part of the ‘diamond’ standard of evaluation and the current approaches and methodologies of evaluation which were presented in the methods market at the conference itself. In this sense, I felt that the evaluation of IKM being undertaken by Chris Mowles and now Anita Gurumurthy was more innovative than I had realised, because they take a complexity perspective, which is reasonably unusual. Both the Initial evaluation review (2008) and the Interim report (2009) can be consulted. However, I do recognise that there needs to be ‘space’ – time, budget, and willingness to be part of an unpredictable process – for such an approach, something that doesn’t usually exist in development interventions, which have to demonstrate value in a short time-frame.

Cartoons
Finally, one of the very nice things about the conference was the presence of the cartoonist, Auke Herrema. I knew of his work from the Smart toolkit for evaluating information projects, products and services (see some of his work under images on the Smart toolkit website), but it was a total delight to see him drawing at the conference. His cartoons were both entertaining and a summary of the proceedings.


2 Responses

  1. Sarah, you’re right to point to the different conclusions one would reach if one thought that one could accommodate insights from the complexity sciences into existing ways of thinking, rather than, as I am proposing, taking a more radical view. The two examples you chose are good illustrations.

    So Ben and colleagues at ODI still privilege the interests of funders and project holders and assume they are able to predict the outcome of development initiatives so many years into the future. I am assuming that what actually happens in development interventions will arise as a result of the interaction of many, many people, and not just those of the funders and project holders. In fact, local, contextual processes are likely to be dominant, not abstract schemes and project proposals conceived at a distance.

    My conclusion would not be that funders need to be more flexible and less stringent with their targets, but that they shouldn’t set targets at all. To set a target for a development intervention implies a cybernetic understanding of human interaction which I reject (see more on this at http://www.reflexivepractice.wordpress.com). It is also an attempt to predict the unpredictable. Targets do not work, they are made to work. They can come to dominate discussion about what is happening in social development at the expense of what may be important to the people on behalf of whom we are intervening in the first place. They also shape what it is and is not possible to say and do.

    In this respect I consider IKME to be a very radical programme and a good example of a more radical approach to development. Programme participants have started out with plans, although in some cases these are quite general, but in many instances have been allowed to adapt and respond to the circumstances they find themselves having to deal with, sometimes abandoning some of what they intended altogether.

    In my view this entirely alters the evaluative task and renders the idea of targets for development deeply problematic. It is indeed important to find out what people intended and why they intended it, but what did they find themselves doing instead, and what account do they give of this? Whether or not they achieved their targets is quite interesting as a line of enquiry, but not nearly as interesting as paying attention to the interplay of intentions that arose as they took up their plans. In my view this latter approach has the potential for a far more complex understanding of the difficulties inherent in social development than merely paying attention to whether targets have been hit or not.

    Your thoughts are perceptive as ever.
    Chris
