Around the MandE table: a cooking lesson?

Much has happened since Simon and I started working on this paper about the monitoring and evaluation of knowledge management (M&E of KM; see the original post here), and the cooking lesson continues, for us anyway and hopefully for you too, since in this case there are not too many cooks!

At the M&E cooking class, there are never too many cooks (Photo credits: vår resa)

On the KM4DEV mailing list, there has been a useful exchange on this topic of M&E of KM, which has triggered more reflections on our side about how to approach this paper. By the way, special thanks to Sarah Cummings, Roxane Samii and Patrick Lambe for getting this discussion going!

In a recent blogpost, Simon introduced one of our suggested theoretical models to address the different paradigms (what I profanely refer to as ‘world views’) on knowledge management, offering a spectrum from positivist to constructionist and from cognitivist to social learning.

In this post I’d like to share a refined version of the framework that we would like to offer to your scrutiny. This framework will eventually include a series of questions helping to crack the nuts for the M&E recipe, but for now let’s focus on the recipe itself.

When developing M&E activities around knowledge management, from the design phase to the evaluation of the whole process, we suggest an approach with four distinct phases, each playing a role in cooking M&E for KM:

  1. Initial appraisal – Sorting the hats and dishing out the invitation.
  2. Framework design – Getting guests aligned.
  3. Implementation – Serving guests.
  4. Post-assessment – Tasting the recipe and improving it.

The initial appraisal is usually overlooked, but it contains the crucial assessment of the paradigms / world views used to guide the M&E approach, as well as of the roles and responsibilities that lead to good governance of these M&E undertakings.

World views are by and large dismissed as irrelevant philosophical discussion, rarely spoken of, let alone addressed explicitly. However, they can have crucial repercussions on the perspectives taken by the various people involved in the monitoring of knowledge (management), because the philosophical traditions on which those perspectives rest can vary significantly. If these perspectives are not explicitly tackled, they can lead to differing opinions, potentially provoking defiance, mistrust and acrimony among the parties involved.

In turn, these misunderstandings and acrimony can lead to additional time spent clarifying expectations, to amplified biases (positive or negative interpretations of the more subjective qualitative approaches), to a reduced willingness to cooperate on M&E, and ultimately to a widening of the gap that typically characterises Northern and Southern development agents. Addressing these world views is therefore a precondition for philosophical soundness, equilibrium in partnership, and efficiency and effectiveness.

As for the roles and responsibilities involved in M&E activities, one could consider:

  • Patrons: who are financing or commissioning the M&E activities. A typical example would be the donor of a project.
  • Account handlers: who are steering the activities (sometimes the same party as the funder, sometimes a team that usually ends up designing the M&E activities).
  • Implementers: who are the team leading the activities that we are monitoring.
  • Monitors: who are the team effectively carrying out the monitoring, i.e. collecting the data.
  • (Boundary) Partners: who are potentially affected by the activities we are monitoring and may therefore be contacted or interviewed for their inputs.
  • Beneficiaries: who are the ultimate beneficiaries of the activities that we are monitoring, beyond the partners mentioned above.

These roles do not necessarily need to be discussed in detail at this stage, but knowing what stake each party has in conducting M&E is central to the success of these undertakings. The clearer the picture, the easier it becomes to accept the purposes that guide the design of an M&E framework.

Past this stage – hopefully clarified by a joint discussion between parties involved – comes the framework design. This phase defines the recipe that will be served (i.e. the set of M&E activities) and offers a comprehensive view on all the ingredients required to cook the recipe:

  • Purposes for which M&E is conducted;
  • Levels of intervention at which M&E activities are targeted;
  • Monitoring areas (the various aspects that will be monitored);
  • Resource considerations (budget available for M&E, timeframe, capacities required);
  • Tools and approaches to monitor KM activities (the playground!);
  • Final roles and responsibilities (revised and specified in detail after careful consideration of all the above).

These ingredients were introduced in the original post, but perhaps what matters here is to show our ambition to sort the set of M&E tools and methods into the following table:

The table of tools we propose to adopt

The third phase (implementation) is all about serving the guests, i.e. implementing the M&E activities: collecting data, analysing them, reflecting upon them to make recommendations, and finally using those recommendations to improve decisions and inform activities.

Finally comes the ex-post assessment phase, which gives hints on the savour of the dish served and provides ideas for further improvement. As with any sound learning approach, one has to review what happened and draw lessons for ongoing improvement. This holds true even after a collective initial appraisal which, if conducted correctly, should reduce the need for improvement.

Do you want to join the cooking class with us? We are learning by doing, and we believe this is a recipe for learning… What do you reckon?


2 Responses


    • Chris’s comment goes to the heart of the matter – are monitoring and evaluation an end in themselves or the means to an end? If the second, then the ‘frame’ is predetermined – by funders, community, experts or individuals. Each will have different end-points in mind: OK if they recognise this, not OK if they don’t. Even more OK if they recognise the importance of each other’s end-points.

      If the monitoring and evaluation takes the form of an appreciative critical inquiry into an initiative, program or project, then it becomes part of that initiative’s own narrative. The process becomes one of action learning, and the outcome a joint venture between the action team and the research team. Any products (reports!) are indicative steps on a journey rather than a definitive outcome.
