How do we monitor and evaluate the impact of knowledge management initiatives? This is the central question of the study for Working Group 3: an intriguing and important question worth further research. On this blog we will share our insights and thoughts on this subject with you. Where to start? It seems important to be clear from the outset on what we mean by monitoring and evaluation on the one hand, and knowledge management initiatives on the other. And how do we see the context, the development paradigm we are working in?
Starting with the development paradigm, we see this as a value-based approach that organisations take when they wish to bring about change in society. The context is characterised by the significance of different stakeholders, whether these be donors, the communities that are the intended beneficiaries of our work, the employees of our organisations, or our trustees and board members (all variously known as ‘partners’!). All of these stakeholders have a real and tangible interest in, and influence on, the outcomes of our work. In this sense, despite the growing appreciation in the private sector of the ‘triple bottom line’ and the wider involvement of different stakeholders in public service delivery, the non-profit sector has a distinctive appreciation of a multiple ‘bottom line’, and Anheier and others have articulated the law of ‘non-profit complexity’ to reflect this. The values perspective of the non-profit sector is articulated in different ways, and is sometimes contested. From our perspective we see it as encompassing democratic processes of participation and consultation aimed at achieving positive externalities for society in areas such as poverty, empowerment and HIV/AIDS, which affect particularly the poorest areas and communities of the world.
What is ‘knowledge management’? The literature suggests that this is more than simply an approach to learning. It is an all-encompassing approach to how an organisation (or nation, or community, etc.) handles information, knowledge and indeed wisdom, to advance its objectives – whatever they may be. This covers both explicit and tacit knowledge, and encompasses the use of technology, databases and access to available knowledge through web-based systems, as well as social networks, formal and informal forums for dialogue, communities of practice and other socially based learning opportunities, with structured learning programmes falling somewhere between the technological and the socially based ends of this spectrum.
How do we see ‘monitoring and evaluation’? These are typically viewed as ‘processes’ of ‘measuring’ progress against predetermined goals and objectives, with various intentions: learning from what has happened, indicating success or failure, demonstrating results to funders, informing communities about project processes, and so on. Arising partly from a ‘log-frame’ perspective, many M&E processes have focused on data collection feeding quantitative indicators (‘so many people attended the training workshops, of whom 66% were women’), through which project success or failure is determined. The limitations of this type of M&E process are obvious: counting outputs says little about whether learning actually occurred or behaviour changed. We take the perspective that M&E processes can be much more than this and can contribute much greater value.
If we consider how M&E approaches can be developed to most usefully ‘measure’ progress in implementing KM processes, and treat KM as involving the various components identified above, we can see that a range of different methodologies will be appropriate – depending on where the emphasis of the specific KM approach lies. How you monitor and evaluate the usefulness of communities of practice may differ from how you monitor the usefulness of a database established by a global policy network. Particular challenges exist in evaluating the effectiveness of KM strategies that focus specifically on tacit knowledge: how can we measure it, and even if we could, how could we measure how tacit knowledge is applied? And how can we measure the impact of socially based learning processes that rely on quality of engagement, purposeful dialogue, community support and the like? What is your experience?