Much confusion in the field of KM stems from the use of the very common word “knowledge.” Let us discern how KM gurus use this word, starting with the guru of all management gurus, Peter Drucker:
- “Knowledge is information that changes something or somebody — either by becoming grounds for action, or by making an individual (or an institution) capable of different or more effective action” – Drucker
- “Justified belief that increases an entity’s capacity for effective action” – Nonaka
- “I define knowledge as a capacity to act” – Sveiby
- “Knowledge is information in action” – O’Dell and Grayson
In KM, “knowledge” is capacity for effective action, which includes belief and information useful for effective action. It encompasses whatever helps you do your job well. Thus, information that is not actionable is not knowledge. “Effective action” is the operational, empirical or behavioral indicator of the result of applying knowledge well in a particular context.
The model I showed earlier (where, following the previous post, I expanded “knowledge assets” to “intangibles including knowledge assets”) provides a solid framework for M&E of KM, or better, M&E in the management of intangible assets.
M&E tools in use to track and assess Box 1 (see figure above) include: various intellectual capital accounting methods, knowledge mapping/inventory, social network analysis (SNA), blogs, completely unstructured storytelling/listening, lessons-learned sessions (LLS), corporate knowledge taxonomies, number of uploads to a portal, the World Bank’s KAM, etc. These are tools of “supply-driven KM”.
M&E tools in use that pertain to Box 2 include: key performance indicators (KPI), various productivity measures, activity checklists, number of hits on a webpage, action indicators in a project logframe, indicators in a Balanced Scorecard, Malcolm Baldrige measures of performance, etc. At the activity level, it is easy to attribute the action to whichever specific knowledge assets were used, but attribution gets more difficult at the project level and especially at the program or organizational levels.
M&E tools in use to monitor and evaluate Box 3 include: number of problems solved, satisfaction scores from internal/external customers, value-adding (vs. non-value-adding) activities, impact of training on workplace performance, post-project success stories by project beneficiaries, gross sales of a product/service, statistical correlations between knowledge assets and organizational performance measures, key result areas (KRAs), market value of a corporation, satisfaction surveys among project stakeholders, etc.
Disaggregation and attribution of organizational results to specific factors are often difficult here. However, if the choice and design of a KM initiative are demand-driven (e.g. to solve a specific problem, to enhance a particular capability, to assist in making a particular type of decision or policy, to increase efficiency of a work process, etc.) then it is easy to devise a measure or indicator to check if indeed that demand or objective was met.
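To make the demand-driven point concrete, here is a minimal sketch of such a check, assuming a hypothetical KM initiative whose stated demand was "cut average ticket-resolution time by 20%". The objective itself supplies the indicator and the pass/fail test; all names and figures are invented for illustration.

```python
# Demand-driven M&E: the initiative's stated objective defines the measure.
baseline_hours = 40.0    # average resolution time before the initiative (hypothetical)
current_hours = 30.0     # average resolution time after (hypothetical)
target_reduction = 0.20  # the demand that motivated the initiative: a 20% cut

# Compare observed improvement directly against the stated target
actual_reduction = (baseline_hours - current_hours) / baseline_hours
objective_met = actual_reduction >= target_reduction
print(f"reduction = {actual_reduction:.0%}, objective met: {objective_met}")
```

Because the indicator is derived from the demand itself, no disaggregation across the whole organization is needed to answer "was this objective met?".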