Is ‘Good Enough’ KM OK?

Ian Fry

Knoco Australia Pty Ltd

Presented to the ACTKM11 Conference, 9th October 2011

Background to the conference

ACTKM, the KM group based in Canberra (Australian Capital Territory), has conducted an annual conference for many years. It attracts KM practitioners to two days of intensive presentations (mostly workshops); with about 50 attendees, most of the KM consultants are represented, as well as KM Managers from major organizations.

My slides for the presentation were therefore little more than on-screen prompts, so I have written up these notes, in which I can also relay some of the discussion.

Background to the Presentation

At ACTKM10 last year, Dr Kate Andrews suggested that, although Nonaka’s work dates from some time ago, there was profit in revisiting it: it still had great relevance, and it remained a good starting point for those new to KM. As a result I revisited Nonaka’s book “Managing Flow”, which I had found helpful in the past, and from it honed a definition of KM (yes – another one!) as

“Moving practical knowledge between people” – because that is what I concentrate on

I'd argue that knowledge management ought not be a *process*. Rather, intuitive, flowing and natural. #actkm11

The advantage of this definition is that it works as an “elevator pitch”. Even IT people who have put a dashboard on a BI application and claimed it as “knowledge management” get the point…

Kate’s “Nonaka” comment appeared on the ACTKM listserv, where it was subsequently misinterpreted by John Maloney (Network Singularity, in the US), who derided the ACTKM community. I took it upon myself to correct his misconceptions, but in the process he also convinced me that we, as KM people, were not hard enough on ourselves.

So the tag for my comments was “Toughen Up”

The Past

Patrick Lambe (Straits Knowledge – Singapore) had covered this issue back in 2007 in an ACTKM paper “Why Knowledge Managers should be sued”. Little had happened as a result.

Justifying KM – either in advance of a project or in retrospect – and assessing whether it was “good enough” is non-trivial. In my presentation I set out some thoughts, but the main aim was to gather, during the session and throughout the rest of the conference, the practical solutions that people had successfully implemented to address the issues I raised.

Later, I directly challenged the attendees to tell me why their KM was better than I had described, and how they would prove it.

(I can release the suspense – very few ideas came back…)

During the main part of the talk I focused on Knowledge Management Quality. At the end I addressed Knowledge Quality as a separate issue.

I used eight main topics:

ROI (Return on Investment)

I felt that we needed to address this up front.

ROI calculations include assets as nebulous as goodwill (Fosters values its brands at many millions of dollars, thereby adding to its asset backing). Share market prices are based more on emotion than on real value. So this measure does have some problems.

Furthermore, ROI is not necessarily a guide to a good or strong company; I quoted Enron and News of the World – both with very strong ROI, but both gone for other, “bigger” reasons.

Nor is ROI an effective measure for all organizations. If the Health system ran on “if ill, then euthanize”, it would have a near-perfect ROI but be a lousy Health system.

Because we are dealing with people (refer back to my definition), not dollars, ROI cannot be applied successfully as a true measure of success. One exception to this is the case study where the organization actually started to sell knowledge.

I suggested that we needed some measure of Organisational Gain for Effort.

But there are other problems too.

a) KM will contribute positively to a solution or situation where the dollar gain is considerably downstream. Isolating the KM contribution is quite difficult.

b) The “Big Bang” theory: often KM pays off in one particular instance whose financial outcomes are so great that they dwarf the cost of KM in the past and well into the future (“we won that big contract”, etc.).

#actKM11 Ian Fry measuring #KM outputs able 2be done relatively objectively- assessing intangible outcomes is hard but where value lies!

Exams and Measures

How do we know we have transferred the knowledge?

Has anybody set a Knowledge Exam?

My point is that every day, thousands of primary school teachers transfer knowledge to pupils who cannot even read and write. And at that level, they are tested.

Discussion kicked off at this stage: I was challenged that all teachers did was teach students how to do tests, and that the current Australian system of standardized testing was questionable to some people.

I reiterated that, for most of us, exams are an accepted method of testing the transfer of knowledge.

Within the TAFE (Technical and Further Education) sector, courses are described as giving the student certain “Competencies”, and there is an emphasis on Competency Based Learning. I asked the conference to consider Knowledge Competencies within a job:

Do you know where to go to find out about….

Who would you ask for….

Later in the conference this seemed to reappear as Capability, but within the KM team, not within the employees in their normal roles.

As a replacement for ROI as an assessment of KM, I suggested that we needed KPIs at the boardroom and management-team level.

Sveiby (1997) and Lambe (2007) both proposed KPIs around KM.

When asked, out of the 50+ attendees only one person had used any KPIs (Sveiby’s), on one project, and that was some time in the past.

To my mind, it rather proved John Maloney’s point.

At this point the tweets and microblogs took off, which prompted Arthur Shelley, Lecturer in KM to postgraduates at RMIT University, to blog on how uncomfortable KM people get whenever they are challenged…

Some other tweets follow:

#actKM11 knowledge and capability development : intertwined in getting the job done & applying the knowledge to get an outcome

@hamishcurry so often we look down at a problem when we should be looking up #actKM11

Assessment

When we are about to cross the road, we assess the speed of an oncoming car; we do not measure it. And that is quite sufficient to decide whether to cross or not.

So too – it is quite OK to assess the impact of KM.

But complex interaction can be measured. Arthur Shelley has developed a system that measures the contribution of each team member to wiki construction for his classes.

One trick I use is to create an IT application just to carry the metrics of the KM exercise. It is something concrete and tangible and a good “target” for discussing success, or otherwise.

We need to be flexible and inventive.

Dr Tim Matthews is Director of Kidney Transplant Australia. Back when I worked in Health, Tim had seen a system in London for recording and analyzing post-transplant patients. We had to prepare a “business case” for the Government DP Board, which at that time was running with the theme that Health was spending too much money with little or no obvious measurable outcome. Tim argued this way:

“If you gave me this system, or a Research Assistant, I would take the system”

And he kept going until he got to 4.5 Research Assistants, at which point he would take the system plus 0.5 of a staff member.

He changed the language; he changed the currency. He moved the argument into an area where nobody could disagree.

Feedback Loops

I presented the standard Nick Milton chart, and attributed the term “Experience Management” or Reflection.

According to my Kaizen colleagues, Australians are really bad at Reflection.

The point was made from the audience that it is not just Australians.

  • Let me reflect on that! MT @helmitch#actKM11 Aust worst country 4reflection. Why don't we invest the collective time to do this & learn?

#actkm11 David Boud as a researcher into reflection

I then talked about Knowledge Audits:

- Do you still have the right knowledge?

- Where will you get the knowledge you now need?

The point I made is that we all need to just do more of all of this.

The final challenge: is your Knowledge Management itself a Learning System?

Simple Stuff – Audit Trails

This was a story from the Country Fire Service (CFS) in South Australia.

Gladstone, in the Mid North, had an explosives factory consisting of a manufacturing plant and storage sheds widely dispersed through bushland. In 2006 there was an explosion and three people were killed. The site had unexploded munitions throughout the bush, including 1 kg blocks of TNT.

First on the scene was the local Gladstone Rural Volunteer Fire Truck. Although well trained, these volunteers had a dynamic of rapid deployment and rapid action; whereas in a HAZMAT situation, the dynamic is to take things as slowly as you can.

As a result of this experience, there was a major revision of training and procedures.

By 2010, when a truck carrying agricultural chemicals caught fire at Bordertown and the local Volunteer Fire Brigade responded, they were able to sustain a full gas-suit attack on the fire and clean-up for over two days.

The Bordertown success was largely attributed to those changes. A KM success for Lessons Learned.

However, there was no “track changes” or audit trail on the changes. Recently we discovered that a team with no “corporate memory” of Gladstone was planning to remove material from the documents, because they did not understand its relevance.

Something as simple as an audit trail is essential.
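A minimal sketch of what that could look like (my own construction; the field names are illustrative, and this is not the CFS's actual document system). The point is simply that each change carries the "why" alongside the "what":

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Change:
    """One entry in a document's audit trail."""
    author: str
    timestamp: datetime
    summary: str
    rationale: str  # the "why" that preserves corporate memory

@dataclass
class Document:
    title: str
    body: str
    history: list = field(default_factory=list)

    def amend(self, author, new_body, summary, rationale):
        # Record the reason alongside the change, so a later team can see
        # why material exists before deciding to remove it.
        self.history.append(Change(author, datetime.now(), summary, rationale))
        self.body = new_body

doc = Document("HAZMAT response procedure", "Deploy rapidly ...")
doc.amend("training review team", "Take things as slowly as you can ...",
          "Revised HAZMAT response dynamic",
          "Lesson learned from the 2006 Gladstone explosion")
print(doc.history[0].rationale)
```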

Experiments

One of the points which Patrick Lambe made in his 2007 presentation was that Medicine developed as a profession, largely through experimentation, but we did little experimentation within KM to see what worked best.

I offered that taking an Agile approach, doing KM in very small chunks, allowed you to back out quickly if something wasn’t working.

In general, the KM I do involves very small populations, and it is impossible to get people to “unlearn” something and go back and try it a different way. Furthermore, the practical constraints on my KM are such that a scientific, robust test / experiment approach is simply not feasible.

Error vs Quality

In the 1990s the US medical service had no wide-scale breast-screening service, and the reason given was that they did not have enough radiologists to read the mammograms. Wray Buntine (from Sydney University) and a team from NASA/JPL who were working on vision systems developed an AI application to read mammograms. They achieved 99.96% accuracy, with the remaining 0.04% being false positives, so the system erred on the conservative side.

The system was not adopted because it was felt that the 0.04% represented too many women being “unnecessarily upset”.

I raised the question of how many women died as a result of that decision.

(For those of you overseas: in Australia, mammogram screening is available free of charge to all women, and for those in high-risk groups there is an annual call-up.)
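To put the rejected 0.04% in perspective, a back-of-envelope calculation (my own arithmetic; the screening population is hypothetical, not from the study):

```python
# My own back-of-envelope arithmetic; the population figure is hypothetical.
screened = 1_000_000
false_positive_rate = 0.0004   # the 0.04% quoted above
upset = screened * false_positive_rate
print(f"{upset:.0f} women 'unnecessarily upset' per million screened")  # 400
```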

To my mind, this is an example where error rates and quality got horribly confused.

I have had some success with asking clients for an acceptable error rate up front, and generally find that they are tolerant, or we have an excellent discussion on expectations.

For example, if you use a search engine and it brings back 10 results (a small worked example follows this list):

- how many can you tolerate being wrong?

- how many can be “motherhood” / “somewhat related”?
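As promised, a small worked example (my own illustration, not from the talk) of turning that agreed tolerance into a concrete check over the top 10 results:

```python
# My own illustration: checking an agreed error rate over the top 10 results.
def errors_in_top_k(results, relevant, k=10):
    """Number of the top-k results that are off target."""
    return sum(1 for r in results[:k] if r not in relevant)

results = [f"doc{i}" for i in range(1, 11)]   # the 10 results returned
relevant = {"doc1", "doc2", "doc4", "doc5", "doc6", "doc8", "doc9"}

max_wrong = 3   # the error rate agreed with the client up front
wrong = errors_in_top_k(results, relevant)
print(f"{wrong} of 10 results are wrong")    # 3 of 10 results are wrong
print("within tolerance:", wrong <= max_wrong)   # within tolerance: True
```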

I then challenged some perspectives.

As an example, with Sentiment Analysis you will be told to expect about 70% accuracy “out of the box”, and if you get that, it is considered “good”. Who said that was “good”?

With blogs, wikis, etc., the common figures for “good” or “expected” are:

- 100 in the target population

- 10 lurkers

- 1 poster / responder

But when I talk to anybody in Marketing, the figures are:

- 10 in the target population

- 3 prospects

- 1 client at some stage

That is because they target their population, and put effort into converting targets to prospects, and prospects to clients.

We do not seem to invest the effort in either targeting (we broadcast) or converting.
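A back-of-the-envelope comparison of the two funnels, using the round figures above (my own arithmetic, not presented at the conference):

```python
# My own arithmetic on the round figures quoted above: KM's broadcast
# funnel versus Marketing's targeted-and-converted funnel.
def end_to_end_conversion(stages):
    """Overall conversion from the top of the funnel to the bottom."""
    return stages[-1] / stages[0]

km_funnel = [100, 10, 1]        # target population -> lurkers -> posters
marketing_funnel = [10, 3, 1]   # target population -> prospects -> clients

print(f"KM:        {end_to_end_conversion(km_funnel):.0%}")         # 1%
print(f"Marketing: {end_to_end_conversion(marketing_funnel):.0%}")  # 10%
```

An order of magnitude of difference, on these round figures, between broadcasting and converting.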

(A subsequent paper on Social Media contrasted adoption rates of sites / tweets and followed up the promotion aspects)

We cannot have KM Governance without KM Quality.

In their new preface, Davenport and Prusak highlight that the way forward for KM is to emulate the development of the Quality movement: from “nice to have”, through methodology, to compliance.

KM Focus

This was a story against me.

I had used my prototype Semantic Processing engine to show where a Subject Matter Expert (SME) had attached inappropriate tags to some content. The argument that automatic tagging is better is supported by:

- the engines can do it just as well as a human

- when there is more than one tagger, you get consistency

- when you alter the taxonomy, you can re-tag the old content

- etc.

You may have come across “Taxonomy Fairy Tales” (Lambe and Moore). Patrick Lambe was originally to attend the conference but in the end could not, though Matthew Moore was there. They state that automatic tagging doesn’t work; and clearly I was disagreeing with them.

But what I am doing with automatic tagging is an AI application – I am removing the human from the equation.

Refer back to my KM definition: “moving practical knowledge between people”.

So a KM solution is to have the automatic tagging advise the SME and collaborate, so that the SME is corrected and the engine is enhanced by the discussion around the mismatches.
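Here is a minimal sketch of such a collaboration loop (my own illustration; the function names are hypothetical, and this is not my prototype's actual interface):

```python
# Hypothetical sketch of an "engine advises the SME" tagging loop: the
# engine proposes tags, the SME supplies theirs, and every mismatch is
# logged so that both the SME and the engine can learn from the
# discussion around it.
mismatch_log = []   # fuel for the SME/engine discussion

def review_tags(engine_tags, sme_tags):
    """Return what each side proposed that the other did not."""
    return set(engine_tags) - set(sme_tags), set(sme_tags) - set(engine_tags)

def tag_document(doc_id, engine_tags, sme_tags):
    engine_only, sme_only = review_tags(engine_tags, sme_tags)
    if engine_only or sme_only:
        # Neither side silently "wins": the mismatch is surfaced for review.
        mismatch_log.append({
            "doc": doc_id,
            "engine_suggested": sorted(engine_only),
            "sme_added": sorted(sme_only),
        })
    return sorted(set(engine_tags) | set(sme_tags))

tags = tag_document("doc-42", ["safety", "hazmat"], ["hazmat", "training"])
print(tags)          # ['hazmat', 'safety', 'training']
print(mismatch_log)  # one entry: what the two sides should discuss
```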

So my message is that you need to understand what your KM is actually doing at every point.

Even I get lost sometimes.

Conclusion

At that point, now close to time because of the built-in discussion, I asked for ideas to overcome these issues. To a large degree, I am still waiting.

Postscript - Metaknowledge

Throughout the presentation I did not address Knowledge Quality. Clearly the term “metaknowledge” (“knowledge about knowledge”) can apply; and we commonly have attributes (the Librarian attributes) of:

- source

- date created

- etc.

even for tacit knowledge.

But there is very little discussion of, or attention to, “knowledge context”. To paraphrase Mark Twain’s comment about the weather:

“Everybody talks about it; but nobody does anything about it”

So a group of us in Adelaide are keen to explore additional attributes (a small sketch in code follows this list) such as:

- what is the validation method

- when was it last validated

- when will it be validated next

- what was the context in which it was created

- what represents safe, or unsafe, re-use

- etc.
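To make the attribute set concrete, here is a minimal sketch (my own illustration; the field names and the sample values are hypothetical) of what such a metaknowledge record might carry:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class KnowledgeRecord:
    """Hypothetical metaknowledge for one piece of knowledge."""
    # The familiar "Librarian" attributes
    source: str
    date_created: date
    # The additional validation / context attributes proposed above
    validation_method: Optional[str] = None   # how the knowledge is checked
    last_validated: Optional[date] = None
    next_validation: Optional[date] = None
    creation_context: str = ""                # the situation it arose from
    safe_reuse: str = ""                      # when re-use is considered safe
    unsafe_reuse: str = ""                    # when re-use is NOT safe

record = KnowledgeRecord(
    source="post-incident review",
    date_created=date(2011, 10, 9),
    validation_method="peer review by two SMEs",
    creation_context="rural HAZMAT response",
    safe_reuse="similar rural chemical incidents",
    unsafe_reuse="urban or industrial-scale incidents",
)
print(record.validation_method)
```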

(To get an understanding of the background to this, see the case study on …)

We would like to hear from anybody interested.