A few weeks ago I wrote a blog post on "Amateurs focus on models; professionals focus on data". I realised afterwards that change management can be even more important than data in building real-world AI systems, so I thought I would discuss it here. I focus on AI products used by companies or organisations, not on consumer products.
Change management in this context is about getting organisations and individuals within organisations to use a new tool. There is a long history of AI tools which were effective and useful, but did not get seriously used; or were used at one site for a few years and then faded, instead of spreading to the rest of the world. This is usually because people or organisations resist new tools and processes.
For example, in early January there was a news item about an AI system from Google which was more accurate than doctors in diagnosing breast cancer from mammograms. This is a great technical and intellectual achievement! However, getting the health service to use such tech is an enormous challenge. I know this because I have been hearing about AI systems out-performing human doctors in diagnosis since I started my PhD in the mid 1980s, and none of these systems ever entered regular use.
Indeed, Paul Meehl’s 1954 book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence showed many cases where statistical techniques (algorithms in 2020 terminology) do better than people. Daniel Kahneman discussed this in his book Thinking Fast and Slow (chaps 21 and 22), and says that across 200 studies (mostly done after Meehl’s book was published), statistical algorithms were superior in 60% of cases. Kahneman relates this to deficiencies in how people reason about numbers and probabilities (which is his speciality); if you consider the known “bugs” in human reasoning, it’s not surprising that algorithms can beat humans in many circumstances. But once again, real-world usage of these simple (by modern standards) statistical algorithms remains very limited; Kahneman discusses this in a section called “The Hostility to Algorithms”.
So why don’t people use AI systems which are shown to be effective? Well, here we get into the murky waters of people’s motivations and organisational processes. From a people perspective, it’s hardly surprising that people resist using AI tools if they fear that they will lose their jobs, or that their jobs will be “deskilled” (loan officers at banks once made important decisions, but now they mainly do data entry for algorithms). Management may claim that this is not the intention, but a lot of employees do not believe or trust their managers.
People also don’t like being “shown up” by an AI system. I remember one AI patient-information system which worked but which annoyed some doctors because it sometimes made patients aware of doctors’ mistakes (mistakes due to carelessness or sloppiness, not due to making the wrong call in a difficult decision). Revealing mistakes probably was good from the perspective of medical outcomes, but a lot of doctors and medical organisations hate admitting mistakes (see the book Black Box Thinking).
And even if people are not worried about their job suffering or being embarrassed, change may simply be too much hassle, especially if the benefits are limited. For example, I know of another system which helped GPs diagnose and treat patients with a certain illness. The system worked and made a difference, but didn’t radically change outcomes. It also required training, and it only helped a small number of patients. So GPs didn’t use it, because it wasn’t worth their while to spend a lot of time learning how to use a tool which only helped a handful of their patients, especially when the impact on those patients was incremental rather than life-changing.
Such behaviour is not restricted to medicine, by the way; I’ve seen it in lots of other industries as well! In fact, many years ago I worked with someone from the oil industry who had great examples and “war stories” of this kind of thing.
So if we want people to use AI in their jobs (again, the consumer context is different), we will have a lot more success if people don’t see the AI as a threat to their job or reputation, or as more hassle than it’s worth.
What do we do about this?
There is a lot written about change management in the business and IT systems literature. Much of it is about people issues, and includes advice such as getting solid support from top management, understanding people’s fears, and recruiting champions. I won’t try to expand on this here.
In an AI context, the task and use case are also important. Many years ago, Rob Milne (a pioneer of commercial AI in Scotland, who has sadly since passed away) told me that his strategy was to focus on applications which were peripheral, and where full automation was possible. By peripheral, Rob meant tasks that people regarded as annoying secondary things they needed to do, not tasks which were core to their professional identity. For example, in medicine, diagnosis is a core task and writing reports is a peripheral task. So if we try to automate diagnosis, this could be seen as a job or deskilling threat, since diagnosis is the core of what doctors do; many doctors also enjoy diagnosis. Whereas if we automate report writing, this is not seen as a job threat, and most doctors do not enjoy report writing anyway, so they would welcome help here which frees up time for interesting “core” tasks such as diagnosis.
Rob’s second strategy was to focus on areas where 100% automation was possible. If the user has to interact with the AI system or vet/approve its output, then this requires training and new workflows, and in general is likely to lead to hassle. Hassle is minimised if the AI system does its thing with minimal human involvement.
When I was a postdoc in the early 1990s, I worked with someone from Accenture, who told me that developing medical AI technology was the easy bit; what was hard was getting this technology deployed. This was true in the 1950s when Meehl wrote his book, it was true in the 1990s, and it is still true in 2020. So professionals who want to see AI used for real (in all contexts, not just medicine) need to think about change management.