AI in Healthcare

Patients want to know what information an AI model considers

I am excited that my PhD student Adarsa Sivaprasad won an award for a poster she presented at Healtac 2025 about her PhD research. I think this is really nice work, and will describe some of it below. For more information, see Adarsa’s arXiv paper, which will be presented at an AI and Healthcare conference in September.

Adarsa is working on explainable artificial intelligence (XAI), focusing on tools that help end users (not model developers). A lot of the work I see on explaining AI reasoning to end users makes assumptions about what users want to know: typically, that users are mainly interested in understanding how a model works, including why it made the prediction it did and perhaps what changes would alter its outputs (counterfactuals). But is this what real users actually want to have explained?

Adarsa has been trying to understand what users actually want to know, focusing on people using OPIS, a tool that predicts the likelihood of success in IVF: users enter some information about themselves, and the tool tells them how likely they are to have a baby. OPIS is useful both for deciding whether to undergo IVF and for managing expectations. It is deployed and used by many people considering IVF.

In order to understand what kind of explanations users wanted, Adarsa analysed 4 years of comments from OPIS users (users can leave feedback after they use the tool); she also conducted a survey and semi-structured interviews with people who had experience with IVF. The results are summarised in the above-mentioned paper.

Why are features included or ignored?

Adarsa identified many issues where explanations of OPIS were needed, including unclear terminology and difficult-to-understand probabilities. But to me, the most striking finding was that users wanted explanations of model-building decisions, in particular decisions about why features were or were not included in the model. In other words, users could see that the model ignored features (such as obesity, endometriosis, PCOS) which they thought were important, and they wanted to know whether they could still believe its prediction given that these features had been ignored. They also pointed out that some features made no sense in their circumstances (eg, “How many years have you been trying to conceive?” when the user is a woman in a lesbian relationship).

Feature inclusion/exclusion is a modelling decision, and Adarsa identifies several rationales which could be presented in an explanation (a toy sketch of how these might be surfaced follows the list):

  • Data shows the feature does not matter: The feature is ignored because the data shows that the feature has minimal impact on the model’s prediction.
  • Domain knowledge shows that the feature does not matter: Scientific evidence shows that the feature does not make a significant difference.
  • Insufficient data: The feature may matter, but the model builders did not have sufficient high quality training data to reliably model the feature’s impact.
  • Domain shift: The world has changed since the model was built. This includes societal changes (eg, legalisation of same-sex marriage) and changes in scientific knowledge and interventions.

This makes perfect sense to me. People are unique, and any model trained on data may not be able to fully take into consideration unusual factors (or combinations of factors) because it will not have sufficient training data. The world also changes, which impacts model validity. So as a user of an AI model, I want to know (A) has the model properly considered my unusual circumstances and (B) is the model out-of-date; these questions are more important to me than details of how the model made its prediction.

This applies to many areas, not just IVF. For example, if I have a credit card application refused, I want to know what information was considered and what was ignored; certainly I have seen such models ignore quite important information for establishing whether I am creditworthy.

Explaining feature inclusion/exclusion

Adarsa is currently working on a dialogue system which can answer the types of questions that users ask, including questions about feature inclusion/exclusion. This is work in progress, but the basic idea is to use a RAG-based approach where the RAG document set includes the information needed to answer most questions about feature inclusion/exclusion. She should have more information and an evaluation of this approach later in the year.
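Since the details are not published yet, the sketch below shows only the general RAG idea as I understand it, not Adarsa’s actual system: retrieve the model builders’ notes most relevant to the user’s question, then use them to ground an answer. The document contents, function names, and retrieval choice (a simple TF-IDF retriever) are my own invention for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical document set: notes on why each feature was included or excluded.
DOCUMENTS = [
    "Duration of infertility is included because it strongly affects predicted success rates.",
    "BMI is excluded because it was not recorded consistently in the training data.",
    "Endometriosis is excluded because there was insufficient high-quality data to model its impact.",
]


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k notes most similar to the question (TF-IDF cosine similarity)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(DOCUMENTS)
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    return [DOCUMENTS[i] for i in scores.argsort()[::-1][:k]]


def build_prompt(question: str) -> str:
    """Assemble a prompt that grounds the answer in the retrieved notes."""
    context = "\n".join(retrieve(question))
    return (
        "Answer the user's question using only the notes below.\n"
        f"Notes:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


# The prompt would then be passed to an LLM; that call is omitted here.
print(build_prompt("Why doesn't the tool ask about my BMI?"))
```

The key design point is that the RAG document set contains the modelling decisions themselves (why features were included or excluded), so the system can answer the kinds of questions Adarsa found users actually asking, rather than only explaining the model’s internal reasoning.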

Final thoughts

If we are going to successfully explain AI models to end users, we need to understand what they want to know! Most XAI work to date has focused on supporting model developers and debuggers, who want to understand how a model makes decisions. However, end users may be less interested in reasoning processes than in what information a model considers; if so, XAI tools should be able to answer questions about this.
