Vision: AI personal health assistants

A journalist recently asked me how AI could make a radical impact on health if it were very good, much better than today (he talked about human-equivalent or super-human AI; I won't, because I struggle to define what these terms mean).

I told him that I thought AI personal health assistants could make a huge difference.  I’m thinking of the kind of thing I see in many science fiction stories, where AI assistants (often running on hardware embedded in the user’s body) help the user do all sorts of things.  From a health perspective, the assistants usually have access to medical records and current medical sensor data, and sometimes can do interventions (eg, release drugs into the bloodstream).  They make suggestions, respond to questions and requests, and communicate with clinicians.

Using AI to support patients

I am a strong believer in using AI to support patients. High-level analyses of how to improve health and healthcare systems usually talk about things like focusing on prevention, moving care out of hospitals, reaching deprived/marginalised communities, and encouraging patients to look after themselves better (blog).  Doing these things would improve health outcomes and also reduce the cost of healthcare; both are hugely important in aging societies suffering from increasing amounts of chronic illness.

My group is involved in several projects which use AI to support patients (recent talk), in areas such as:

  • Encouraging and supporting patients to live healthier lives (eg, Balloccu et al 2024).
  • Helping patients manage chronic conditions while living at home (eg, ASICA project).
  • Supporting informed decision making (eg, Sivaprasad and Reiter 2024).
  • Helping patients understand medical data, and helping doctors understand what patients are saying (eg, Sun et al 2024).

Doing these things well would support the above goals. For example, an AI app that helped people eat more sensibly and thereby reduced obesity would:

  • Prevent many cases of type 2 diabetes.
  • Reduce the load on hospitals and (if integrated into the healthcare system) help community-based GPs and health workers support people.
  • Help people in deprived communities, who have limited access to hospitals and doctors (provided the app was accessible to and usable by them!).
  • Encourage people to look after themselves and “take ownership” of their health.

Of course there are a huge number of challenges in building effective AI personal health assistants.  I talked about some of them in the above-mentioned talk, including:

  • Patient diversity: People are of course hugely diverse, and an effective AI assistant must act appropriately for a mother who is stressed by a difficult toddler and struggling to cope, for an adult who has no understanding of numbers and probabilities, for a user who does not trust the AI assistant (sometimes for good reasons), and so on; all of these are real examples from our projects.
  • Long-term usage: To really make a difference, an AI personal health assistant must be used for years and indeed decades.  Unfortunately, digital interventions (including AI) almost always lose people over time; keeping people engaged and using a system for ten years is a huge challenge.
  • Data:  Large, high-quality medical data sets are very hard to get.  Of course there are huge privacy and data protection issues, but we also often see data quality problems (data that is missing, incomplete, or wrong), unrepresentative patients (including poor coverage of people from deprived communities), and inconsistencies between data provided by different sources.
  • LLMs: LLMs are an amazing technology that would really help AI personal health assistants, but (at least in our experience) they usually exhibit unacceptable behaviours, including saying things that are not true or not appropriate.
  • Evaluation: Genuine evaluation of impact and usefulness is very hard.  Test-set performance is of limited value (and will not convince patients, doctors, or regulators to use the tech); we need RCTs which monitor large numbers of people for many years.  We also need to evaluate “soft” outcomes like quality of life, as well as well-defined outcomes such as mortality.
  • Deployment: It is very hard to deploy AI on a large scale in healthcare.  For example, we have had models that outperform average doctors on some types of diagnoses since the *1950s* (blog), but usage of AI models for decision making is very limited.  Part of the problem is lack of confidence and trust, especially if the only evaluation is performance on a test set.  But also, many doctors are, in general, not enthusiastic. As I say in my book:

doctors will probably be receptive to ‘We will use AI to automate paperwork so that you have more time for careful decision-making’. However, they may not be receptive to ‘We will use AI to automate decision-making so that you have more time to carefully complete your paperwork’.

Advanced AI personal health assistants

Going back to the question posed by the journalist, I think advanced AI in a personal health assistant could address the above problems, and “unleash” health assistants so that they could make a big difference to health.

  • Patient diversity: An advanced AI that knows a lot about the patient could adapt its interaction for stressed mothers, people who struggle with numbers, etc.
  • Long-term usage: If advanced AI assistants (with all sorts of functionality, not just health) become as ubiquitous as smartphones, then people will heavily use them over long periods of time.
  • Data: An advanced AI with access to sensors and medical data could contribute to high-quality data sets.
  • LLMs: Advanced AI systems hopefully will not hallucinate or say inappropriate things.
  • Evaluation: Will still be challenging, but an advanced AI assistant could gather data to support clinical trials.
  • Deployment: Will also still be challenging, but it will be easier if society in general accepts and trusts advanced AI.

Of course there are also a huge number of non-AI challenges that need to be addressed!  These include security of highly sensitive personal data, integration into healthcare workflows and pathways (AI assistants should work with clinicians, not replace them), usability for all sorts of people (including elderly people with diminished capabilities), and regulatory approval.  Trust is paramount, and may be difficult to earn for AI assistants produced by companies that have acted unethically in the past.

So getting AI personal health assistants to achieve their potential will not be easy, to put it mildly.  But if it could be done, their impact on health would be enormous and transformative.

Final thoughts

The journalist who asked me about using advanced AI in healthcare seemed surprised that I talked about AI personal health assistants instead of improved clinical decision support (helping doctors make better decisions).  The reason is that I think personal health assistants would have much more impact on health.  Better clinical decision making is of course very desirable, but I don’t think it will transform health or healthcare, and I suspect it will have only limited impact on the key health challenges described above (eg, focusing on prevention instead of treatment).  In contrast, I believe that AI personal health assistants could address these challenges and radically improve health (including in deprived communities), and also reshape healthcare systems so that they can support an increasingly elderly population.

At a personal level, I am planning to retire within the next few years.  One thing that could convince me to postpone retirement would be an exciting opportunity to work on AI personal health assistants (either in a university or commercially); this is something I really believe in.
