The recent Alzheimer’s Research UK Conference in Liverpool hosted a debate on whether integrating AI into dementia research will significantly improve patient outcomes, and whether its benefits will outweigh its potential risks. It was a fascinating debate, argued well by both sides. Payam Barnaghi and Timothy Rittman argued the affirmative case: AI has its issues, but it is coming regardless, so we had best be ready for a technology that will ultimately help research through its power to simulate scenarios and analyse complex data – and, crucially, to do both better than humans can today. Raquel Iniesta and David Llewellyn countered that AI disempowers clinicians, dehumanises patients, and will entrench inequality in systems owned by large private corporations.
The room swung from 84% in favour to 65% against the use of AI in dementia research – quite a turnaround. But one limitation of the debate was how focused it became on a single use case – AI in patient-clinician interactions, and dementia diagnosis in particular – and on the risks that biased data presents there. While that is an important discussion, the opportunities and dangers of AI go far beyond it.
What I want to do in this blog series, therefore, is outline a diverse series of creative uses of AI that could substantially improve our understanding of dementia, and patient outcomes. I am not blindly optimistic about technology myself, and I am genuinely concerned about the short- and long-term risks these technologies pose. However, AI presents so many clear opportunities for dementia research that I believe it compels us to invest in overcoming those risks, rather than abandoning hope for the technology altogether. So, consider this the “Huge If True” analysis of the case: if we could be confident in using AI, just how should we be using it, and how useful would it be?
Opportunities in Diagnosis
I’ve already written one blog about how individuals might employ AI language models in the day-to-day work of a researcher, and I will write a further blog about the use of AI in fundamental dementia research and drug discovery. Here, then, I am going to discuss the case for using AI in dementia diagnosis and care. Timothy Rittman made the point that AI is strongest when we have multiple pieces and multiple kinds of data that we ourselves don’t fully know how to piece together. This is precisely the case for dementia diagnosis. For a given patient, we will likely have, at minimum, data from batteries of standardised cognitive tests. This alone is complex data, spanning various tests across multiple domains of cognition – and these tests are well known to be relatively blunt, imperfect tools, subject to daily fluctuations.
Add to this self-reported and family/carer-reported changes in behaviour and symptoms, and possibly digital biomarkers from devices like a smart watch, such as changes in sleep, heart rate, gait, step count, and activity levels. Now add in fluid biomarkers from CSF or blood, and neuroimaging like MRI and PET scans. Now put all of this in the context of an individual patient: their genetic risk, their clinical history, comorbidities, lifetime exposures, and mitigating behaviours like diet, exercise, sociality, education level, and medications for other diseases. This is a huge amount of information to be patched together, arriving in natural language, imaging, numerical, and highly data-dense formats. For a given patient, it is also likely that not all of these will be available – a different subset of tests and data to inform the decision for each person. Which of these pieces of data is the most important? Which missing test would add the most confidence to a diagnosis? Already, we can see the opportunity for a system that could interpret multiple complex data formats, considering vast swathes of information – more than a human can analyse in any reasonable amount of time – to make predictions.
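To make that concrete, here is a minimal sketch of the kind of model that can classify patients from mixed, partially missing data. Everything in it is hypothetical – the feature names, the synthetic data, and the toy “disease” rule – but it illustrates how methods like gradient boosting can handle the “different subset of tests for each person” problem natively, making a prediction from whichever measurements a patient actually has.

```python
# A minimal sketch of multi-modal prediction with missing data.
# All features and data are synthetic placeholders, not clinical advice.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical patients: cognitive score, CSF p-tau, hippocampal volume,
# and wearable-derived sleep. NaN marks tests that were never performed.
X = np.column_stack([
    rng.normal(26, 3, n),     # MMSE-like cognitive score
    rng.normal(50, 20, n),    # CSF p-tau (pg/mL)
    rng.normal(3.0, 0.4, n),  # hippocampal volume (cm^3)
    rng.normal(6.5, 1.0, n),  # mean nightly sleep (hours)
])
y = (X[:, 1] > 60) & (X[:, 0] < 25)    # toy "disease" labelling rule
X[rng.random(X.shape) < 0.3] = np.nan  # ~30% of tests missing at random

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Histogram-based gradient boosting handles NaN natively, so each patient
# is classified using whichever subset of tests they actually have.
model = HistGradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

A real system would of course need validated inputs, calibration, and regulatory approval – the point is simply that “incomplete, heterogeneous data” is a problem this class of model is built for.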
But now also put this in the context of dementia itself – a condition driven by a plethora of different diseases with myriad subtypes and atypical variants. Sure, we know Alzheimer’s disease is the most common cause of dementia. But our growing understanding of the heterogeneity of these diseases means that a diagnosis of simply “Alzheimer’s disease”, defined under the ATN framework as amyloid, tau, and neurodegeneration, is less and less meaningful. Fewer than 1 in 10 patients seem to have pure ATN “Alzheimer’s disease” pathology, with many presenting with copathologies like cerebral amyloid angiopathy, Lewy bodies and other alpha-synucleinopathies, and/or TDP-43 pathology such as LATE (limbic-predominant age-related TDP-43 encephalopathy). Beyond these proteinaceous inclusions, other lesions such as atherosclerosis, hippocampal sclerosis, small vessel disease and cerebral microhaemorrhages, iron accumulation, and more besides, are also relatively common.
Not only are there different mixes of pathologies across patients that inform their dementia profile, but these also may manifest in different parts of the brain. Consider atypical variants of Alzheimer’s disease, such as the hippocampal sparing variant, where, as the name suggests, the canonical epicentre of the disease remains intact, and other parts of the brain are instead severely affected by disease, leading to a quite different clinical presentation. Consider also conditions like Posterior Cortical Atrophy, which features Alzheimer’s disease ATN pathology, but primarily affects the occipital lobe, at the back of the head, which is normally relatively spared by Alzheimer’s disease.
The complexity posed by mixed pathologies in dementia and the vast heterogeneity of patients informs not just their clinical presentation, but also their prognosis and treatment plan, in ways we are only beginning to understand. What kind of progression can patients expect from their disease following diagnosis? Which of the existing and emerging treatments might be best suited to their condition?
The core strength of AI is in interpreting this complex data to make these predictions. It can examine larger datasets than are physically possible for a human, identify subtle changes that occur only under certain circumstances and might otherwise be missed, and find patterns that we never could alone. To be useful here, AI doesn’t have to be perfect – it just has to be better than human clinicians. And to be clear, we are presently not very good at accurately diagnosing which broad disease may be driving a given case of dementia, even by our current, relatively blunt standards. One 2019 study found that at least a third of patients had a clinical diagnosis that disagreed with subsequent neuropathological confirmation. This is likely an undercount, given our expanding knowledge of copathologies that may inform the presentation, prognosis, and treatment of dementia, for which our current diagnostic capacity is deeply lacking.
AI will likely not only be more accurate than clinicians at integrating all of this data, but also much faster, freeing up clinicians’ time for patient-facing work. It would also be much better at keeping up to date as we gain more data about how dementia presentation differs across combinations of pathologies, and how dementia may look different in different populations and places. This would help make medicine more personalised, especially as datasets become more diverse, and may mean we can give patients more targeted advice not just on which drugs would be most efficacious, but on which lifestyle changes could be most impactful too.
Opportunities for Early Diagnosis
Beyond diagnostic accuracy and speed, AI will also likely enable earlier diagnosis and much better disease monitoring. More and more digital and other biomarkers are emerging that seem to be predictive of developing different neurodegenerative conditions. These include movement patterns such as general activity levels, gait, and screen-clicking accuracy; health metrics such as sleep, heart rate, exercise, and weight; and even speech data, such as the sound of your voice, the vocabulary you use, and changes therein.
Many of these biomarkers would amass huge amounts of data, impossible for an individual clinician to comb through. Most of the changes involved would also likely be too subtle to be picked up individually, or too noisy to spot amid day-to-day fluctuations. Moreover, sadly, we simply do not have the clinical and GP infrastructure to pay patients enough attention to spot these changes early enough. However, AI could analyse this longitudinal data, triangulate across multiple different measures, pick up the small developments that would be missed in a 15-minute GP visit, and perhaps alert individuals or clinicians that they may be at higher risk of a particular disease. It might even predict, based on which features are changing, what diseases are most likely to be driving them, and which tests would reach the most accurate diagnosis fastest. This would not only save time and money but bring us towards that holy grail of dementia research: early diagnosis. Early diagnosis is likely essential for many of our current therapeutic options to be truly efficacious, and would be invaluable to our understanding of the early, currently relatively inaccessible stages of disease.
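As a toy illustration of that triangulation – with made-up signals, effect sizes, and thresholds – consider scoring several noisy wearable measures against a person’s own baseline and combining them, so that a sustained drift too subtle to see in any one signal becomes visible in the composite:

```python
# A toy sketch of triangulating longitudinal wearable signals against a
# personal baseline. Signals, drift, and thresholds are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2024-01-01", periods=365, freq="D")

# Synthetic year of data with a slow decline beginning around day 200.
drift = -0.02 * np.clip(np.arange(365) - 200, 0, None)
signals = pd.DataFrame({
    "steps":       7000 + 1000 * rng.standard_normal(365) + 500 * drift,
    "sleep_hours": 7.0 + 0.7 * rng.standard_normal(365) + 0.3 * drift,
    "gait_speed":  1.2 + 0.1 * rng.standard_normal(365) + 0.05 * drift,
}, index=days)

# z-score each signal against its own 90-day baseline, then average and
# smooth over a month: individually noisy measures reinforce each other.
baseline = signals.iloc[:90]
z = (signals - baseline.mean()) / baseline.std()
composite = z.mean(axis=1).rolling(30).mean()

# Flag sustained decline that no single noisy signal would reveal alone.
alerts = composite[composite < -1.0]
print("first alert:", alerts.index[0].date() if len(alerts) else "none")
```

A real monitoring system would need clinically validated thresholds and careful handling of confounders (illness, holidays, device changes), but the underlying idea – personal baselines plus triangulation across measures – is exactly what is hard for a time-pressed human and easy for a machine.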
Opportunities for Clinical Trials
Beyond the effects that early, accurate, and fast diagnoses would have for patients and the care system alike, they would also transform the way clinical trials operate in dementia. The current lack of patient stratification in trials means that the same drug, using the same mechanism, is tested across a wide array of patients. These patients may or may not actually have the disease the drug is designed against, may have copathologies that hinder, counteract, or mask the effect of the drug, or may come from a population whose background genetics means that, at best, they don’t respond well to the drug and, at worst, they have an adverse reaction to it.
It is entirely plausible that some, perhaps many, of the treatments that have been or will be tested in clinical trials were ruled out for showing no effect across a large, mixed group of people, but would have succeeded in a much better-targeted trial. Matching patterns of drug mechanisms, efficacies, and reactions to patients, their symptoms, and their underlying diseases is a potentially powerful use of AI, one that could enable more targeted clinical trials with far fewer patients – trials that would therefore be much faster and cheaper to run. Not only would this lower risk for pharmaceutical companies, enabling more drugs to be brought to trial, but it would, ideally, improve our success rate.
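To see why stratification matters statistically, consider this entirely synthetic sketch: a drug that only works for one biomarker-defined subtype looks roughly half as effective in a pooled trial, but clustering patients by their biomarker profiles recovers the true effect.

```python
# An illustrative sketch of trial dilution. All numbers are synthetic;
# the "subtypes" and "biomarkers" are hypothetical stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n = 600

# Two latent subtypes with distinct biomarker profiles (think amyloid-
# driven vs vascular-dominant); the trial population mixes them 50/50.
subtype = rng.integers(0, 2, n)
biomarkers = rng.normal(0, 1, (n, 3)) + subtype[:, None] * 2.5

# Simulated outcome: the drug (vs placebo) only helps subtype 0.
treated = rng.integers(0, 2, n).astype(bool)
outcome = rng.normal(0, 1, n) + np.where(treated & (subtype == 0), 1.0, 0.0)

pooled = outcome[treated].mean() - outcome[~treated].mean()
print(f"pooled effect (diluted): {pooled:.2f}")

# Recover the subtypes from biomarkers alone, then re-estimate per cluster.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(biomarkers)
for k in range(2):
    m = labels == k
    eff = outcome[m & treated].mean() - outcome[m & ~treated].mean()
    print(f"cluster {k} effect: {eff:.2f}")
```

With stratification of this kind, a trial could enrol only the responsive subtype, detecting the same effect with far fewer participants.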
Overcoming Risks – Entrenched Bias
The opportunities for AI in dementia diagnosis, prevention, and treatment are clearly huge. But so are the risks. The most obvious of these is the imperfect data that AI models are trained on. Most of the data that would inform AI algorithms is drawn from white, Anglo/European populations, likely meaning less accurate outcomes for understudied groups such as people of African and Asian descent. Human clinicians are, of course, also, in some sense, trained on this data. The biases in AI simply reflect the biases in our society: biased data in, inequitable answers out. The fear, though, is that institutionalising these biased algorithms – algorithms that work better for some populations than others – would entrench this bias, making it harder to ameliorate, and widening inequality gaps rather than bridging them.
I would actually argue the opposite case here: AI algorithms are probably easier to retrain than human clinicians in the face of new data. As efforts to gather more inclusive data and reach underserved groups gain steam, these gaps should close faster than if we were relying on human clinicians and human-driven healthcare systems to change as our understanding of disease in different populations changes. The imperative of our current biased datasets is not to stop using AI altogether, but to go out and acquire better, more representative data. In the meantime, having a sense of the relative accuracy and confidence of AI-based predictions will be essential, so that this bias can be tracked and taken into account when making decisions – and this is readily doable.
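As a minimal, entirely synthetic sketch of what ‘readily doable’ means in practice: a model’s performance can be reported separately for each demographic group, so that any accuracy gap is measured and monitored rather than hidden inside a single headline number.

```python
# A minimal sketch of auditing model performance per demographic group.
# Groups, features, and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
group = rng.choice(["A", "B"], n, p=[0.85, 0.15])  # B under-represented

# Group B's labels are noisier, mimicking sparser, messier training data.
x = rng.normal(0, 1, (n, 4))
label_noise = np.where(group == "B", 2.0, 0.5)
y = (x[:, 0] + rng.normal(0, label_noise) > 0).astype(int)

model = LogisticRegression().fit(x[:1500], y[:1500])
probs = model.predict_proba(x[1500:])[:, 1]

# Report discrimination (AUC) per group on held-out data, making any
# performance gap between populations explicit and trackable over time.
for g in ["A", "B"]:
    mask = group[1500:] == g
    auc = roc_auc_score(y[1500:][mask], probs[mask])
    print(f"group {g}: n={mask.sum()}, AUC={auc:.2f}")
```

Once the gap is quantified, it can be weighed in clinical decision-making and watched as more representative data arrives.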
Another inequality-driven concern about the use of AI is that its roll-out will be biased towards populations that are financially rich and data rich. AI will work far better for dementia patients in wealthy countries with good healthcare records, where people have wearable devices tracking health data, and where neuroimaging scans and fluid biomarkers are more geographically and financially accessible. Unfortunately, this is going to be true of any dementia intervention. New drugs are largely tested in affluent countries with the necessary clinical infrastructure, and so far, many forms of dementia care have been so expensive as to be accessible only to the relatively rich.
All of this is to say that this is a real problem of global inequality, and a real problem facing dementia research and care – but not one that has stopped us trying to make progress in diagnosis and treatment so far. Even though they remain extremely expensive and hard to implement, and will at first serve only a limited population before moving down cost curves to become more widespread, the field continues to invest time and resources into technologies like monoclonal antibody therapies, gene therapy, and PET and other neuroimaging biomarkers. Even blood-based biomarkers, arguably the most accessible major advance in dementia research, still face barriers, such as access to diagnostic facilities (especially in regional and remote areas), non-representative datasets, and educational inequality. Again, the imperative of inequality is not to deny the advent of fresh opportunities and progress for dementia research, but to try to address the underlying inequality, so that these advances reach more people more quickly.
Overcoming Risks – Disempowerment and Dehumanisation
If anything, there is a major opportunity for AI to make dementia care much more efficient. By reaching diagnoses faster, earlier, and more cheaply, clinicians would have more time and resources to focus on patient needs and interaction; we could cut down the number of tests patients need before a definitive diagnosis can be reached; and patients would be referred to the appropriate specialist sooner, rather than bouncing around an already overstretched hospital and healthcare system. This speaks to another concern about AI: that it would deskill clinicians, making them reliant on an AI black box for their patient care. Some argue, including in the aforementioned debate, that only human clinicians can fully understand and account for the subjective experience of the patient, that relying entirely on AI for diagnosis and care would disempower clinicians, and that this would dehumanise patients.
This supposes that AI would disrupt a currently rosy reality of dementia care, in which we have enough clinicians to deeply attend to the growing number of dementia patients and their individual circumstances and needs, fully understand their clinical backgrounds, histories, and symptoms, and provide ongoing care. This is, of course, a fantasy – not for lack of effort on the part of clinicians. Healthcare systems, even in wealthier countries, are extraordinarily overstretched, without enough care workers to adequately attend to ageing populations with increasing dementia incidence, and with diminishing time per patient. We already have systems that predominantly provide excellent care only to relatively wealthy people living in relatively wealthy areas of relatively wealthy countries. The rest remain on waiting lists, remain bouncing through the system from appointment to appointment, or remain undiagnosed.
An Alzheimer’s Society examination of the issue found that GPs, the first link in the dementia diagnosis chain, often lack sufficient training, empathy, and time in 10-minute appointments. It also found that diagnosis took a long time (years, even), was often inconclusive, and that there was a geographic lottery when it came to accessing specialised services.
Instead of disempowering clinicians, AI could empower them to better serve their patients, and even expand access to underserved populations. Faster and more accurate diagnoses would mean less time wasted on unnecessary appointments spent guessing at underlying diseases, getting answers and appropriate care to patients sooner. Humans would, of course, need to stay in the loop, just as humans currently work alongside all the other diagnostic and monitoring technology introduced into healthcare settings over the years. However, the argument that AI dehumanises patients does not stack up against the reality of a system that already underserves them and cannot take in their full being. In-clinic tests are infrequent and can be hugely flawed measures of cognitive and physical abilities, usually on inappropriate ordinal scales. AI can integrate clinical histories with genetic, symptomatic, biomarker, and many other kinds of information about a patient. Each of these is a highly imperfect measure on its own, but together – combined in a way no human ever physically could – they enable AI to help us see the full extent of a person, in their individuality, far more rapidly and precisely than our current short appointments allow.
Overcoming Risks – Understanding AI Decision Making
The ‘black box’ issue of AI – that we don’t know how it arrives at its answers, and therefore can’t easily check it for errors – remains significant, but not insurmountable. Research into ‘explainable AI’ is ongoing and progressing: there are ways of examining what an AI is ‘paying attention to’ when interpreting data, and ways of selectively excluding parts of the data from its analysis to see which parameters mattered most. Moreover, we already routinely accept the use of technology we don’t fully understand, as long as it provides a clear benefit. Few understand the physics of an MRI machine, the communications technology behind GPS, or, for that matter, the mechanics of a washing machine. Nevertheless, we continue to use these devices because they reliably provide actionable information and a benefit to us. The above concerns about AI are therefore not so much arguments for its abandonment as for ensuring its implementation is appropriately regulated, with sufficient training, ongoing safety and outcome monitoring, and well-thought-through use cases.
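To give a flavour of that second approach – selectively perturbing parts of the data to see what the model relies on – here is a small, synthetic sketch of permutation importance, one standard technique of this kind (not a method specific to any particular clinical system):

```python
# A small sketch of permutation importance: shuffle each input in turn
# and measure how much the model's performance drops. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(0, 1, (n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # features 2, 3 irrelevant

model = RandomForestClassifier(random_state=0).fit(X[:700], y[:700])

# Shuffling an input the model relies on wrecks its accuracy; shuffling
# an irrelevant one changes nothing, revealing what drove the decision.
result = permutation_importance(model, X[700:], y[700:],
                                n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this don’t fully open the black box, but they do let us check that a model is attending to clinically plausible signals rather than artefacts.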
Some so-called AI ‘accelerationists’ argue that all technology has served to benefit and progress society, that all progress has been technologically driven, and that over-regulation of technology merely delays the advantages of the high-tech future. This is clearly overly zealous. New technologies, especially in the healthcare space, need to be implemented with significant public licence and patient buy-in. That takes trust, which takes time, and trust is easily broken: if a laxly regulated algorithm causes harm and a scandal results, implementation of the whole technology can be set back, much as happened with nuclear power. AI roll-out will have to be gradual and heavily regulated, keeping humans in the loop, especially while we don’t fully understand the reasoning behind AI-driven predictions and decisions, and while error rates remain unknown.
Overcoming Risks – The Financial and Environmental Costs
AI, of course, does not come cheap. The term “cloud computing” disguises the reality that these computational systems require massive amounts of physical infrastructure in datacentres, sophisticated computer chips and fabrication processes, and complicated global supply chains. They also need massive amounts of training data. All of this is both monetarily and environmentally expensive: datacentres and chip fabrication are energy- and water-intensive, and the chips themselves are needed at such scale and expense that, for the most part, only extremely large corporations – the likes of Meta, Google, and Microsoft – have the capital to build and operate AI models at scale.
This is not set in stone, however. Financially speaking, the huge demand for chips is already driving more competition in advanced chip manufacture, which will make the technology more accessible. AI algorithms are also getting more efficient, and are likely to get significantly more so, reducing the hardware and energy demanded for the same amount of computation. Shifting to more renewable energy-based grids can also help mitigate the climate concerns. Privacy remains a worry for algorithms owned and operated by large corporations that predominantly profit from collecting data to sell advertisements, among other things. This, however, is simply an argument for more government-funded, publicly owned and operated AI systems – something I personally believe is not only necessary but increasingly achievable, given more efficient and open-source AI models, and even ‘lightweight’ models that require drastically less compute power but still perform extremely well.
Conclusions
Ultimately, new technologies always bring trade-offs: the known and unknown risks of harm weighed against the certain and potential benefits. To paraphrase Siddharthan Chandran, CEO of the UK DRI and also a speaker at the ARUK conference, we are currently trying to understand one of the most complicated sets of diseases known to humanity, affecting the most complicated structure in the known universe, using some of the bluntest tools of any medical discipline. We desperately need better tools. The risks of implementing AI as one of those tools are most pronounced here because, so far, we have been considering the clinical, patient-facing side of dementia research. There are many more opportunities for AI at the level of fundamental science, where it could bring extraordinary benefits with drastically less risk. That, however, will be the topic of a subsequent blog.
For now, it remains important that the opportunities of AI are not dismissed – just as it remains important that the risks it poses are not either. The wild swing of the vote at the ARUK conference debate suggests that many people have not deeply considered these issues before and have not made up their minds. This is a crucial time, therefore, while these technologies and their implementation are still nascent, to carefully consider where, how, and whether we should be using AI in patient-facing applications. In the end, AI poses a question: are we willing to put in the work to overcome the dangers it poses, to achieve the huge benefits? Are we happy to accept the inequality and inaccessibility of our current systems, or are we capable of overcoming them? Do we think the current state of dementia care and the accuracy of diagnosis are sufficient, or can AI help improve them? Do we want to rely on AI run by large private corporations, or would we rather have systems owned by, operated by, and accountable to the public? Should our fears of a dystopian AI future drive us to outlaw the technology from healthcare, or should our aspiration towards a future of healthcare that benefits from AI drive us to overcome the fundamental inequalities and technical challenges behind its risks?
These questions are live, and their answers can be shaped – through paying attention, public discourse, government and institutional regulation, and establishing new norms around the safe and considered implementation of AI. I’m glad organisations like ARUK are hosting public discussions around these topics, and I hope people continue to engage in active debate about this. I personally think these technologies are inevitable: they already exist, they’re already being implemented, and they already surround and imbue many of the technologies we use in healthcare today. Moreover, the current state of dementia care and diagnosis is so poor compared with what it could be, and the opportunity afforded by AI so great, that the moral imperative seems to be in favour of its use. We shouldn’t be naïve about the risks of AI – but nor should they make us dismiss the technology’s potential altogether. The moral imperative is also, therefore, on us to make sure it is implemented using data, in a manner, and into a world that is safer, more equitable, more responsible, and better regulated than at present. The uses and benefits of AI in dementia research could be “Huge if True” – we just have to make them so.
This blog largely focused on the risks and opportunities of AI in clinical, patient facing dementia research and care, as was largely discussed in the ARUK debate. However, what hasn’t really been addressed is the use of AI for fundamental research – also a crucial part of the equation. You’ll be able to find a second blog on that topic, covering everything from data analysis and drug discovery to enhanced collaborations shortly! In the meantime, if you’re interested, you can also see my reflections on AI for personal use and day-to-day research activities here.
PS: Huge If True is a YouTube channel run by independent science and tech journalist Cleo Abram. I have no affiliation with the channel, other than being inspired by the optimistic attitude towards science and technology that the channel takes, exploring what technological innovations and advances can do for us if we get everything right, and figuring out what we have to do in order to actually do this.
Author
Ajantha Abey is a PhD student at the Kavli Institute at the University of Oxford. He is interested in the cellular mechanisms of Alzheimer’s, Parkinson’s, and other diseases of the ageing brain. He previously explored neuropathology in dogs with dementia and potential stem cell replacement therapies. He now uses induced pluripotent stem cell derived neurons to model selective neuronal vulnerability: the phenomenon whereby some cells die while others remain resilient to neurodegenerative diseases.