Can AI improve health for everyone? We want to fund research to find out

Wellcome has commissioned a new report that focuses on the ethical, social and political challenges of using artificial intelligence (AI) in health. And to respond to the issues this raises, we’re launching AI-themed Seed Awards. Dan O’Connor, head of Wellcome’s Humanities and Social Science team, explains.

The use of AI in health and research raises some huge questions about accountability and social justice.
Credit: Future Advocacy

Artificial intelligence is probably going to change the world, and I for one welcome that. But to make sure these impending changes benefit everyone, we need research into the ethical, social and political impacts the changes are going to have.

Although the term ‘artificial intelligence’ (AI) was coined in 1955, it’s only in the past few years that it has started to become a reality. Massive increases in computing power and the internet’s generation of vast amounts of data have made possible machines that can use reasoning and learning to perform increasingly complex tasks ever more effectively and cheaply.

The dawning reality of AI has generated considerable excitement, but also considerable fear. Will we live in a leisure-filled, labour-free utopia, or be made redundant by our new robot overlords?

How AI is changing health and research

In Wellcome’s areas of interest, health and research, AI is an increasingly important factor. Broadly speaking, there are five ways in which AI is changing health and research:

  1. Process optimisation: using AI to increase the efficacy and efficiency of procurement, logistics and planning within health systems large and small.
  2. Preclinical research: using AI in applications such as drug discovery and genomic science, where big datasets require at-scale analysis.
  3. Clinical pathways: using AI for clinical work such as algorithmic diagnosis, prognosis and screening.
  4. Patient interactions: using AI to interact with patients and other service users, for example by delivering online therapies or providing information.
  5. Public health applications: using AI to identify emerging epidemics and to monitor and predict the spread of disease.

All of these areas are in their infancy, and all carry the promise that AI will be able to do these things better – and more cheaply – than humans alone.

This is potentially very exciting, but it also raises some pretty huge questions about social justice, transparency, accountability, fairness and resource distribution.

  • Who benefits from the cost savings of AI?
  • How are AI decisions made?
  • Who programmed the AI and with what data?
  • If AI makes a mistake, who is responsible?
  • Can we be sure AI is not prejudiced against particular populations?
  • Will we have the option not to use AI?
  • Can AI decisions be disputed?

If these questions remain unanswered while AI applications march ever onward, it will only be a matter of time before health and biomedicine have their own ‘Cambridge Analytica Moment’.

Worse still, if these questions are not answered to public satisfaction, the potential benefits of AI run the risk of being lost in another collapse of public trust.

Ten areas in need of new research

Wellcome has partnered with think-tank Future Advocacy to produce a report: Ethical, social and political challenges of artificial intelligence in health [PDF 5.05MB].

The report outlines in detail the major ethical, social and political questions that need answers if the benefits of AI are to be shared as widely and as equitably as possible.

In particular, the report identifies ten key areas in pressing need of new research.

  • What effect will AI have on human relationships in health and care?
  • How is the use, storage and sharing of medical data impacted by AI?
  • What are the implications of algorithmic transparency and explainability for health?
  • Will these technologies help eradicate or exacerbate existing health inequalities?
  • What is the difference between an algorithmic decision and a human decision?
  • What do patients and members of the public want from AI and related technologies?
  • How should these technologies be regulated?
  • These technologies could give us access to new information, but should we always use it?
  • What makes algorithms, and the entities that create them, trustworthy?
  • What are the implications of collaboration between public and private sector organisations in the development of these tools?

New AI-themed Seed Awards

To help answer these questions, the Humanities and Social Science department at Wellcome is launching a themed Seed Awards call.

We encourage researchers from any discipline (or combination of disciplines) in the humanities or social sciences to apply for funding. Around £1 million is available in total.

  • Applications open on 1 May and the deadline is 26 June.
  • The first awards to successful applicants will be made in September 2018.

If you want to apply, read the report carefully first. We’re looking for answers to – or ways of answering – these issues, not new ways of describing them.
