Promises and risks of AI in global health: Key learnings from RSTMH Annual Meeting 2025

07 Oct 2025

A photo taken over a delegate's shoulder as they read the Annual Meeting 2025 programme.

At the end of September, RSTMH welcomed over 100 delegates from around the world to discuss and explore the future of AI in global health across two days in central London.  

With over 30 speakers delivering keynote talks, case studies and abstract presentations, there was a huge amount of inspiring conversation that continued far beyond the sessions themselves.

The event provided a unique opportunity to gather experts in healthcare, global health and tropical medicine in one room, alongside NGO staff and industry specialists in health AI and technology. The discussion was engaging, and we left the meeting feeling inspired and full of new ideas to carry forward.

The promise and risk of AI in global health 

Many talks highlighted the clear benefits and opportunities for AI to improve how healthcare delivery and research are carried out, while also acknowledging the challenges and risks it poses, particularly around equity, access and algorithmic bias.

Professor Rifat Atun, from Harvard University, outlined how AI could address the growing challenges facing healthcare systems, noting that it is already demonstrating diagnostic accuracy comparable to that of humans, with hospitals such as Tsinghua University's AI-augmented facility leading the way.

However, while uptake is increasing at departmental and institutional levels, AI is not yet being deployed at scale to strengthen whole health systems or improve global readiness, response and resilience to global health crises. Professor Atun concluded that AI needs to deliver ‘value for money and value for many’.

Practical applications across healthcare

A photograph of the Annual Meeting taken from the back of the room, with delegates facing the speakers on stage.

Across the two days of the conference, examples of how AI is being used for good were brought into the spotlight. Speakers highlighted how AI is supporting professionalism, patient care and communication, and making healthcare delivery more efficient.

Already, machine learning is driving advances in diagnostics, surveillance and triage, including early sepsis prediction (with models able to flag patients at high risk of sepsis up to 10 hours in advance), thermal imaging for breast cancer, and low-cost phone-based diagnostics.
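To make the sepsis example a little more concrete, below is a minimal, purely illustrative sketch of this kind of early-warning model: a classifier trained on routine vital signs that outputs a risk score, which can then be thresholded to flag patients for early review. The synthetic features, labels and threshold are assumptions for illustration only, not the models presented at the meeting.

```python
# Purely illustrative early-warning sketch: a classifier over routinely
# collected vital signs that outputs a sepsis risk score. All data below are
# synthetic assumptions, not any model presented at the meeting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: heart rate, respiratory rate, temperature, systolic BP, lactate
n = 2000
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 4, n),     # respiratory rate (breaths/min)
    rng.normal(37.0, 0.8, n), # temperature (deg C)
    rng.normal(120, 18, n),   # systolic blood pressure (mmHg)
    rng.normal(1.5, 0.8, n),  # lactate (mmol/L)
])
# Label: "develops sepsis within the next 10 hours" (synthetic rule plus noise)
risk = 0.04 * (X[:, 0] - 85) + 0.5 * (X[:, 4] - 1.5) - 0.03 * (X[:, 3] - 120)
y = (risk + rng.normal(0, 1, n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUROC on held-out synthetic data:", round(roc_auc_score(y_test, scores), 3))

# Patients above a chosen risk threshold would be flagged for early review
flagged = (scores > 0.5).sum()
print(f"{flagged} of {len(scores)} test patients flagged as high risk")
```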

As highlighted by Professor Sanjay Kinra from the London School of Hygiene & Tropical Medicine, AI is increasingly supporting clinicians through ambient scribe technology, chatbots for patient communication, and triage tools in emergency settings. From diabetic retinopathy screening in Vietnam to malaria forecasting in Africa, examples presented by speakers showed how AI can improve prevention, early diagnosis, and timely response. Dr Bhargavi Rao, from MSF, highlighted the importance of context-driven AI, ensuring solutions arise from field-based questions and respect patient dignity.
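Similarly, malaria forecasting of the kind mentioned above typically rests on simple time-series ideas: predicting upcoming case counts from recent case history and climate covariates such as rainfall. The sketch below, using entirely synthetic monthly data and a plain linear model, is an assumption-laden illustration rather than a description of the systems the speakers presented.

```python
# Illustrative malaria case forecasting on synthetic monthly data: regress next
# month's cases on lagged case counts and lagged rainfall. Entirely synthetic;
# not the forecasting systems presented at the meeting.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
months = np.arange(120)  # ten years of monthly observations

# Synthetic seasonal rainfall, and case counts that lag the rains by ~2 months
rainfall = 100 + 80 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 120)
cases = 500 + 3 * np.roll(rainfall, 2) + rng.normal(0, 40, 120)

# Lagged design matrix: cases(t-1), cases(t-2), rainfall(t-1), rainfall(t-2)
lags = 2
X = np.column_stack([
    cases[lags - 1:-1], cases[lags - 2:-2],
    rainfall[lags - 1:-1], rainfall[lags - 2:-2],
])
y = cases[lags:]

# Train on the first eight years, forecast the final two
split = 96 - lags
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"Mean absolute error over the hold-out period: {mae:.1f} cases/month")
```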

Beyond primary and secondary care settings, R&D offers further opportunities for AI. AI has shown potential in accelerating clinical trials, identifying new uses for existing drugs and analysing large biomarker datasets, but the lack of real-world trials, particularly in low- and middle-income countries (LMICs), remains an area for improvement.

However, various case studies warned that even the best technologies only have impact if treatment, intervention or prevention is accessible and affordable.

Ethics, equity and inclusion must remain central 

AI is not a single tool but a broad, diverse field with different use cases and ethical implications. 

During a panel session on ethics, equity and inclusion, delegates highlighted the risk of replicating Western-centric ethical frameworks around the world, calling instead for context-sensitive approaches that protect dignity, trust and human choice.

As Dr Peiling Yap, Chief Scientist at HealthAI, put it: “AI can be the greatest equaliser, but it can also be the greatest divider of our times.”

Discussions also acknowledged AI’s environmental impact, stressing the importance of balancing innovation with climate and resource considerations. 

How is AI affecting universities, research institutions and scientific publishing? 

Universities are integrating AI across research, teaching, and administration, with opportunities for personalised learning and more efficient data analysis. 

Yet the key question remains: is AI enabling better research that leads to improved health outcomes, or simply more research? The panel agreed that, above all, critical thinking, judgement and human oversight are essential skills that cannot be replaced by AI.

Julia McDonnell, Director of Journals product at Oxford University Press, highlighted how AI tools are increasingly used in academic publishing and peer review, raising both opportunities and ethical challenges around originality, critical thought, and accountability. 

The rise of ‘agentic AI’, systems able to act on behalf of users, signals a new frontier for scholarly communication. “We’re seeing consistently that people are using AI, and that’s not going to change. There is a reality that this technology is here to stay, but what the future looks like is up for debate,” she added.  

Looking ahead: education, partnerships and implementation

Panel on the Impact of AI on Universities and Research Groups. From left to right: Professor John Gyapong, Professor Olaoluwa Pheabian Akinwale, Professor Liam Smeeth, Dr Buddha Basnyat and Dr Sarah Rafferty.

The RSTMH Annual Meeting helped to outline the current landscape of global health and AI and gave everyone the opportunity to imagine what might happen next.  

Experts suggested AI literacy training for healthcare professionals to help bridge conversations between clinicians and computer scientists. Strategic, ethical partnerships were highlighted as a necessity for implementing AI responsibly, particularly in low-resource settings, where the risks of inequity are greatest.

Over the two days, it was repeatedly brought to our attention that AI must be implemented in line with humanitarian principles and used to reverse, not compound, the widening digital divide. It was agreed that evidence and ethics must be built into the development of practice around AI and global health; as one panel member put it, we should be ‘making sure we are shaping the narrative, not being shaped by it’.

A graded approach to implementation, grounded in evidence and tailored to local contexts, is essential to avoid harm in vulnerable communities. 

The role of RSTMH  

The final panel of the conference explored the role RSTMH could play in optimising the use of AI across global health.  

The panel and audience highlighted that one key area of focus for the Society would be providing education and guidance around AI in healthcare, so that more people working in the field are familiar with the latest developments.

Another theme that came out of this session was collaboration. It was suggested that RSTMH members should be involved in discussions with tech companies to ensure the right input at the right time. There is also an opportunity to continue engaging different groups, from researchers and clinicians to charities and industry.

To ensure this conversation is not a one-off discussion but an authentic, ongoing area of focus for RSTMH, we took the Annual Meeting as an opportunity to launch our new Special Collection: Artificial Intelligence in Global Health, featuring papers from around the world exploring AI and healthcare.

We also announced the exciting news that we have agreed terms with Oxford University Press for a new journal on the topic of artificial intelligence in global health, planned for launch in early 2026!

We have come away from the conference bubbling with ideas, energised from hearing different perspectives and inspired by making new connections.