Researchers in Africa are using AI to fill the global health care gap
From rural Kenya to northern Nigeria, artificial intelligence is turning smartphones into medical laboratories
Originally published on Global Voices
A child being given an injection. Image by Kwameghana via Wikimedia Commons (CC BY-SA 4.0 Deed).
By Chukwudi Anthony Okolue
In 2024, a 28-year-old maize farmer in Siaya County, western Kenya, walked into a small public clinic complaining of a fever. Ten years ago, he would have waited days — sometimes weeks — for a malaria, typhoid, or dengue diagnosis. In 2024, he received an answer in ninety seconds. A community health worker took a photo of a thick blood smear with an ordinary smartphone clipped to a USD 50 portable microscope. An artificial intelligence algorithm analyzed the image and suggested he had “Plasmodium falciparum ++” with 98.5 percent accuracy — better than most non-specialist lab technicians in the country. The farmer walked out with the correct antimalarial drug that same afternoon.
That pilot, run by the Kenyan Ministry of Health with technical support from the startup Ubenytics, is now active in more than 420 facilities across eight counties. Early results of the pilot study published in The Lancet Digital Health in March 2025 show a 31 percent reduction in inappropriate antibiotic prescribing and a 19 percent drop in severe malaria complications in intervention areas.
It is important to clarify terminology. While the term artificial intelligence is commonly used in both academic and popular discourse, the systems discussed in this article are more precisely described as machine learning models, ranging from deep-learning image classifiers in the diagnostic examples to large language models (LLMs) in text-based applications. These models do not exhibit general intelligence; rather, they perform rapid statistical pattern recognition and probabilistic prediction based on vast amounts of training data. Where appropriate, this article uses the term LLMs for the language-based systems, while acknowledging that AI remains the umbrella term under which such technologies are often categorized.
Kenya is not an outlier
Across West Africa, Ghanaian startup Chestify AI, founded in 2020, is using artificial intelligence algorithms to support clinicians in interpreting chest X-rays and other imaging in under-resourced health centers. Its tools generate visual heat maps and abnormality scores that help flag conditions such as tuberculosis and pneumonia, accelerating diagnosis in places where radiologists are scarce. In deployments across 25 health facilities, Chestify has reported diagnostic turnaround times reduced by about 40 percent, with imaging reports delivered within 3 hours rather than days.
Previous WHO-supervised validation studies of computer-aided detection for tuberculosis using chest radiographs have demonstrated consistently high performance in low-resource settings, with a pooled sensitivity of around 94.7 percent, often matching or exceeding the average diagnostic accuracy available where specialist radiology capacity is limited.
Rwanda’s drone-delivered blood program now uses routing algorithms, reducing the average delivery time from 42 minutes to 18 minutes in hard-to-reach districts.
These are not future promises; they are documented, peer-reviewed deployments happening today.
The numbers behind the urgency are well known but worth repeating: sub-Saharan Africa has 11 percent of the world’s population and 24 percent of the global disease burden, yet only 3 percent of the world’s health workers and less than 1 percent of global health expenditure. The specialist gap is even starker: Nigeria, for example, has roughly one pathologist per 500,000 people, compared with a global average of one per 25,000.
Artificial intelligence will not magically conjure more doctors, but it is already making an impact in areas with under-resourced medical systems.
It improves the accuracy of non-specialist workers. In Uganda, Makerere University’s AI Health Lab and partners, including the Infectious Diseases Institute and NAAMII, are using AI-guided obstetric ultrasound tools that enable non-specialists, including community health workers, to capture and interpret basic fetal images.
These programs are allowing healthcare workers to catch diseases earlier, when they are cheaper and easier to treat. In 2019, The Lancet published a clinical validation study of a deep learning model in a retinal screening program in Zambia, which reported strong diagnostic performance for referable diabetic retinopathy, vision-threatening diabetic retinopathy, and diabetic macular oedema compared with human graders, enabling earlier detection.
None of this is theoretical. The cost curves are collapsing faster than most policymakers realize. In 2022, training and running a high-performing malaria microscopy model cost roughly USD 180,000. By late 2025, the marginal cost per test in large-scale deployments is under USD 0.30, cheaper than the current rapid diagnostic test in many places once distribution and cold-chain costs are included.
The health implications for Africa
If these gains are to last, three shifts are needed. First, regulation must keep pace. Kenya’s Pharmacy and Poisons Board and Nigeria’s National Agency for Food and Drug Administration and Control have both issued pragmatic guidelines for AI as a medical device in the past 18 months, a quiet but crucial step that many larger economies still struggle with.
Second, local data must remain local where necessary. The most accurate algorithms for sickle-cell disease, cervical cancer pre-screening, or paediatric pneumonia in African children are being trained on African data sets. Founders and governments that insist on data residency and local model ownership are building strategic assets, not just health tools.
Third, financing models must shift from perpetual donor pilots to sustainable integration. Rwanda and Ghana are already bundling AI diagnostics into their national health insurance schemes. When a service is reimbursed at USD 1–2 per test instead of being grant-dependent, scale happens overnight.
Risks and limitations of LLMs
Despite the transformative potential of large language models in healthcare, their deployment is not without significant risks and limitations. One of the most widely discussed concerns is hallucination, where models generate confident but incorrect or fabricated outputs. In clinical or healthcare-adjacent settings, such errors can have serious consequences, including misinterpretation of medical information, inappropriate recommendations, or erosion of trust in clinical decision-making processes.
LLMs are also highly dependent on the quality, scope, and representativeness of their training data. Biases embedded in historical healthcare data, such as underrepresentation of certain populations, can be learned and amplified by these systems, potentially leading to inequitable outcomes. Additionally, LLMs lack true contextual understanding and clinical reasoning; they do not possess intent, awareness, or accountability, and therefore should not be relied upon as autonomous decision-makers.
While large-scale, peer-reviewed evidence of widespread harm is still emerging, the consensus across the literature emphasizes the necessity of human oversight, rigorous validation, and domain-specific safeguards. LLMs are best positioned as decision-support tools rather than replacements for clinical expertise.
Additionally, issues related to data privacy, security, and regulatory compliance remain unresolved in many implementations. Without robust governance frameworks, the integration of LLMs into healthcare systems risks violating patient confidentiality and existing ethical standards.
However, these advances mean that, by 2030, a child born in a village outside Kisumu or Kumasi will not need to travel 200 kilometers (124 miles) to see whether a skin lesion is cancerous or whether a cough is tuberculosis. A trained community health worker, a USD 120 smartphone, and an AI model continuously updated over 5G will provide an answer in minutes, not months.
We are not waiting for some distant singularity. In parts of Africa, the future of healthcare has already started — quietly, incrementally, and at a speed that most global observers still underestimate.