HealthAI's Recommendations to the WHO Science Council on Responsible Technologies in Global Health
Introduction
Advancing responsible digital adoption is crucial for global health equity and resilience. But what will it take for countries to harness technology’s full potential to transform global health in the coming decade?
In January 2025, the World Health Organization's Science Council invited input on this critical question, presented in the draft report “Advancing the Responsible Use of Digital Technologies in Global Health.” The document comes at a pivotal moment of fast-paced technological development, when countries increasingly recognize the urgency of digital transformation for scaling the adoption of AI and other technologies in health.
The draft report highlighted four critical issues that must be addressed to advance the responsible use of digital technologies in global health: multi-stakeholder action, sustained and coordinated investments across sectors, upskilling of health professionals, and building the digital literacy of the general public to foster trust.
Drawing from our work promoting responsible AI in health and strengthening its governance and regulation, HealthAI - The Global Agency for Responsible AI in Health presented recommendations for the WHO Science Council to expand the scope of the report with key considerations for responsible AI in health. In the following sections, we present an overview of our contribution.
Bringing AI to the Center of Digital Transformation
While the Science Council's report provides a comprehensive roadmap for digital transformation, HealthAI believes it should highlight AI’s central role in the current landscape. In addition, we urged the Science Council to emphasize that governance and regulation are not obstacles but valuable mechanisms countries can leverage along their digital transformation path.
A growing number of countries recognize AI’s game-changing potential. In the landmark resolution on artificial intelligence adopted by the UN General Assembly in March 2024, representatives recognized AI as a potential catalyst to help countries “recover lost ground” in digital transformation and in achieving the Sustainable Development Goals. By focusing on the transformative power of AI, the global health community can inspire progress in the digital foundations that enable responsible development and deployment. An AI-centric approach, far from being premature, may provide the impetus needed to address long-standing barriers and accelerate the digitalization of health.
Leveraging Governance and Regulation
However, investments in AI must be coupled with governance structures to ensure they lead to lasting benefits for countries. Building efficient regulatory structures and upskilling regulators is arguably the most effective way to build confidence in the digital transformation of healthcare. The trajectory of groundbreaking industries in the 20th century - such as pharmaceuticals - demonstrates the importance of regulation in earning public confidence and achieving wide-scale adoption.
Strengthening countries’ capacity to adapt their regulatory structures for the age of AI will result in stronger policies, regulations, and institutions. Such improvements in the regulatory process should increase trust and reduce the risks of digital tools and services, contributing to a thriving market for health solutions.
Countries that proactively establish such governance structures will give their local innovations a distinct advantage in the global digital health landscape. This is already the case with pharmaceuticals and medical devices, which may benefit from regulatory reliance when entering new markets, a process in which a regulatory authority in one country considers evaluations performed by another jurisdiction’s authority in its pre-market assessments. Such products may even benefit from regulatory recognition, whereby evidence of conformity produced in another jurisdiction is considered sufficient to meet regulatory requirements in a new market.
Seven Actionable Recommendations
In addition to the points above, HealthAI's submission urges the Science Council to consider adding the following recommendations to the final report:
1. Establish or update digital health standards, best practices, and regulations to include AI-specific requirements: Clear and assertive regulatory requirements for AI would benefit innovators by providing predictability and guidance for safe and responsible development. While current frameworks address traditional digital health technologies under medical device regulations, generative AI and other advanced AI systems introduce unique risks that require specialized assessment methodologies. The Science Council should promote international coordination to develop comprehensive pre-market assessment and post-market surveillance mechanisms designed for AI healthcare applications that fall outside existing “software as a medical device” frameworks. Periodic monitoring and evaluation should cover, at a minimum and without prejudice to further requirements (see the first illustrative sketch after this list of recommendations):
AI model robustness across diverse demographics and clinical settings
Systematic documentation of training data, model weights, and bias assessments
Specification of intended use, operational boundaries, and failure modes
Technical, clinical, and ethical impact assessments
Regular audits and red-teaming with both computer science and clinical experts
2. Build capacity among public sector officials: Through our work with countries, we have seen firsthand that health sector regulators and policymakers would greatly benefit from digital and AI literacy programs. Upskilling public sector officials is not only part of HealthAI's mission but also a crucial factor in creating regulatory environments conducive to innovation and better healthcare through safe and trustworthy technological solutions.
3. Promote international coordination for standardized quality management systems and post-deployment evaluation protocols: AI systems used in healthcare should be assessed for both technical accuracy and real-world clinical impact, including patient outcomes, workflow integration, and care delivery quality. A balanced approach should acknowledge the critical role of clinical usability and effectiveness alongside technical metrics.
4. Establish an international incident reporting framework for AI in healthcare: The high stakes of healthcare delivery and the potential for AI incidents to affect entire populations demand a dedicated reporting framework beyond existing mechanisms for medical devices. Because technical evaluations alone cannot capture all risks to patient safety and public health, HealthAI recommends a multi-stakeholder process to establish robust monitoring and reporting protocols (a sketch of a possible report structure follows this list of recommendations). Such a process should actively engage civil society, healthcare providers, technology developers, patients, and community representatives in detecting and reporting adverse impacts. This collaborative approach would strengthen harm prevention while fostering public trust. In addition, comprehensive redress and accountability mechanisms must be established.
5. Mobilize governments across ministries and agencies: Advancing the right policies to support digital health depends on coordinated action across government institutions. Governments should move beyond traditional regulatory roles to actively orchestrate cross-agency collaboration and public-private partnerships toward health innovation goals. For instance, health authorities should work closely with teams responsible for data governance regulation and infrastructure policies to align their efforts with the digital transformation mission.
6. Create blueprints for institutional coordination: To facilitate the operationalization of the previous recommendation, HealthAI recommends creating blueprints for inter-institutional coordination that governments can adapt to their own contexts. Such blueprints may take the form of guidelines, best practices, and policy recommendations outlining how to streamline the patchwork of overlapping remits across government institutions and establish workflows that allow for faster implementation of digital transformation policies at the national level.
7. Create blueprints for multi-stakeholder coordination: Blueprints for multi-stakeholder coordination will accelerate and improve the adoption of digital health. Beyond overcoming institutional complexity at the government level, advancing digital health requires special attention to interoperable and harmonized practices along the supply chain, from data and infrastructure to pre-market assessment and post-market surveillance. Such an effort requires coordination with the private sector, data holders, civil society representatives, and others.
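To make recommendation 1 more concrete, the sketch below shows one hypothetical way the monitoring evidence listed there could be captured as a structured record. It is a minimal illustration under our own assumptions; the class, field names, and threshold are not drawn from any existing or proposed standard.

```python
# Illustrative sketch only: a hypothetical record of the evidence a regulator
# might request during periodic monitoring of an AI health tool.
# All field names are assumptions for illustration, not a prescribed schema.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MonitoringEvidence:
    model_id: str
    intended_use: str                     # specified clinical purpose
    operational_boundaries: List[str]     # settings and populations in scope
    known_failure_modes: List[str]
    training_data_summary: str            # provenance and coverage of training data
    bias_assessment: Dict[str, float]     # e.g. performance gap per subgroup
    robustness_results: Dict[str, float]  # accuracy per demographic or clinical site
    impact_assessments: List[str]         # technical, clinical, and ethical reviews
    audit_findings: List[str]             # including red-teaming outcomes

def flag_for_review(evidence: MonitoringEvidence, max_gap: float = 0.05) -> bool:
    """Flag the tool for closer review if any subgroup gap exceeds the threshold."""
    return any(gap > max_gap for gap in evidence.bias_assessment.values())
```

A record of this kind could accompany periodic submissions to a regulator, making gaps in robustness, documentation, or fairness immediately visible across updates.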
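Similarly, for recommendation 4, the sketch below outlines one hypothetical structure for a single incident report within such a framework. The fields, severity levels, and example values are illustrative assumptions only, not a proposed reporting standard.

```python
# Illustrative sketch only: one possible shape for an AI-in-healthcare incident
# report. Severity levels and fields are assumptions for illustration.
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import List, Optional

class Severity(Enum):
    NEAR_MISS = "near_miss"
    PATIENT_HARM = "patient_harm"
    POPULATION_IMPACT = "population_impact"

@dataclass
class AIIncidentReport:
    reported_at: datetime
    reporter_role: str             # e.g. clinician, patient, developer, civil society
    system_name: str
    jurisdiction: str
    severity: Severity
    description: str
    affected_groups: List[str]     # populations or subgroups impacted
    immediate_action: Optional[str] = None
    redress_offered: Optional[str] = None  # accountability or remediation step

# Example usage: a clinician filing a near-miss report (all values hypothetical).
report = AIIncidentReport(
    reported_at=datetime.now(timezone.utc),
    reporter_role="clinician",
    system_name="triage-assistant (hypothetical)",
    jurisdiction="XX",
    severity=Severity.NEAR_MISS,
    description="Model under-prioritized a high-risk case; caught by ward staff.",
    affected_groups=["emergency department patients"],
)
```

Keeping the reporter's role and the affected groups as explicit fields reflects the multi-stakeholder surveillance described above, in which clinicians, patients, developers, and civil society can all file reports.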
HealthAI's Commitment to Responsible AI in Health
HealthAI doesn’t just advocate for responsible AI in health. The recommendations above reflect a range of activities we are already carrying out. As an implementation partner to countries worldwide, we help governments and health authorities advance their capabilities to assess and regulate AI tools in the healthcare sector. Our efforts span from technical guidance on evaluation frameworks to policy support for creating regulatory environments that enable innovation.
In addition, HealthAI has convened a global multi-stakeholder community of practice to promote knowledge sharing and coordination at the intersection of AI governance and health. By bringing together regulators, policymakers, technologists, civil society representatives, and other key actors, we aim to level the playing field and ensure that the development and deployment of AI in health benefits from diverse perspectives and expertise.
Looking ahead, HealthAI is committed to creating practical tools and resources to help stakeholders navigate the responsible use of AI in health. These include a regulatory maturity assessment framework for health authorities to benchmark their AI governance capabilities, a global early warning system to detect and flag adverse events related to AI deployment, and a global repository of validated AI tools and best practices. By translating principles into practice, we seek to support the operationalization of responsible AI in real-world health contexts.
Conclusion
HealthAI commends the WHO Science Council’s thought leadership and inclusive approach in developing the report. We appreciate the opportunity to contribute our expertise to this critical endeavor. HealthAI stands ready to support WHO, member states, and the global health community in advancing the responsible digitalization of health, with a particular focus on leveraging AI to improve health outcomes and equity worldwide. Together, we can build a future in which digital transformation is harnessed for the benefit of all.