
Source: Elsevier's Understand the concepts and functions of AI
Coalition for Health AI (CHAI)
The mission of the Coalition for Health AI (CHAI) is to advance the responsible development, deployment, and oversight of AI in healthcare by fostering collaboration across the health sector, including industry, government, academia and patient communities.
At our core, CHAI convenes multi-stakeholder groups from across the healthcare ecosystem to develop consensus-driven responsible-AI content. CHAI is developing applied resources, including best-practice guidance and evaluation frameworks, across a range of work groups.
Topol EJ. How AI Can Promote Accuracy and Empathy in Medicine. 2021.
https://www.youtube.com/watch?v=iclfpf2Gv6Y
Responsible use of AI in evidence SynthEsis (RAISE):
Recommendations and guidance
Abstract
The evidence synthesis community is being inundated with a plethora of AI tools, each promising to streamline the process. However, understanding how and when to use AI, or even how to properly document use of AI, is not straightforward. Technical, ethical and organisational challenges abound, which evidence synthesists need to be aware of before they can make an informed decision regarding the use of AI. The existence of AI does not alone justify its use, and its misuse can not only hamper the evidence synthesis process, it can be detrimental to the endeavour and introduce or exacerbate harms. The RAISE project aims to address these challenges and is writing guidance, currently split into three papers.
The first paper, ‘RAISE 1’, provides tailored recommendations for eight distinct roles in the evidence synthesis ecosystem: evidence synthesists, methodologists, AI tool development teams, organisations that produce evidence syntheses, publishers, funders, users, and trainers of evidence synthesis methods.
RAISE 2 contains guidance on building and evaluating AI evidence synthesis tools, focusing on determining whether an AI tool does what it claims to do to an acceptable standard. It covers how to build and validate AI tools, how to conduct evaluations that contribute to a cumulative evidence base (including which performance metrics to consider), and how to report evaluations.
This paper, ‘RAISE 3’, aims to offer guidance on selecting and using AI evidence synthesis tools. Whilst the field continues to evolve rapidly, we have tried to provide an overview of the current state of AI in evidence synthesis, before providing guidance on how to assess a tool in terms of both its external and internal validity, as well as guidance on key ethical, legal and regulatory considerations.
DOI 10.17605/OSF.IO/FWAUD (accessed 19/4/26 via https://osf.io/cqa82)
Example of misconduct while using AI tools
"Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing.
During the FDA inspection of your drug manufacturing facility, you stated to FDA investigators that you utilized artificial intelligence (AI) agents (b)(4) to help your firm comply with FDA regulations. Specifically, you used AI to create drug product specifications, procedures, and master production or control records to be in compliance with FDA requirements.
If you use AI as an aid in document creation, you must review the AI generated documents to ensure they were accurate and actually compliant with CGMP. Your failure to do so is a violation of 21 CFR 211.22(c). Overreliance on artificial intelligence for your drug manufacturing operations was also documented during the inspection. For example, the FDA investigators found that you had not conducted process validation prior to distribution of your drug products, as required under 21 CFR 211.100, and informed you as such. You replied that you were not aware of the legal requirement, as the AI agent you used (b)(4), never told you it was required."
Using RAG tools responsibly means keeping the human in the loop

Source: make-smarter-decisions-with-the-help-of-ai-augmentation/
Critically Evaluating AI
Currency (timeliness of the information)
- When was the AI model last trained/updated? Can you find out?
- Does the information it provides reflect recent developments?
- Are its responses outdated given rapid changes in AI capabilities?
- Does the AI tool cite recent sources or is the dataset relying on older information?
Relevance (fit for your needs)
- Is AI the right tool for this specific task? Are there more appropriate methods or sources?
- What are the ethical implications of using AI for this purpose?
- Does using AI align with academic integrity policies?
- How well does it address your prompt or the specific context?
- Do you need to ask further questions to focus the response?
Authority (the source of the information)
- Which company/organisation developed the AI tool?
- What are the model's known limitations and capabilities?
- How transparent is the company about its training data, environmental practices and privacy protections?
- What biases might be present in the training data?
- How consistent are the tool’s responses across multiple attempts?
- Is the training data from a variety of sources from around the world? Or does it depend on Western-centric and English-language sources?
Accuracy (reliability and correctness)
- Can the AI tool's outputs be verified through your own knowledge or other sources?
- Does it provide citations or references for factual claims? Do these exist and do they support the claims made?
- Does it acknowledge uncertainty when appropriate?
- How does it handle complex or nuanced topics? Are the responses logical and coherent?
Purpose
- What is the intended use of this AI tool? Is it being used for generation, analysis, or fact-checking?
- What are the commercial interests behind the AI models? How might these interests affect its outputs?
- How is the company collecting/using your data?
- What guardrails has the company put in place? Have they been implemented ethically?
- Does it read like fact or opinion? How can you test its assertions?
- Does the response sound objective and unbiased? Are its references objective and unbiased?
- Does it provide multiple perspectives in answering your question or is the response skewed toward one viewpoint, culture or language?
Clinical Supervision
DEFT-AI: Diagnosis (Discussion and Discourse); Evidence; Feedback; Teaching
AI engagement behaviors should vary moment to moment, adapting to the type of task and its associated risk. The degree of human–AI integration can range from strategic delegation (centaur) to tight collaboration (cyborg). Centaur strategies are appropriate when human judgment must lead, particularly in high-stakes tasks or in situations in which the AI tool has not been validated for the specific intended use. With a centaur mindset, the clinician carefully delegates to AI and always evaluates the output. Cyborg strategies may enhance efficiency for low-risk, creative, or well-validated tasks. Adaptive AI use involves shifting between these modes on the basis of skill, task, and context.
Source: Abdulnour REL, Gin BI, Boscardin CKT. Educational Strategies for Clinical Supervision of Artificial Intelligence Use. N Engl J Med. 2025 Aug 21;393(8):786-97. doi:10.1056/NEJMra2503232.
Ethics and governance of artificial intelligence for health:
WHO guidance Executive summary
Overview
The WHO guidance on Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development, and to support governments in carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.
The report identifies the ethical challenges and risks of using artificial intelligence in health, and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and the communities and individuals whose health will be affected by their use.
Source: Berger RE. Critical Review of “Ethics & Governance of Artificial Intelligence for Health” by the World Health Organization (WHO). JOSHA. 2024;11(1). doi:10.17160/josha.11.1.956.