Geneva: The World Health Organization (WHO) has called for caution in deploying artificial intelligence (AI)-generated large language model tools (LLMs).
In a statement, the WHO said it was imperative for the risks of LLMs to be carefully examined, reports Xinhua news agency.
LLMs are used to improve access to health information, as decision-support tools, and to enhance diagnostic capacity in under-resourced settings.
The WHO has warned that the caution normally exercised for new technologies is not being consistently applied to LLMs.
The UN body notes that precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, and erode trust in AI. This could undermine or delay the potential long-term benefits of such technologies.
Therefore, it has called for rigorous oversight of LLMs, to ensure they are used in safe, effective, and ethical ways.
As technology firms work to commercialize LLMs, policy-makers must ensure patient safety and protection, the WHO noted.
Clear evidence of the benefits of LLMs must be demonstrated before they are used on a large scale in routine healthcare and medicine — whether by individuals, care providers, or health system administrators and policy-makers.
The WHO’s guidance on the ethics and governance of AI for health, released in June 2021, emphasizes the importance of applying ethical principles and appropriate governance when designing, developing, and deploying AI for health.
(IANS)