A photo taken late on Aug 17, 2020 shows a sign of the World Health Organization (WHO) at its headquarters in Geneva. (PHOTO / AFP)

GENEVA – The World Health Organization (WHO) has called for caution in deploying large language model (LLM) tools powered by artificial intelligence (AI).

In a statement released Tuesday, WHO said it was imperative that the risks of LLMs be examined carefully.

LLMs are being used to improve access to health information, to support clinical decision-making, and to enhance diagnostic capacity in under-resourced settings.

WHO has warned that the caution normally exercised for new technologies is not being consistently applied to LLMs.

WHO noted that precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, and erode trust in AI. This could undermine or delay the potential long-term benefits of such technologies.

WHO has therefore called for rigorous oversight of LLMs to ensure they are used in safe, effective, and ethical ways.

As technology firms work to commercialize LLMs, policy-makers must ensure patient safety and protection, WHO noted.

Clear evidence of benefit must be demonstrated before LLMs are used on a large scale in routine healthcare and medicine, whether by individuals, care providers, or health-system administrators and policy-makers.

WHO's guidance on the ethics and governance of AI for health, released in June 2021, emphasizes the importance of applying ethical principles and appropriate governance when designing, developing, and deploying AI for health.