Since ChatGPT became a household name worldwide in early 2023, it has been trialled by millions and applied across a variety of industries, including healthcare. As a large language model (LLM) tool, it aims to imitate the understanding, processing, and production of human communication. Recently, the WHO issued an announcement calling for safe and ethical AI research for health [1]. It proposes six core principles to ensure the technology is used in a safe, effective, and ethical manner:
- protect autonomy;
- promote human well-being, human safety, and the public interest;
- ensure transparency, explainability, and intelligibility;
- foster responsibility and accountability;
- ensure inclusiveness and equity;
- promote AI that is responsive and sustainable.
AI research in the medical field has flourished in recent times. A keyword search for 'artificial intelligence' on PubMed returns more than 35,000 articles published in the past year alone, ranging from medical imaging analysis to organ- or disease-specific care such as myocardial infarction or lung cancer. Recently, an article [2] published in Biotechnology and Genetic Engineering Reviews summarized current progress in the AI field, highlighted constraints on the smooth development of medical AI, and discussed its implementation in healthcare from commercial, regulatory, and sociological standpoints.
Machine learning
To start with, machine learning, the fundamental aspect of AI, is a machine's ability to learn automatically from available data, improving its performance with experience by making increasingly accurate predictions. It can be either supervised (trained on a human-labelled dataset) or unsupervised (trained on an unlabelled dataset). It is useful for identifying intricate patterns in massive amounts of data, and for seeing or predicting what our brains could not possibly manage in a short period of time.
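The supervised/unsupervised distinction above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a real medical model: the 2-D points and the labels "healthy"/"disease" are invented, a nearest-centroid rule stands in for supervised learning, and a single k-means assignment step stands in for unsupervised clustering.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# --- Supervised: a human annotator has labelled every training point ---
labelled = {"healthy": [(1.0, 1.2), (0.8, 1.0)],
            "disease": [(4.0, 4.1), (3.9, 4.3)]}
centroids = {label: centroid(pts) for label, pts in labelled.items()}

def classify(point):
    """Nearest-centroid prediction for a new, unlabelled point."""
    return min(centroids, key=lambda label: dist2(point, centroids[label]))

# --- Unsupervised: no labels, just group points by proximity ---
def kmeans_assign(points, seeds):
    """One assignment step of k-means: map each point to its nearest seed."""
    groups = {i: [] for i in range(len(seeds))}
    for p in points:
        nearest = min(range(len(seeds)), key=lambda i: dist2(p, seeds[i]))
        groups[nearest].append(p)
    return groups
```

The supervised model can answer "which label?" for a new point because the training data carried labels; the unsupervised step can only say "which points belong together".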
Disease diagnosis and prediction
Human doctors make diagnoses based on a patient's symptoms, physical examination findings, and investigation results. AI can do the same.
Data from multiple sources can be merged and fed into a prediction model built through machine learning on previous datasets. As an example, PathAI, one of the leading artificial intelligence and machine learning tools in healthcare, helps pathologists make precise diagnoses, minimizes errors in the cancer diagnostic process, and provides a plethora of new methods for customized medical care.
Another example is Alzheimer's disease detection. Human doctors diagnose the condition mainly from clinical symptoms with the help of brain CT. Given proper training on CT images of confirmed Alzheimer's cases, AI models have demonstrated around 90% accuracy in detecting Alzheimer's disease when shown a new brain CT image with an unknown diagnosis.
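A figure like "90% accuracy" comes from comparing a model's predictions against confirmed diagnoses on a held-out test set. The sketch below shows how the standard summary statistics are computed; the true/false positive and negative counts are invented for illustration, not taken from any real study.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard summary statistics for a binary diagnostic test.

    tp/fn: diseased cases correctly flagged / missed
    tn/fp: healthy cases correctly cleared / falsely flagged
    """
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,  # fraction of all calls correct
        "sensitivity": tp / (tp + fn),     # fraction of diseased cases caught
        "specificity": tn / (tn + fp),     # fraction of healthy cases cleared
    }

# Hypothetical test set of 100 CT scans, 50 diseased and 50 healthy.
m = diagnostic_metrics(tp=45, fp=5, tn=45, fn=5)
```

Accuracy alone can mislead when one class is rare, which is why sensitivity and specificity are usually reported alongside it for diagnostic models.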
Prognosis based on pathology
Doctors are frequently asked by patients how long it will take them to recover, or how long they have left. Most doctors find this hard to answer, and getting the numbers wrong is always embarrassing. Health data can be collected through wearable devices, mobile sensors, and similar equipment, then processed by healthcare applications together with patient-provided information on behaviour, emotional status, and dietary patterns. Some of these applications use deep learning algorithms to identify trends in the data (pathology results, imaging findings), improve projections, and provide patients with tailored treatment suggestions based on their prognosis.
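A full deep-learning pipeline is beyond a sketch, but the core idea of "identify trends in monitoring data" can be illustrated with a moving average to smooth sensor noise followed by a least-squares slope to ask whether the signal is drifting up or down. The daily readings below are invented values, not patient data.

```python
def moving_average(values, window):
    """Smooth noisy sensor readings over a sliding window."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def slope(values):
    """Least-squares slope: is the smoothed signal trending up or down?"""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

readings = [98, 97, 99, 101, 102, 104, 103, 106]  # e.g. daily resting heart rate
trend = slope(moving_average(readings, window=3))  # positive => rising trend
```

Real prognostic applications replace these two functions with learned models, but the pipeline shape is the same: denoise the stream, then extract a trend the clinician or patient can act on.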
Prediction of drug properties and interactions
At the preliminary stages of drug discovery, computational models can be used at the micromolecular level to assess chemical collections and to direct the design and optimization of molecules. This saves companies the time and money otherwise spent on in situ pharmacokinetics experimentation; instead, a computer model can predict the best way forward.
For patients on multiple medications, pharmacists are relied upon to perform 'medication reconciliation' to recognize potential side effects caused by interactions or toxicity. Quantitative Structure-Activity Relationship (QSAR) modelling is a technique that establishes quantitative connections between pharmacological action and chemical or structural properties, thus predicting adverse effects and potential interactions and significantly promoting the safe use of medications.
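QSAR in miniature: the simplest form fits a regression between a structural descriptor and measured activity, then predicts the activity of an unseen compound from its descriptor alone. The sketch below uses one hypothetical descriptor (think logP) and invented activity values; real QSAR models use many descriptors and far more sophisticated regressors.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

descriptors = [1.0, 2.0, 3.0, 4.0]   # hypothetical descriptor (e.g. logP)
activities  = [2.1, 4.0, 6.1, 7.9]   # hypothetical measured activities

a, b = fit_line(descriptors, activities)
predicted = a * 2.5 + b              # activity estimate for a new compound
```

The value of the approach is exactly what the paragraph describes: once the structure-activity relationship is fitted, a candidate molecule can be screened for likely activity or toxicity before any wet-lab work.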
Online medical assistance and hospital management
Hospital management can now use AI models to predict length of stay or ICU mortality based on patient features (diagnosis, severity, background, etc.). This helps estimate how many beds will be freed over a given period and helps the city as a whole manage patient flow and admission destinations. Meanwhile, at the grassroots level, AI has long been used to analyse risk for cardiovascular disease and various kinds of cancer. People can use such models to predict their own risk of disease and adjust their lifestyle accordingly.
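One concrete use the paragraph mentions, estimating how many beds will be freed over a given period, reduces to counting patients whose predicted remaining stay falls within the planning horizon. The per-patient predictions below are invented numbers standing in for the output of a length-of-stay model.

```python
def beds_freed(predicted_days_remaining, horizon_days):
    """Count patients predicted to be discharged within the horizon."""
    return sum(1 for d in predicted_days_remaining if d <= horizon_days)

# Hypothetical model output: predicted days until discharge per patient.
remaining = [1, 2, 2, 5, 7, 10]
free_in_3_days = beds_freed(remaining, horizon_days=3)
```

A bed manager would run this over every ward's predictions to plan admissions; the hard part, of course, is the upstream model that produces the per-patient estimates.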
However, it is precisely because of these seemingly 'magical' powers of AI in healthcare that the WHO released a series of concerns [1] waiting to be addressed:
- Biases in the data used for AI training, resulting in misleading or incorrect information that poses risks to health, equity, and inclusiveness;
- LLM-generated responses, particularly healthcare-related ones, that appear plausible yet may contain major errors or be outright incorrect;
- Unconsented use of users' unprotected sensitive data to train LLMs;
- Generation and dissemination of disinformation that is difficult to discern from credible health content; and
- Assurance of patient safety and protection by policy-makers alongside the advancing use of AI and digital health.
So far, what worries the medical community most is the lack of legislation and ethical codes in most jurisdictions on how to use AI ethically in healthcare. The most recent review [3] in this field, published by Karimian et al. in 2022, found that many published articles gave only limited examination to ethical principles in the design or deployment of AI. Some mentioned principles such as fairness, preservation of human autonomy, explicability, and privacy, but very little research addressed how to prevent harm from AI. To build a trustworthy AI system for healthcare, whether for research purposes or for direct patient care, we have to develop a legally binding framework to prevent harm. To do that, we need multidisciplinary cooperation across industries, maximal procedural transparency, and a regulatory body with supervisory power.
Overall, while applauding the magnificent achievements of AI in the healthcare field over the past few years, the community should remain vigilant and call for ethical use. As always, it will take years for legislation to catch up with the advancement of technology, so it is up to frontline researchers and practitioners to draw up ethical codes and guidelines, with the ultimate goal of promoting equitable, affordable, and advanced healthcare for all.
References:
1. WHO calls for safe and ethical AI for health [Internet]. www.who.int. 2023 [cited 2023 May 20]. Available from: https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health
2. Mukherjee J, Sharma R, Dutta P, Bhunia B. Artificial intelligence in healthcare: a mastery. Biotechnology & Genetic Engineering Reviews [Internet]. 2023 Apr 4 [cited 2023 Apr 20];1–50. Available from: https://pubmed.ncbi.nlm.nih.gov/37013913/
3. Karimian G, Petelos E, Evers SMAA. The ethical issues of the application of artificial intelligence in healthcare: a systematic scoping review. AI and Ethics. 2022 Mar 28.