BioScience Trends. 2024;18(6):497-504. (DOI: 10.5582/bst.2024.01342)
Applications of and issues with machine learning in medicine: Bridging the gap with explainable AI
Karako K, Tang W
In recent years, machine learning, and particularly deep learning, has shown remarkable potential in various fields, including medicine. Advanced architectures such as convolutional neural networks and transformers have enabled high-performance prediction on complex problems, making machine learning a valuable tool in medical decision-making. From predicting postoperative complications to assessing disease risk, machine learning has been actively used to analyze patient data and assist healthcare professionals. However, the "black box" problem, wherein the internal workings of machine learning models are opaque and difficult to interpret, poses a significant challenge to medical applications. This lack of transparency can hinder trust and acceptance by clinicians and patients, making the development of explainable AI (XAI) techniques essential. XAI aims to provide both global and local explanations of machine learning models: global explanations describe which factors influence a model's behavior overall, while local explanations reveal how an individual prediction was made. In this article, we explore various applications of machine learning in medicine, describe commonly used algorithms, and discuss XAI as a promising way to enhance the interpretability of these models. By integrating explainability into machine learning, we aim to ensure its ethical and practical application in healthcare, ultimately improving patient outcomes and supporting personalized treatment strategies.
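As a concrete illustration of the two levels of explanation mentioned above, the sketch below computes per-patient (local) and cohort-wide (global) feature attributions for a tree-based classifier using SHAP. The article does not prescribe a specific tool; SHAP, the GradientBoostingClassifier model, and the public breast-cancer dataset are illustrative assumptions only.

```python
# A minimal sketch of local and global XAI explanations with SHAP.
# Library, model, and dataset are illustrative choices, not the
# article's prescribed method.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a simple tree-based model on a public clinical dataset.
data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# Local explanation: additive per-feature contributions (in log-odds)
# to a single patient's prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)
print("Contributions for patient 0 (first 3 features):",
      dict(zip(data.feature_names[:3], shap_values[0, :3])))

# Global explanation: mean absolute contribution of each feature
# across the cohort, i.e., which factors drive the model overall.
global_importance = np.abs(shap_values).mean(axis=0)
top5 = sorted(zip(data.feature_names, global_importance),
              key=lambda pair: -pair[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```

In a clinical setting, the local attributions would let a clinician see why the model flagged a particular patient, while the global ranking supports sanity checks against established medical knowledge.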