Global Scientists Underscore Multidisciplinary Collaboration to Foster Fair AI in Healthcare

AI in Healthcare

A team of global scientists led by Duke-NUS Medical School has underscored the importance of multidisciplinary collaboration in pursuing equitable artificial intelligence (AI) applications in healthcare. Their findings, presented in a perspective published in npj Digital Medicine, address the promise and challenges of AI in healthcare and advocate for a more nuanced approach to fairness.

While AI has shown immense potential in providing valuable insights for healthcare, concerns persist regarding biases in AI systems. The researchers assert that achieving a “fair” AI model is not about absolute equality across all demographic subgroups, such as age, gender, and race. Instead, it should focus on achieving equity: recognizing factors like race and gender, and adjusting the AI algorithm and its application to ensure that vulnerable groups receive appropriate care.

Dr. Ning Yilin, a Research Fellow at the Centre for Quantitative Medicine (CQM) at Duke-NUS, explained that patient preferences and prognosis are critical considerations. Equal treatment does not always equate to fair treatment, as specific attributes, like age, play significant roles in treatment decisions and outcomes. The paper highlights the gap between AI fairness research and the specific clinical needs of healthcare. While many fairness metrics exist, choosing appropriate ones for healthcare applications can be challenging because the metrics may conflict with one another. The authors stress the importance of distinguishing meaningful clinical differences from true biases that require correction in the medical context.
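The tension between fairness metrics can be illustrated with a small sketch. The following toy example (not from the paper; all data, group labels, and thresholds are illustrative assumptions) computes two common group-fairness metrics, demographic parity and equal opportunity, on a hypothetical triage cohort where one group has a higher disease prevalence. A perfectly accurate model satisfies one metric while violating the other, showing why clinicians must judge whether an observed difference reflects genuine medical need or harmful bias:

```python
# Toy illustration (not from the paper): two common group-fairness
# metrics can disagree on the same predictions.

def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between groups 0 and 1."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return rate(0) - rate(1)

def equal_opportunity_diff(preds, labels, groups):
    """Difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(pos) / len(pos)
    return tpr(0) - tpr(1)

# Hypothetical cohort: group 1 has higher disease prevalence, so forcing
# equal positive-prediction rates would under-serve it.
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [0, 0, 0, 1, 1, 1, 1, 0]   # true disease status
preds  = [0, 0, 0, 1, 1, 1, 1, 0]   # a perfectly accurate model

print(demographic_parity_diff(preds, groups))          # -0.5: parity is "violated"
print(equal_opportunity_diff(preds, labels, groups))   #  0.0: true-positive rates match
```

Here the accurate model flags more patients in the higher-prevalence group, so demographic parity reports a disparity even though every sick patient in both groups is correctly identified, which is the kind of "meaningful difference versus true bias" judgment the authors argue requires clinical input.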

The research advocates for the active involvement of clinicians in developing fair AI models. These experts can provide valuable context, determine whether observed differences are justified, and guide AI models toward making equitable decisions. Because AI in healthcare is inherently complex, with biological, ethical, and social factors at play, the authors argue that achieving fairness requires collaboration among AI experts, medical professionals, ethicists, and industry stakeholders in both development and real-world deployment.

Associate Professor Daniel Ting, Director of SingHealth’s AI Office and co-author of the paper, stressed the importance of interdisciplinary cooperation to tackle the complexities of implementing AI fairness in clinical settings. Clinical Associate Professor Lionel Cheng Tim-Ee and Professor Marcus Ong, both senior co-authors, emphasized the need for ongoing dialogue and oversight by diverse experts to ensure that AI enhances healthcare while respecting medical ethics and social considerations.

This perspective represents an international collaboration between researchers from various institutions in Singapore, Belgium, and the United States. The authors hope their collective effort will inspire multinational partnerships and advance the development of equitable and unbiased AI in healthcare. Professor Patrick Tan, Senior Vice-Dean for Research at Duke-NUS, expressed the importance of such cross-disciplinary dialogue in advancing fair AI techniques to enhance healthcare.

EDITORIAL TEAM
Al Mahmud Al Mamun leads the TechGolly editorial team. He served as Editor-in-Chief of a world-leading professional research magazine. Rasel Hossain supports the team as Managing Editor. Our team incorporates technologists, researchers, and technology writers, with substantial expertise in Information Technology (IT), Artificial Intelligence (AI), and Embedded Technology.