
On an unremarkable day at our busy community health center, I was having my routine battle with our phone-based language interpreting service. The patient I was with spoke Chuukese, a language spoken by approximately 50,000 people from a few islands in Micronesia. The interpreting service our organization contracted with had, at any given moment, at most two interpreters who spoke the language. Wait times could be long, and for this patient, I had been on hold for over an hour. I gave up and tried my best with a combination of Google Translate and very basic English.
In just a few years, scenarios like these may be a distant memory. Large language models such as OpenAI's GPT-4o have developed the capability to provide real-time, accurate language translation (1). This technology is astounding. It offers a pathway for non–English-speaking patients to better understand their care plans, make changes to their medications, and communicate with their physicians. By allowing patients to communicate effortlessly in their native language, it could also grant a greater sense of autonomy and foster confidence in navigating the health care system. On the physician side, artificial intelligence (AI) offers potentially huge gains in reducing administrative burden, augmenting clinical decision making, and improving overall clinical efficiency. These advancements offer a glimpse into how AI could reshape health care for underserved populations. However, as highlighted in a recent policy paper by the American College of Physicians (ACP), realizing this potential requires AI models to be developed with inclusivity in mind, ensuring that they are trained on diverse and representative data sets so that they do not paradoxically deepen health care disparities (2).
ACP’s policy paper emphasizes the importance of developing AI models using data from the diverse populations they aim to serve, including underrepresented, socially marginalized, and disadvantaged patients (2). AI models, including language translation tools, are only as effective as the data they are trained on. In the case of the Chuukese-speaking patient, if the model were not trained on sufficiently robust linguistic and cultural data, important health information could be mistranslated, worsening care. Another troubling consideration is that if physicians use insufficiently trained AI models for clinical decision making, implicit bias and harm could be automated. For example, a study referenced in the policy paper found that certain dermatology algorithms perform worse on darker skin tones than lighter ones (3). If similarly flawed algorithms were embedded into an AI model used by physicians nationwide, AI could “automate inequality” rather than reduce it, as Israni and Verghese warn (4).
As a medical student, I find it astonishing that I can engage with AI technology as a regular part of my day-to-day. I had thought we were still a long way off from the Jetsons-style integration of AI into everyday tasks. But it is here now, advancing rapidly, with new models emerging seemingly every day. Over the course of my career, I expect it to continue to evolve as a tool, shaping and changing the way we practice medicine. As with all technology, we must learn to be good stewards of its use. We are at a pivotal moment where the choices made in developing, implementing, and regulating AI will shape the future for decades to come. It is not enough to merely know when to question AI and when to trust it. As physicians, we must actively participate in its evolution. Advocating for equity, transparency, and a patient-centered approach gives us the opportunity to turn AI into a powerful ally that enhances care for all patients, regardless of background or language, rather than a barrier. Ultimately, the future of medicine rests not just in the hands of technology, but in the wisdom and care of the physicians who wield it. That part of medicine will never change.
References
- Duffy C. OpenAI unveils newest AI model, GPT-4o. CNN. 13 May 2024. Accessed at www.cnn.com/2024/05/13/tech/openai-altman-new-ai-model-gpt-4o/index.html on 25 March 2025.
- Daneshvar N, Pandita D, Erickson S, et al; ACP Medical Informatics Committee and the Ethics, Professionalism and Human Rights Committee. Artificial intelligence in the provision of health care: an American College of Physicians policy position paper. Ann Intern Med. 2024;177:964-967. [PMID: 38830215] doi:10.7326/M24-0146
- Daneshjou R, Vodrahalli K, Novoa RA, et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv. 2022;8:eabq6147. [PMID: 35960806] doi:10.1126/sciadv.abq6147
- Israni ST, Verghese A. Humanizing artificial intelligence. JAMA. 2019;321:29-30. [PMID: 30535297] doi:10.1001/jama.2018.19398