Between Ethics and Technology: Why Artificial Intelligence May Fail in Public Health

Artificial intelligence (AI) is often hailed as the cornerstone of the future of public health, promising more precise, accessible, and personalized medicine. However, its deployment raises major ethical questions that, if ignored, can compromise its effectiveness and acceptance. Why can AI, despite its potential, fail to address public health challenges? This article explores the ethical, social, and technological challenges that hinder its success.
AI: A Promise in Tension with Human Realities
AI in public health is expected to automate diagnostics, predict epidemics, optimize treatments, and relieve pressure on healthcare systems. However, its effectiveness largely depends on the quality of the data used to train it. Unfortunately, these data are often incomplete, biased, or unrepresentative of certain populations, leading to erroneous or discriminatory decisions.
Algorithmic Bias and Social Inequalities
AI algorithms are designed to learn from historical data. If these data reflect social, racial, or economic inequalities, AI risks reproducing and even amplifying them. For instance, a CDC study revealed that AI systems in healthcare can perpetuate disparities due to biases in training data. Similarly, research has shown that AI can exacerbate existing health inequities if deployed without a critical assessment of the data used.
Lack of Transparency and Accountability
AI models, particularly deep neural networks, are often “black boxes,” meaning their decisions are not always understandable to humans. This lack of transparency raises questions about accountability in case of errors. If an automated decision causes harm, it can be difficult to determine who is responsible: the developer, the user, or the system itself.
Ethical Challenges Specific to Public Health
Informed Consent and Data Privacy
Using AI in public health involves collecting and analyzing vast amounts of personal, often sensitive, data. This raises concerns about data privacy and individuals’ informed consent. Citizens must be informed about how their data are used and have the opportunity to give or withdraw consent.
Risks of Dehumanizing Care
By automating certain tasks, AI can reduce human interactions in healthcare. This may lead to the dehumanization of patient-provider relationships, which are essential for quality care. Empathy, understanding, and moral support, central to medical practice, risk being neglected in favor of technological efficiency.
Notable Failures of AI in Public Health
Babylon Health: Between Promises and Controversies
Babylon Health, a UK-based digital health company, launched an AI-powered medical chatbot claiming to offer consultations as effective as those of a human doctor. However, studies have questioned these claims, highlighting gaps in clinical evaluation and concerns about patient safety.
Hasty Deployment and Lack of Regulation
The World Health Organization (WHO) has warned that the hasty adoption of untested AI systems could lead to medical errors, harm patients, and erode trust in these technologies. The lack of regulation and rigorous evaluation before deploying AI in public health is a key factor in these failures.
Towards Ethical and Inclusive AI in Public Health
For AI to succeed in public health, it must be developed and deployed ethically and inclusively. This involves:
- Collecting diverse and representative data to reduce biases and ensure equitable decisions.
- Ensuring algorithmic transparency so users can understand how decisions are made.
- Establishing clear accountability to define who is responsible in case of errors or harm.
- Respecting privacy and informed consent to ensure individuals’ data are protected and used ethically.
- Preserving humanity in care by ensuring AI complements, rather than replaces, human interaction in healthcare.
Artificial intelligence offers significant opportunities to improve public health, but its success depends on its ethical and thoughtful integration into healthcare systems. Without particular attention to ethical, social, and technological challenges, AI risks failing to meet its promises and exacerbating existing inequalities. It is essential for policymakers, researchers, and practitioners to collaborate in developing AI solutions that are not only effective but also fair, transparent, and human-centered.