Between Ethics and Technology: Why Artificial Intelligence May Fail in Public Health


Artificial intelligence (AI) is often hailed as the cornerstone of the future of public health, promising more precise, accessible, and personalized medicine. However, its deployment raises major ethical questions that, if ignored, can compromise its effectiveness and acceptance. Why can AI, despite its potential, fail to address public health challenges? This article explores the ethical, social, and technological challenges that hinder its success.

AI: A Promise in Tension with Human Realities

AI in public health is expected to automate diagnostics, predict epidemics, optimize treatments, and ease the burden on healthcare systems. However, its effectiveness depends largely on the quality of the data used to train it. Unfortunately, these data are often incomplete, biased, or unrepresentative of certain populations, leading to erroneous or discriminatory decisions.

Algorithmic Bias and Social Inequalities

AI algorithms are designed to learn from historical data. If these data reflect social, racial, or economic inequalities, AI risks reproducing and even amplifying them. For instance, a CDC study revealed that AI systems in healthcare can perpetuate disparities due to biases in training data. Similarly, research has shown that AI can exacerbate existing health inequities if deployed without a critical assessment of the data used.
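
To make this mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and the scikit-learn library; the group labels, sample sizes, and effect sizes are invented for illustration. It trains a single classifier on data where one group is heavily underrepresented and has a different underlying relationship between feature and outcome, then measures accuracy separately for each group:

```python
# Hypothetical sketch: how sampling bias in training data skews per-group
# error rates. All data are synthetic; groups and effects are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature; the feature-outcome relationship differs by group.
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0.0, 0.5, n) > shift).astype(int)
    return x, y

xa, ya = make_group(5000, 0.0)  # well-represented group
xb, yb = make_group(100, 1.5)   # underrepresented group, shifted distribution

X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])
model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples drawn from each group's own distribution.
for name, shift in [("majority group", 0.0), ("underrepresented group", 1.5)]:
    xt, yt = make_group(2000, shift)
    print(f"{name}: accuracy = {model.score(xt, yt):.3f}")
```

Because the single decision boundary is fit almost entirely to the majority group's data, accuracy for the underrepresented group comes out markedly lower, mirroring in miniature the disparities described above.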

Lack of Transparency and Accountability

AI models, particularly deep neural networks, are often “black boxes,” meaning their decisions are not always understandable to humans. This lack of transparency raises questions about accountability in case of errors. If an automated decision causes harm, it can be difficult to determine who is responsible: the developer, the user, or the system itself.
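
As a rough illustration of how practitioners try to peer into such black boxes, the sketch below applies one simple post-hoc technique, permutation importance, to an opaque model trained on synthetic data. The feature names and data-generating process are invented for this example; it is one of several auditing approaches, not a complete solution to the transparency problem:

```python
# Hypothetical sketch: probing an opaque model with permutation importance.
# Shuffling a feature breaks its link to the outcome; the resulting drop in
# score indicates how much the model relied on it. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
age = rng.normal(50, 12, n)
biomarker = rng.normal(1.0, 0.3, n)
noise = rng.normal(0.0, 1.0, n)  # deliberately irrelevant feature

y = ((0.04 * age + 2.0 * biomarker + rng.normal(0.0, 0.5, n)) > 4.0).astype(int)

X = np.column_stack([age, biomarker, noise])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

base = model.score(X, y)
for i, name in enumerate(["age", "biomarker", "noise"]):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])  # destroy this feature's signal
    print(f"{name}: score drop = {base - model.score(Xp, y):.3f}")
```

Such audits can reveal what a model relies on, but they do not by themselves settle the accountability question the paragraph above raises: knowing which feature drove a harmful decision still leaves open who answers for it.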

Ethical Challenges Specific to Public Health

Informed Consent and Data Privacy

Using AI in public health involves collecting and analyzing vast amounts of personal, often sensitive, data. This raises concerns about data privacy and individuals’ informed consent. Citizens must be informed about how their data are used and have the opportunity to give or withdraw consent.

Risks of Dehumanizing Care

By automating certain tasks, AI can reduce human interactions in healthcare. This may lead to the dehumanization of patient-provider relationships, which are essential for quality care. Empathy, understanding, and moral support, central to medical practice, risk being neglected in favor of technological efficiency.

Notable Failures of AI in Public Health

Babylon Health: Between Promises and Controversies

Babylon Health, a UK-based digital health company, launched an AI-powered medical chatbot claiming to offer consultations as effective as those of a human doctor. However, studies have questioned these claims, highlighting gaps in clinical evaluation and concerns about patient safety.

Hasty Deployment and Lack of Regulation

The World Health Organization (WHO) has warned that the hasty adoption of untested AI systems could lead to medical errors, harm patients, and erode trust in these technologies. The lack of regulation and rigorous evaluation before deploying AI in public health is a key factor in these failures.

Towards Ethical and Inclusive AI in Public Health

For AI to succeed in public health, it must be developed and deployed ethically and inclusively. Drawing on the challenges outlined above, this involves:

- Auditing training data for bias and ensuring that datasets represent all populations served.
- Making algorithmic decisions transparent and establishing clear accountability when errors occur.
- Obtaining informed consent and protecting the privacy of personal health data.
- Preserving the human dimension of care alongside technological efficiency.
- Requiring rigorous clinical evaluation and regulatory oversight before deployment.

Artificial intelligence offers significant opportunities to improve public health, but its success depends on its ethical and thoughtful integration into healthcare systems. Without careful attention to these ethical, social, and technological challenges, AI risks falling short of its promises and exacerbating existing inequalities. It is essential for policymakers, researchers, and practitioners to collaborate in developing AI solutions that are not only effective but also fair, transparent, and human-centered.
