THE FOUNDATIONS OF AI POLICY: A PHILOSOPHICAL CRITIQUE OF UNESCO RECOMMENDATION ON THE ETHICS OF ARTIFICIAL INTELLIGENCE

Authors

  • Ikechukwu Bartholomew Ekemezie, Ph.D. & Anthony Ugochukwu Nwokoye

Keywords

artificial intelligence, AI policy, philosophical foundations, human dignity, ethics of AI, algorithmic governance

Abstract

The rapid proliferation of artificial intelligence systems across societal domains has created an urgent need for robust policy frameworks capable of governing these technologies while preserving fundamental human values. A critical philosophical examination of the foundations underlying contemporary AI policy has therefore become necessary, especially as current frameworks suffer from insufficient ontological grounding and inadequate attention to the metaphysical distinctions between human and artificial agency. Drawing on the moderate realist tradition of Aristotle and Thomas Aquinas, Kantian deontology, and contemporary virtue ethics, this analysis demonstrates that effective AI governance requires the prior resolution of foundational questions concerning human dignity, moral agency, and the limits of algorithmic delegation. An examination of the UNESCO Recommendation on the Ethics of Artificial Intelligence shows it to be viable in addressing core AI challenges, including algorithmic bias, opacity, and accountability gaps. However, ethical principles (particularly human dignity, non-delegable agency, and technological subsidiarity) must serve as the normative bedrock upon which policy mechanisms are constructed, rather than as supplementary considerations appended to primarily technical frameworks. Thus, sustainable AI governance demands philosophical clarity about what distinguishes human beings from artificial systems; policies that fail to acknowledge this distinction risk undermining the very values they purport to protect.

Published

2026-04-01