AN ETHICAL APPROACH IN DECISION MAKING BETWEEN HUMANS AND INTELLIGENT MACHINES

Authors

  • Uzowulu Onyeka Emmanuel & Prof. Charles Nweke

Abstract

The accelerating advancement of Artificial Intelligence (AI) and intelligent machines presents unprecedented challenges to human-centered ethical decision-making. As these systems increasingly participate in critical domains such as healthcare, law, business, and security, this research investigates case studies in which intelligent machines assist or substitute for human judgment, highlighting both the potential for improved efficiency and the risks of moral displacement, bias, and opacity. The research addresses two questions: How should ethical responsibility be shared or distinguished between humans and intelligent machines? And are there ethical frameworks that can guide decision-making in contexts where human judgment and machine intelligence intersect? By comparing human ethical reasoning with algorithmic logic, the research emphasizes the limitations of machine-led decision-making in capturing moral nuance, empathy, and contextual sensitivity. Finally, the researchers draw on normative theories, particularly teleological (consequence-based) and deontological (duty-based) ethics, to examine how principles of fairness, accountability, autonomy, and the prevention of harm can be applied in hybrid human–machine decision environments.

Published

2025-10-02