ARE TRUTH AND THE INTEGRITY OF JUSTICE ON TRIAL IN THE AGE OF ARTIFICIAL INTELLIGENCE (AI)? RE-ASSESSING DIGITAL EVIDENCE IN THE CONTEXT OF AI-GENERATED EVIDENCE SUCH AS DEEPFAKES
Abstract
The integration of Artificial Intelligence (AI) into legal systems presents profound challenges and opportunities, particularly concerning the admissibility, reliability, and ethical use of AI-generated evidence. This paper explores the evolving intersection of AI technologies and the standards for digital evidence in Nigeria’s legal framework. It also examines how existing laws accommodate, or fail to accommodate, the complexities introduced by AI-generated content. As AI becomes more embedded in surveillance systems, automated decision-making, legal research, and forensic tools, the evidentiary process faces new questions: Can AI-generated content meet the requirements for admissibility under the Nigerian Evidence Act? How do we safeguard against manipulation, hallucination, or opacity in algorithmic systems? What standards or protocols are necessary to authenticate and verify the accuracy of synthetic media or other AI-generated evidence in legal cases? The study employs a doctrinal and comparative methodology, analysing Nigerian statutory law (including the Evidence Act, 2011 (as amended in 2023) and the Nigeria Data Protection Act, 2023), recent judicial trends, and regulatory models from the United States, the European Union, and international human rights instruments. The study also draws on real-world examples and hypothetical scenarios, such as deepfakes, encrypted surveillance footage, and AI-generated court documents, to highlight the technical and legal challenges posed by digital evidence. Central issues addressed include the ‘liar’s dividend’ (where genuine evidence is doubted because fakes have become so plausible), the opacity of black-box algorithms, and the need for transparency, reliability, and expert validation in courtroom contexts. The paper argues that Nigeria’s current legal framework is ill-equipped to handle the rise of AI-generated evidence without targeted reforms.
This work recommends a comprehensive strategy involving the enactment of AI-specific evidentiary legislation, or further amendments to the Evidence Act, and the establishment of verification protocols for contested digital content. Further proposals include judicial training, AI impact assessments for legal technology, and adherence to international data protection standards such as the General Data Protection Regulation (GDPR). Ethical concerns are also addressed through recommendations for updating the Rules of Professional Conduct for legal practitioners who use AI tools. Ultimately, this research offers a forward-looking guide for Nigerian legal institutions, integrating legal clarity, technical safeguards, and ethical accountability to uphold fairness, transparency, and resilience in the era of AI.