Algorithmic Bias and Discrimination: Legal Accountability of AI Systems
Author: Subholaxmi Mukherjee
DOI: https://doi.org/10.37082/IJIRMPS.v13.i4.232659
Short DOI: https://doi.org/g9t2vn
Country: India
Abstract: As Artificial Intelligence (AI) systems increasingly influence decision-making in hiring, finance, criminal justice, and public welfare, concerns about algorithmic bias and systemic discrimination have become urgent. This article examines how AI systems, often assumed to be neutral, can replicate and amplify social prejudices embedded in data or design. Anchored in recent international case law—from the U.S., UK, and France—and early Indian experiences with judicial and administrative use of AI, the paper explores the emerging legal and institutional responses to algorithmic discrimination. It also analyses government reports such as India’s 2025 AI Advisory Framework and the creation of the IndiaAI Safety Institute. Building on these developments, a dedicated section synthesizes their legal implications and articulates an analytical framework to assign accountability in AI ecosystems. The article argues that India must adopt a rights-based, transparent, and auditable regulatory regime to ensure fairness in algorithmic governance, bridging the gap between technological advancement and constitutional mandates.
Keywords: Algorithmic discrimination, bias in AI, legal accountability, data protection, AI regulation, fairness, transparency, liability, ethical AI, human rights
Paper Id: 232659
Published On: 2025-07-23
Published In: Volume 13, Issue 4, July-August 2025