The MANAV Moment For Indian Law: AI, Accountability, and the Future of Justice

Why courts and practitioners alike are insisting that technology must serve human judgment, not displace it

Update: 2026-02-25 04:30 GMT

The Indian legal community has moved past the first, shallow question about artificial intelligence: will it replace lawyers? The deeper question now being asked in chambers, court corridors, and classrooms is more uncomfortable and more useful. How does a profession built on duty, verification, and moral reasoning adopt a tool that can accelerate work, yet confidently invent facts?

In that recalibration, one theme has begun to dominate serious legal conversation: that a human-centric approach is structural, and that it is fundamentally about accountability. The question is also about who is responsible when technology produces plausible-sounding errors, and who is answerable when speed begins to substitute for care.

The judiciary’s interventions have been among the clearest signposts. Justice Surya Kant has repeatedly underlined that technology can be an ally, but “justice will always remain a profoundly human enterprise,” and that AI may assist the system but cannot replace the lawyer or the judge. Former Chief Justice D Y Chandrachud has spoken in the same register, stressing that AI can enhance efficiency but cannot replace human judgment, while also warning that technology is not neutral and can reflect social prejudices that already exist.

These cautions are grounded in what courts are already seeing. Justice B V Nagarathna's reference to a fictitious "Mercy vs Mankind" citation has become shorthand for a real and emerging problem: lawyers outsourcing verification to machines, and then forcing judges to spend time checking authorities that do not exist. AI can draft, but it cannot take responsibility. That remains human, and it must remain human.

Within this climate, Hitesh Jain's recent remarks sit less like a standalone view and more like a professional articulation of where the mainstream is heading. His central argument is that India's AI governance should be outcome-based and risk-proportionate, focused on tangible harms rather than the underlying mathematics. His framing turns the typical regulatory posture on its head. The question is not "what could go wrong?" but "what must we design so things go right?" In legal terms, it is an invitation to regulate effects, not ideas.

That approach also aligns with the MANAV vision articulated by the Prime Minister, which places human welfare, dignity, and ethical responsibility at the centre of technological growth. The overlap becomes visible when one traces the common emphasis on guardrails, proportionality, and human oversight. The MANAV lens says AI should expand human capability without shrinking human agency. Jain's lens says the same thing in professional language: use AI for scale and routine, but reserve judgment, strategy, ethics, and empathy for the lawyer, because those are not add-ons, they are the work.

Jain's preference for agile governance also finds an echo in judicial thinking about how courts should adopt AI. Justice Sanjay Karol has framed the challenge as not whether to use AI, but how to use it in line with constitutional values like fairness, transparency, and accountability. These references reinforce a key legal insight: in a constitutional democracy, efficiency is not the end goal; it is legitimate only when it serves due process.

Even where jurists have sounded sceptical, the scepticism is not anti-technology; it is pro-judgment. Justice Devan Ramachandran has emphasised that AI cannot sit above human intellect in conflicts driven by emotion and lived experience, and has linked the AI moment to a larger crisis of verification in the age of deepfakes and viral misinformation. Justice M Sundar, while recognising AI's utility as a research aid, has cautioned that it lacks emotional and constitutional reasoning and has warned against hallucinated citations entering legal work. Together, these remarks describe a judiciary trying to protect the integrity of legal reasoning while acknowledging that tools will change.

Jain's most striking observation, however, may be about what AI does to hierarchy. If AI democratises competence, then small firms and lawyers outside elite networks can access research and drafting assistance that once depended on expensive institutional support. That promise matches the MANAV claim that technology should widen inclusion. Yet the judiciary's warnings also supply the necessary counterweight: competence without responsibility is not progress. If everyone has access to tools, excellence will be defined by discernment, not by outputs.

The emerging Indian consensus in law is neither panic nor cheerleading but a disciplined attempt to keep the human at the centre, not as a slogan, but as a liability principle, an ethical duty, and a constitutional necessity. AI will be used; the legal system is already there. The real question is whether the profession will use it with the seriousness the law demands, or whether it will treat automation as a substitute for the one thing it cannot automate: accountability.
