New Delhi: India is ready to adopt a techno-legal approach to AI governance, the Principal Scientific Adviser (PSA) to the government, Prof Ajay Kumar Sood, said on Monday.
Addressing a high-level roundtable conference here, Prof Sood highlighted the need to embed legal and regulatory principles directly into AI systems to ensure accountability, transparency, data privacy, and cybersecurity by design.
He urged participants to evaluate all plausible ways of creating a techno-legal governance framework.
The roundtable, an official pre-summit event ahead of the India AI Impact Summit 2026, was organised by the Office of the Principal Scientific Adviser, in collaboration with the iSPIRT Foundation and the Centre for Responsible AI (IIT Madras), to discuss “Techno-Legal Regulation for Responsible, Innovation-Aligned AI Governance”.
The experts highlighted the need for robust data privacy and consent mechanisms across AI training, inference, and deployment; convergence with the DEPA framework; and the adoption of compliance-by-design architectures to support the global scalability of Indian AI governance models. The discussions also addressed regulatory responses to non-deterministic AI systems and AI-generated content, including copyright concerns, while underscoring the challenges of operationalising techno-legal frameworks for AI governance. Participants emphasised that AI model robustness must be balanced against technical and socio-economic trade-offs, and that emerging solutions should be practical, accessible, and consumable at the end-user level.
The discussions underscored the need to develop a standardised evaluation framework for responsible AI across the full lifecycle of AI systems, translate these insights into effective policy levers, and embed safety and governance measures directly into AI technology stacks to mitigate risks and promote equitable access.
The roundtable was attended by Preeti Banzal, Adviser/Scientist ‘G’, Office of PSA; Kavita Bhatia, Scientist ‘G’ and Group Coordinator, Ministry of Electronics and Information Technology; Hari Subramanian, Volunteer, iSPIRT Foundation, and Co-founder & CEO, Niti AI; and Prof. Balaraman Ravindran, Head, Centre for Responsible AI, IIT Madras.
Banzal noted that the insights from the roundtable will feed into the Safe and Trusted AI Chakra of the India AI Impact Summit 2026, supporting the development of a pro-innovation, trustworthy AI ecosystem and strengthening India's role in global AI governance. She also spoke about India's approach to techno-legal regulation, emphasising the importance of a practical implementation mechanism, exemplary pathways to AI governance, enabling policy mechanisms, capacity building, and global cooperation.
The co-moderators, Subramanian and Ravindran, discussed key challenges and metrics, including data protection, leakage risks, differential privacy, accuracy, and throughput, noting the trade-offs between privacy and system performance. They underlined the importance of equity in access, data sovereignty, and broader economic and strategic considerations.
Prof. Mayank Vatsa, Professor, IIT Jodhpur; Jhalak Kakkar, Director, Centre for Communication Governance, National Law University, Delhi; Abilash Soundararajan, Founder & CEO, PrivaSapien, and other subject-matter experts were also present at the roundtable.
The Office of PSA will also release an explanatory white paper on Techno-Legal Regulation for AI Governance, incorporating the suggestions and recommendations made at the roundtable.
(IANS)