India Tightens Regulations on AI Deployment, Requiring Government Approval for Launch

India has taken a significant step in the global debate surrounding Artificial Intelligence (AI) regulation by issuing an advisory mandating that major tech firms obtain government approval before introducing new AI models. The Ministry of Electronics and IT (MeitY) released the advisory late last Friday, directing major tech companies to seek explicit permission from the government before rolling out AI models in India.

Although the advisory is not legally binding, it signals a significant shift in India’s approach to AI regulation, as highlighted by Rajeev Chandrasekhar, India’s Deputy IT Minister. Chandrasekhar emphasized that the advisory aims to pave the way for future regulatory frameworks in the AI sector. He stated, “We are doing it as an advisory today asking you to comply with it.”

The advisory requires tech firms to ensure that their AI products and services do not exhibit bias or discrimination, and do not compromise the integrity of the electoral process. Additionally, companies must appropriately label any inherent unreliability or fallibility in the output generated by their AI models.

This move marks a departure from India’s previous hands-off approach to AI regulation, reflecting the government’s recognition of the importance of ensuring responsible AI deployment. Just a year ago, India had refrained from regulating AI growth, viewing the sector as strategically crucial to the nation’s interests.

However, the new advisory has raised concerns among industry executives, particularly within the Indian startup ecosystem. Many fear that stringent regulations could impede India’s competitiveness in the global AI landscape. Pratik Desai, founder of the startup Kisan AI, expressed disappointment and demotivation following the announcement, reflecting broader sentiments within the industry.

Furthermore, the advisory addresses recent controversies surrounding AI platforms’ responses to sensitive queries. Following an incident involving Google’s AI model Gemini, which was accused of providing biased responses regarding Indian Prime Minister Narendra Modi, the government expressed its discontent. The advisory underscores the need for AI platforms to refrain from generating biased or unreliable content.

In addition to seeking government approval, tech firms deploying generative AI models may offer their services to Indian users only after appropriately disclosing any potential unreliability or fallibility of the generated output. The advisory recommends implementing a “consent popup” mechanism to explicitly inform users about the limitations of AI-generated content.

Non-compliance with the advisory could result in penal consequences under the Information Technology Act and other relevant laws. Intermediaries are required to report their compliance within 15 days of receiving the advisory.

The Ministry’s advisory aims to foster transparency, legality, and accountability in AI deployment, safeguarding users and promoting responsible innovation in the burgeoning AI landscape.
