India drops plan to require approval for AI model launches
India's Ministry of Electronics and IT has backtracked on its recent AI advisory following criticism from local and global entrepreneurs and investors. The updated advisory, shared with industry stakeholders, drops the requirement that firms obtain government approval before launching or deploying an AI model in the Indian market. Instead, firms are advised to label under-tested or unreliable AI models so that users are aware of their potential fallibility.
This revision follows severe backlash against India’s initial AI advisory, with critics such as Andreessen Horowitz’s Martin Casado calling it “a travesty.” The original advisory marked a departure from India’s previously hands-off approach to AI regulation: less than a year ago, the ministry had declined to regulate AI growth, citing the sector’s importance to the country’s strategic interests.
Although the new advisory is not legally binding, the ministry views it as indicative of the “future of regulation,” signaling that it expects firms to comply. The updated guidelines emphasize that AI models should not be used to share content that is unlawful under Indian law and should guard against bias, discrimination, and threats to electoral integrity.
Intermediaries are advised to use consent popups or similar mechanisms to explicitly inform users that AI-generated output may be unreliable. While the ministry continues to focus on making deepfakes and misinformation identifiable, it no longer requires firms to develop techniques for identifying the originator of a particular message.
Despite the changes, the revised advisory still takes a cautious approach to AI deployment, seeking to balance innovation with oversight that protects users and promotes the ethical use of AI.