OpenAI Warns Delhi High Court that ChatGPT Data Removal Could Violate US Legal Obligations
OpenAI has argued in a legal filing before the Delhi High Court that any order to remove the training data behind its ChatGPT service would conflict with its legal obligations under U.S. law. The filing, reviewed by Reuters, underscores the complexities that arise when international legal frameworks intersect with rapidly evolving AI technology. The company contends that complying with such an order would not only disrupt its operations but could also place it in breach of established U.S. laws governing data usage and intellectual property.
Beyond the conflict-of-laws argument, OpenAI has asserted that Indian courts lack jurisdiction over the matter brought by Asian News International (ANI), an Indian news agency. The case, filed in November 2024, accuses OpenAI of using ANI's published content without permission to train ChatGPT. OpenAI's position is that, because it has no physical presence in India, the dispute does not fall within the jurisdiction of Indian courts, calling into question the legal basis for ANI's claims in the country.
The Delhi lawsuit represents one of the most significant legal challenges yet faced by an AI company in India. ANI is seeking both damages and the removal of its content from OpenAI's systems, a demand that has sparked considerable debate over the use of publicly available data to train AI models. The dispute also reflects a global tension over intellectual property rights in the age of artificial intelligence, as prominent copyright holders increasingly scrutinize how their content is used without consent.
The case is part of a broader wave of litigation targeting AI companies over alleged copyright infringement. Similar lawsuits have emerged worldwide, including a high-profile suit filed by The New York Times against OpenAI in the United States. Despite the growing number of legal challenges, OpenAI has consistently defended its practices, arguing that its models rely on fair use of publicly available information. The outcome of these cases could have far-reaching implications for how AI systems are trained and for the future of intellectual property law in the digital age.



