Articles

OpenAI Faces Legal Battle in India Over Jurisdiction in Copyright Case

OpenAI is facing a legal challenge in India as it argues that local courts lack jurisdiction over its U.S.-based business, a stance that legal experts believe is unlikely to succeed. The case, brought by Indian news agency ANI, accuses OpenAI of copyright infringement for allegedly using its content without permission.

India, OpenAI’s second-largest market, has become a key battleground, with media groups—including those backed by billionaires Gautam Adani and Mukesh Ambani—joining ANI in opposition. While OpenAI maintains that its AI models use publicly available data in accordance with fair use principles, it is also contesting jurisdiction, citing its terms of service that specify dispute resolution in San Francisco. The company also argues that it does not maintain servers or data centers in India.

Legal experts, however, suggest that Indian courts are likely to reject OpenAI’s defense. Courts in the country have previously ruled against similar jurisdictional arguments, including in a 2022 case involving Telegram, where the Delhi High Court ruled that server location alone does not exempt a company from Indian law.

If OpenAI wins on the jurisdiction argument, it could avoid facing the copyright lawsuit in India. If it loses, it may be forced to delete ANI’s content from its training data and pay $230,000 in damages. The Delhi court is set to hear arguments on the case in February.

India has a history of holding foreign tech companies accountable to its laws, with past confrontations involving Google, Facebook, and X (formerly Twitter). The Indian government has maintained that global tech firms must comply with local regulations, reinforcing the challenge OpenAI faces in defending its position.

Amid the legal battle, OpenAI CEO Sam Altman and other senior executives are set to visit India on February 5, underscoring the market’s strategic importance.


French Privacy Watchdog to Investigate DeepSeek Over AI and Data Protection

France’s data privacy authority, the CNIL, announced on Thursday that it will question DeepSeek to assess how its AI system works and what privacy risks it may pose to users. The Chinese AI startup gained international attention after claiming that training its DeepSeek-V3 model required less than $6 million worth of computing power on Nvidia H800 chips.

A CNIL spokesperson confirmed that its AI department is currently analyzing DeepSeek’s tool and will engage with the company to understand its system and data protection measures. The French regulator is among the most active in Europe, having previously fined tech giants like Google and Meta for privacy violations.

DeepSeek is also under scrutiny in other parts of Europe. Italy’s data protection authority recently requested details on its handling of personal data, while Ireland’s Data Protection Commission has inquired about data processing practices related to Irish users.

The European Union maintains strict privacy protections under its General Data Protection Regulation (GDPR), widely regarded as one of the world’s most comprehensive data privacy laws. GDPR violations can result in fines of up to 4% of a company’s global revenue. Additionally, new EU AI regulations impose transparency obligations on high-risk AI models, with penalties ranging from 7.5 million euros (or 1.5% of turnover) to 35 million euros (or 7% of global turnover), depending on the severity of violations.

As regulatory scrutiny intensifies, DeepSeek faces mounting pressure to demonstrate compliance with European data protection standards.


Trump Reverses Biden’s Executive Order on AI Risk Mitigation

U.S. President Donald Trump on Monday revoked an executive order signed by former President Joe Biden in 2023 that aimed to address the risks associated with artificial intelligence (AI). The order required AI developers to disclose safety test results for systems posing potential risks to national security, the economy, public health, or safety. It had been a point of contention, in part because Congress had yet to pass comprehensive AI legislation when it was issued.

Key Points:

  • Revocation of Biden’s Executive Order: Trump’s move dismantles the framework that sought to introduce safety protocols for AI development. Biden’s order required developers of high-risk AI systems to share safety test results with the U.S. government before public release, reflecting concerns over national security and public safety.
  • Republican Stance on AI: The 2024 Republican Party platform had previously called for the repeal of Biden’s order, citing its potential to stifle AI innovation. The platform emphasized that AI development should be grounded in principles of free speech and human flourishing, aligning with Trump’s decision.
  • Risks and Opportunities in AI: Generative AI has generated both excitement and concern: its ability to automate tasks promises productivity gains but also threatens to disrupt industries and displace jobs. The rapid advance of systems capable of producing text, images, and video has heightened alarm about unforeseen risks to security.
  • Recent Developments in AI Oversight: Just last week, the U.S. Commerce Department introduced new restrictions on AI chip and technology exports, drawing backlash from tech companies such as Nvidia. Notably, while Trump revoked Biden’s order on AI safety protocols, he left in place a separate Biden-era executive order aimed at supporting the energy needs of AI data centers.