Articles

LinkedIn Lawsuit Over Customer Data Use for AI Models Dropped

A proposed class action lawsuit against Microsoft’s LinkedIn, which accused the platform of using customers’ private messages to train artificial intelligence models, has been dropped. Plaintiff Alessandro De La Torre voluntarily dismissed the case on Thursday in U.S. federal court in San Jose, California, just days after filing it. LinkedIn had argued that the allegations were unfounded.

De La Torre’s lawsuit claimed that LinkedIn violated the privacy of its Premium users by disclosing their private messages to third parties involved in developing AI. He accused the platform of breaching its promise to use customer data only to enhance its services, not for external uses like AI training.

The issue came to light when LinkedIn updated its privacy policy in September, introducing a new account setting for AI training and noting that opting out would not affect training that had already taken place. This disclosure sparked concerns among users about how their data was being handled.

However, LinkedIn clarified that it had not shared private messages with third parties for AI training. In a LinkedIn post, Sarah Wight, the company’s vice president and legal counsel, confirmed, “We never did that.” De La Torre’s legal team acknowledged the clarification, stating that users could take comfort in knowing their private messages had not been used for AI purposes.

Taiwan Bans Government Use of DeepSeek AI Over Security Concerns

Taiwan’s Ministry of Digital Affairs announced on Friday that government departments are prohibited from using DeepSeek, a Chinese artificial intelligence (AI) service, citing national security risks. The ministry warned that DeepSeek’s operations involve cross-border data transmission, raising concerns about potential information leaks.

Given Beijing’s sovereignty claims over Taiwan and ongoing political and military tensions, Taiwanese authorities remain cautious about Chinese technology. The digital ministry emphasized that it will continue monitoring technological developments and adjust cybersecurity policies as necessary to safeguard national security.

This development follows similar concerns raised internationally. South Korea’s information privacy watchdog has stated plans to question DeepSeek regarding its data handling practices. Meanwhile, regulatory authorities in France, Italy, and Ireland are also examining the company’s use of personal information.

DeepSeek’s rapid rise has sparked global scrutiny. By Monday, its free AI assistant had surpassed OpenAI’s ChatGPT in downloads from Apple’s App Store. The surge in DeepSeek’s popularity coincided with a sharp decline in U.S. tech stocks, leading to a record $593 billion market value loss for Nvidia in a single day.


French Privacy Watchdog to Investigate DeepSeek Over AI and Data Protection

France’s data privacy authority, the CNIL, announced on Thursday that it will question DeepSeek to assess how its AI system works and what privacy risks it may pose for users. The Chinese AI startup gained international attention after revealing that training its DeepSeek-V3 model required less than $6 million worth of computing power from Nvidia H800 chips.

A CNIL spokesperson confirmed that its AI department is currently analyzing DeepSeek’s tool and will engage with the company to understand its system and data protection measures. The French regulator is among the most active in Europe, having previously fined tech giants like Google and Meta for privacy violations.

DeepSeek is also under scrutiny in other parts of Europe. Italy’s data protection authority recently requested details on its handling of personal data, while Ireland’s Data Protection Commission has inquired about data processing practices related to Irish users.

The European Union maintains strict privacy protections under its General Data Protection Regulation (GDPR), widely regarded as one of the world’s most comprehensive data privacy laws. GDPR violations can result in fines of up to 4% of a company’s global revenue. Additionally, new EU AI regulations impose transparency obligations on high-risk AI models, with penalties ranging from 7.5 million euros (or 1.5% of turnover) to 35 million euros (or 7% of global turnover), depending on the severity of violations.
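The fine ceilings described above can be illustrated with a short sketch. The function names are hypothetical, and it assumes the "whichever is higher" rule that both the GDPR and the EU AI Act apply to large companies (for the GDPR, the fixed floor is 20 million euros; for the AI Act, the tiers quoted in the article are used):

```python
def gdpr_fine_cap(global_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20 million
    or 4% of worldwide annual turnover."""
    return max(20_000_000, 0.04 * global_revenue_eur)

def ai_act_fine_cap(global_revenue_eur: float, severity: str) -> float:
    """Illustrative EU AI Act ceilings, using the tiers quoted in the
    article: EUR 7.5M / 1.5% for the lightest violations, up to
    EUR 35M / 7% for the most severe; the cap is the higher figure."""
    tiers = {"low": (7_500_000, 0.015), "high": (35_000_000, 0.07)}
    fixed, pct = tiers[severity]
    return max(fixed, pct * global_revenue_eur)

# For a company with EUR 2 billion in global turnover, the
# percentage-based cap dominates in both regimes.
print(gdpr_fine_cap(2e9))            # 4% of 2e9 = EUR 80 million
print(ai_act_fine_cap(2e9, "high"))  # 7% of 2e9 = EUR 140 million
```

For small companies the fixed amounts dominate instead, which is why both regulations state the caps as a pair of figures rather than a single percentage.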

As regulatory scrutiny intensifies, DeepSeek faces mounting pressure to demonstrate compliance with European data protection standards.