Articles

Amazon Faces Lawsuit Over Alleged Secret Consumer Tracking via Cellphones

Amazon is facing a lawsuit accusing the company of secretly tracking consumers through their cellphones and profiting from the data it collects. Filed on Wednesday in a San Francisco federal court, the proposed class-action suit claims that the retail giant gained unauthorized access to users’ location data without their knowledge or consent. The lawsuit raises concerns about privacy violations and the extent to which tech companies can collect and monetize personal information.

According to the complaint, Amazon allegedly obtained “backdoor access” to consumer devices by embedding its Amazon Ads SDK code into tens of thousands of third-party apps. This allowed the company to collect highly detailed, timestamped geolocation data, which could reveal sensitive personal details such as where users live and work, their shopping habits, and even their religious affiliations and health concerns. The lawsuit argues that Amazon’s practices amount to “fingerprinting” consumers, creating vast profiles without their explicit permission.

The legal challenge was initiated by Felix Kolotinsky, a California resident who claims Amazon collected his personal data through the “Speedtest by Ookla” app on his phone. The lawsuit suggests that many consumers may have unknowingly shared their information in a similar manner, highlighting the growing debate over digital privacy and data security. If proven, these allegations could further fuel regulatory scrutiny of Amazon’s data collection practices.

Kolotinsky’s complaint accuses Amazon of violating California’s penal code and state laws against unauthorized computer access. The lawsuit seeks unspecified damages on behalf of millions of Californians who may have been affected. As concerns over corporate data tracking intensify, the case could have significant implications for how companies collect and use consumer data, potentially leading to stronger privacy protections in the future.

South Korea Blocks DeepSeek Amid Security Concerns, Following Global Warnings

South Korea’s industry ministry has temporarily blocked employee access to the Chinese artificial intelligence startup DeepSeek due to security concerns, marking the latest move by governments to restrict the use of certain AI services. A ministry official confirmed on Wednesday that the ban was implemented in response to growing apprehension surrounding generative AI technologies.

On Tuesday, the South Korean government issued a notice urging caution among ministries and agencies regarding the use of AI services such as DeepSeek and ChatGPT in work-related tasks. The notice followed earlier actions by state-run entities, with Korea Hydro & Nuclear Power confirming it had blocked access to DeepSeek earlier this month.

The country’s defense ministry also took action, blocking access to DeepSeek on military computers, while the foreign ministry restricted its use on devices connected to external networks, according to Yonhap News Agency. However, the foreign ministry did not provide further details regarding the specific security measures taken.

DeepSeek, which was not immediately available for comment, joins a growing list of companies facing scrutiny over potential security risks. Both Australia and Taiwan have recently banned the AI service from government devices, citing similar security concerns. In January, Italy’s data protection authority ordered DeepSeek to block its chatbot after the company failed to address privacy issues raised by regulators.

In addition to government actions, private companies in South Korea are also taking precautions. Kakao Corp, a major South Korean chat app operator, instructed employees to refrain from using DeepSeek due to security fears, a directive issued shortly after Kakao announced its own partnership with OpenAI. Other South Korean tech giants, including SK Hynix and Naver, have also restricted or limited access to generative AI services, citing concerns about data security and privacy.

The scrutiny of DeepSeek follows the company’s claim that its AI models are on par with or superior to products developed in the U.S., while being significantly cheaper to produce. South Korea’s information privacy watchdog has announced plans to inquire with DeepSeek about its user data management practices, adding another layer of regulatory attention on the Chinese startup.


EU Announces Guidelines to Prevent AI Misuse by Employers, Websites, and Police

The European Commission unveiled new guidelines on Tuesday aimed at curbing the misuse of artificial intelligence (AI) in various sectors, including employment, online services, and law enforcement. As part of the European Union’s broader AI regulations, the guidelines prohibit practices such as using AI to track employees’ emotions or to manipulate consumers into spending money online.

The guidelines accompany the EU's Artificial Intelligence Act, which has been legally binding since last year but will not be fully enforceable until August 2, 2026. Some provisions take effect earlier: the ban on prohibited practices, including manipulative and deceptive uses of AI, applies from February 2 this year.

Prohibited practices under the guidelines include the use of AI to create “dark patterns” on websites designed to manipulate users into making financial commitments, as well as AI applications that exploit individuals based on factors like age, disability, or socio-economic status. Additionally, social scoring systems that use personal data, such as race or origin, to categorize individuals are banned, alongside the use of biometric data by police to predict criminal behavior without proper verification.

Employers are also restricted from using surveillance tools like webcams or voice recognition systems to monitor employees’ emotions. The guidelines further prohibit the use of mobile CCTV cameras equipped with facial recognition for law enforcement, except under strict conditions with safeguards in place.

The EU has given member countries until August 2 to designate market surveillance authorities to enforce these AI rules. Companies found in violation could face hefty fines ranging from 1.5% to 7% of their global revenue. This comprehensive regulatory framework contrasts with the United States’ voluntary compliance approach and China’s focus on maintaining social stability through state-controlled AI.