X Commits to Safeguarding EU Users’ Data from AI Training Use

X’s Legal Team to Challenge Suspension Order with Opposition Filing by September 4

Social media platform X, formerly known as Twitter, has agreed not to use the personal data of European Union (EU) users to train its artificial intelligence (AI) systems, at least for now. The decision came to light during a hearing before an Irish court on Thursday, following concerns raised by Ireland’s Data Protection Commission (DPC). The court was told that X would refrain from processing such data for AI purposes until users had been given a clear way to withdraw their consent. The agreement marks a significant step in the ongoing scrutiny of data practices by major tech firms operating in the EU, where strict privacy laws demand transparent and ethical handling of user information.

The Irish Data Protection Commission, which acts as the lead EU regulator for many major U.S. tech companies because their European operations are headquartered in Ireland, applied to the court earlier this week to suspend or restrict X’s data processing activities. The Commission sought to prevent the platform from using the personal data of EU users to develop, train, or refine its AI systems without adequate user consent. The move underscores the growing regulatory pressure on tech giants like X to comply with the EU’s General Data Protection Regulation (GDPR), which requires a valid legal basis, such as explicit consent, before personal data can be processed for new purposes like AI training.

X, owned by Elon Musk, defended its practices by stating that all users can decide whether their public posts may be used to train the platform’s AI chatbot, Grok. To opt out, users must untick a box in their privacy settings. However, this opt-out option was only introduced after the company had already begun processing user data for AI purposes: the court heard that the platform started using EU users’ data for AI training on May 7 but did not offer the opt-out until July 16, raising concerns about a lack of transparency and potential non-compliance with EU data protection law.

Judge Leonie Reynolds, who presided over the hearing, noted the discrepancy in X’s handling of user data. She pointed out that the platform began processing data without prior consent and that the opt-out feature was not initially available to all users. This inconsistency has led to questions about the platform’s commitment to user privacy and compliance with the GDPR. Judge Reynolds emphasized the need for clarity and compliance, especially given the sensitive nature of personal data and its use in AI technologies.

In response to the concerns raised, a lawyer representing X told the court that the platform would halt the use of any data collected from EU users between May 7 and August 1 for AI training purposes. The suspension will remain in place until the court rules on the Irish DPC’s order. The statement suggests X is taking a cautious approach, likely in an effort to avoid regulatory penalties and to preserve its standing in the EU market.

The case highlights the broader challenge tech companies face in balancing innovation with regulatory compliance. As AI technologies evolve, the collection and use of personal data to train AI systems are drawing ever closer scrutiny. The outcome of this legal battle could set a precedent for how companies must navigate data privacy laws while leveraging user data for technological advancement. For X, complying with the court’s decision and the DPC’s requirements will be crucial to maintaining the trust of EU users and to operating under one of the world’s most stringent data protection regimes.