
X Adds Blue Checkmark Disclaimer to Address EU Probe

Elon Musk’s social media platform X has added a more prominent disclaimer to its blue checkmark feature in an effort to stave off a potential fine from European Union tech regulators, according to a source familiar with the matter.

The European Commission charged X in July 2024 with misleading users about the meaning of the blue checkmark. Traditionally, the badge indicated that an account belonged to a verified public figure. Following Musk’s acquisition of the platform in 2022, however, the checkmark came to signify only that the account holder was a paying subscriber, not that their identity had been verified.

Although X has not admitted any wrongdoing, it recently began displaying a more noticeable disclaimer clarifying the meaning of the blue checkmark. According to the source, this move is not part of any formal settlement proposal with the EU’s tech enforcement body but is seen as a voluntary step to demonstrate compliance. The new disclaimer has been in place for about a week.

The European Commission acknowledged X’s decision, with a spokesperson stating: “Our investigation related to the blue checkmark is ongoing.” X declined to comment when contacted.

The probe is being conducted under the EU’s Digital Services Act (DSA), which mandates that large online platforms take stronger action against illegal or harmful content or face penalties of up to 6% of their global annual revenue. The DSA also requires transparency in how online platforms present information to users.

Bloomberg first reported on X’s decision to highlight the disclaimer.

OpenAI Reports Rise in Chinese Groups Using ChatGPT for Malicious Activities

OpenAI disclosed in a report released Thursday that it has detected a growing number of Chinese-linked groups leveraging its AI technology, including ChatGPT, for covert and malicious operations. Although these activities have expanded in scope and tactics, OpenAI noted that the operations remain generally small in scale and reach limited audiences.

Since ChatGPT’s launch in late 2022, generative AI tools have raised concerns about misuse, including the rapid creation of human-like text, images, and audio that can be weaponized for misinformation, hacking, or social manipulation. OpenAI regularly monitors such harmful usage of its platform and publishes its findings.

Among the examples cited by OpenAI:

  • Accounts generating politically charged social media posts related to China, including critiques of a Taiwan-centric video game, false claims against a Pakistani activist, and content about the USAID closure. Some posts also criticized U.S. President Donald Trump’s tariffs with messages such as “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”

  • Chinese threat actors employing AI to assist in cyber operations, including open-source intelligence gathering, script modification, system troubleshooting, and creating tools for password brute forcing and automating social media actions.

  • Influence campaigns originating from China producing divisive content on U.S. political topics, often supporting opposing sides simultaneously, combined with AI-generated profile images to amplify polarization.

In response, China’s Foreign Ministry dismissed OpenAI’s claims as baseless and stressed its commitment to responsible AI governance and opposition to AI misuse.

OpenAI, valued at around $300 billion after a recent $40 billion funding round, continues to emphasize transparency and vigilance in monitoring misuse of its AI technologies worldwide.

Meta to Require AI Disclosure for Political Ads Ahead of Canadian Elections

Meta Platforms (META.O) announced on Thursday that it will require advertisers to disclose the use of AI or other digital techniques in political or social issue ads ahead of Canada’s federal elections. This move aims to combat misinformation and increase transparency in the political advertising landscape.

The new disclosure rule will apply to ads featuring photorealistic images, videos, or realistic-sounding audio that has been digitally altered to show a real person saying or doing something they did not actually say or do. It will also cover ads depicting non-existent people, fabricated events, or altered footage of real events.

In November 2024, Meta extended its ban on new political ads following the U.S. election in an effort to combat misinformation. The company has also prohibited political campaigns and advertisers in regulated industries from using its generative AI advertising tools. Despite these efforts, Meta drew criticism earlier this year when it scrapped its U.S. fact-checking programs amid pressure from conservatives to overhaul its approach to political content.

Additionally, Meta has introduced a feature allowing users to disclose when they share AI-generated content, enabling the platform to label such media accordingly.