Articles

EU Accepts AliExpress Commitments to Combat Illegal Online Products

The European Commission announced on Wednesday that it has accepted binding commitments from Alibaba’s AliExpress to tackle the spread of illegal and pornographic materials on its platform. This follows a March investigation into AliExpress’s alleged failure to adequately address these concerns, which could have resulted in significant fines.

Despite the acceptance of these commitments, AliExpress may still face penalties. The Commission noted that the company underestimated the risks of disseminating illegal goods and failed to enforce sanctions against traders posting illicit content. AliExpress has the opportunity to respond to these preliminary findings.

AliExpress stated it has cooperated proactively with the Commission and remains confident that ongoing dialogue will lead to a compliant resolution.

The commitments include improvements to monitoring systems for illegal products, such as unapproved medicines, food supplements, and adult content. They also enhance transparency around advertising and recommendation algorithms, and facilitate trader traceability on the platform.

Anthropic CEO Criticizes Proposed 10-Year Ban on State AI Regulation as ‘Too Blunt’

Dario Amodei, CEO of Anthropic, argued in a New York Times opinion piece that a Republican proposal to block states from regulating artificial intelligence for 10 years is an overly blunt approach. Instead, he called for a coordinated federal effort by the White House and Congress to establish transparency standards for AI companies.

Amodei warned that a decade-long moratorium on state regulations would leave a regulatory gap with “no ability for states to act, and no national policy as a backstop,” especially given how rapidly AI technology is advancing.

The proposed ban, included in President Donald Trump’s tax-cut bill, seeks to preempt recent AI laws passed in several states. However, it has faced pushback from a bipartisan coalition of attorneys general who support state-level oversight of high-risk AI applications.

Amodei recommended a federal transparency standard requiring AI developers to implement rigorous testing and evaluation policies, disclose risk mitigation plans, and publicly share how they ensure the safety of their models before release.

He noted that Anthropic, supported by Amazon, already publishes such transparency reports, and competitors like OpenAI and Google DeepMind have adopted similar practices. Amodei suggested that legislation might be necessary to maintain transparency as AI models grow more powerful and corporate incentives to disclose risks may wane.

Meta to Require AI Disclosure for Political Ads Ahead of Canadian Elections

Meta Platforms (META.O) announced on Thursday that it will require advertisers to disclose the use of AI or other digital techniques in political or social issue ads ahead of Canada’s federal elections. This move aims to combat misinformation and increase transparency in the political advertising landscape.

The new disclosure rule will apply to ads featuring photorealistic images, videos, or realistic-sounding audio that have been digitally altered to show a real person saying or doing something they did not actually say or do. It will also apply to ads depicting non-existent people, fabricated events, altered footage of real events, or realistic portrayals of events that allegedly occurred but are not true recordings.

In November 2023, Meta extended its ban on new political ads following the U.S. election to combat misinformation. The company also prohibited political campaigns and advertisers in regulated sectors from using its generative AI advertising tools. Despite these efforts, Meta faced a setback earlier this year when it scrapped its U.S. fact-checking programs amid pressure from conservatives to overhaul its approach to political content.

Additionally, Meta has introduced a feature allowing users to disclose when they share AI-generated content, enabling the platform to label such media accordingly.