Articles

EU Presses Apple, Google and Microsoft on Efforts to Combat Financial Scams

European Union regulators have asked Apple, Google, Microsoft, and Booking.com to detail the steps they are taking to prevent their platforms from being used for financial scams, highlighting growing concern over the rising cost of online fraud.

The inquiry falls under the Digital Services Act (DSA), the EU’s sweeping legislation that requires major tech companies to take stronger action against illegal and harmful online content.

“Today, we sent requests for information, under the DSA, to Apple, Booking.com, Google and Microsoft on how they identify and manage risks related to financial scams,” EU tech chief Henna Virkkunen wrote on X.

Virkkunen warned that online fraud has become easier than ever to launch, frequently leading to significant financial losses for consumers. She noted that scams such as fake hotel listings, fraudulent banking apps, and deepfake videos of public figures promoting false investments cost Europeans more than €4 billion ($4.7 billion) each year.

Authorities worldwide have also raised alarms that AI tools could make scams like phishing and fake investment schemes more convincing and harder to detect.

The EU’s probe underscores its heightened scrutiny of Big Tech’s responsibilities in protecting users against financial crime.

Anthropic CEO Criticizes Proposed 10-Year Ban on State AI Regulation as ‘Too Blunt’

Dario Amodei, CEO of Anthropic, argued in a New York Times opinion piece that a Republican proposal to block states from regulating artificial intelligence for 10 years is an overly blunt approach. Instead, he called for a coordinated federal effort by the White House and Congress to establish transparency standards for AI companies.

Amodei warned that a decade-long moratorium on state regulations would leave a regulatory gap with “no ability for states to act, and no national policy as a backstop,” especially given how rapidly AI technology is advancing.

The proposed ban, included in President Donald Trump’s tax cut bill, seeks to preempt recent AI laws passed in several states. However, it has faced pushback from a bipartisan coalition of attorneys general who support state-level oversight of high-risk AI applications.

Amodei recommended a federal transparency standard requiring AI developers to implement rigorous testing and evaluation policies, disclose risk mitigation plans, and publicly share how they ensure the safety of their models before release.

He noted that Anthropic, backed by Amazon, already publishes such transparency reports, and that competitors like OpenAI and Google DeepMind have adopted similar practices. Amodei suggested that legislation may be necessary to preserve this transparency as AI models grow more powerful and corporate incentives to disclose risks wane.

U.S. AI Safety Institute Staff Excluded from Trump’s Paris AI Summit Delegation

The United States delegation to an artificial intelligence summit in Paris on February 10-11 will not include staff from the U.S. AI Safety Institute, according to sources familiar with Washington’s plans. Vice President JD Vance will lead the delegation at the summit, which will gather representatives from around 100 countries to discuss AI’s potential.

Attending on behalf of the White House Office of Science and Technology Policy (OSTP) are Principal Deputy Director Lynne Parker and Senior Policy Advisor for Artificial Intelligence Sriram Krishnan, an OSTP spokesperson confirmed. However, plans for officials from the Department of Homeland Security and the Department of Commerce, including the AI Safety Institute, to attend were canceled, said anonymous sources close to the situation.

The AI Safety Institute, established under former President Joe Biden, is dedicated to evaluating and mitigating AI risks and has partnerships with companies like OpenAI and Anthropic. Its future direction under the Trump administration remains uncertain, especially as the body currently lacks a director. Trump also recently revoked an AI executive order from Biden’s administration.

The decision not to include AI Safety Institute staff in the delegation may be linked to the ongoing transition at the Commerce Department, where the institute is housed, following Trump’s January 20 inauguration.

The Paris summit will focus less on AI risks compared to previous international summits held at Bletchley Park and Seoul. Nevertheless, representatives from the International Network of AI Safety Institutes, chaired by the United States, are expected to attend. U.S. delegates may still participate in network discussions, with a focus on ensuring the U.S. remains a leader in AI innovation amid China’s rapid advancements in the field.