Articles

xAI Co-Founder Igor Babuschkin Leaves to Launch AI Safety Investment Firm

Igor Babuschkin, co-founder of Elon Musk’s AI startup xAI, announced his departure on Wednesday to launch Babuschkin Ventures, an investment firm focused on AI safety research. Babuschkin, who previously worked at Google’s DeepMind and OpenAI, played a key role at xAI in developing foundational tools for model training and overseeing engineering across infrastructure, product, and applied AI projects.

xAI, launched by Musk in 2023 to challenge Big Tech’s AI efforts, has faced a string of recent executive departures, including legal head Robert Keele and X CEO Linda Yaccarino. Babuschkin’s exit comes amid intense competition in the AI sector, with companies like OpenAI, Google, and Anthropic investing heavily in the development of advanced AI systems.

EU Unveils Draft AI Code of Practice Focusing on Copyright and Safety for Companies

The European Commission revealed a draft code of practice on Thursday aimed at helping companies comply with the European Union’s evolving artificial intelligence regulations. The voluntary code emphasizes safeguarding copyright-protected content and implementing measures to reduce systemic risks linked to AI technologies.

Developed by 13 independent experts, the code is part of the broader EU AI regulatory framework. While signing up is optional, companies that do not join will miss out on the legal certainty offered to adherents. The rules will apply to major AI providers including Alphabet (Google), Meta (Facebook), OpenAI, Anthropic, Mistral, and others.

Under the code, signatories must publish summaries detailing the data sources used to train their general-purpose AI models. They are required to ensure that copyright-protected materials are only used appropriately, especially when employing web crawlers, and must take steps to prevent outputs that infringe copyright.

To address systemic risks, companies will also need to establish frameworks to identify and analyze potential hazards. While transparency and copyright guidelines apply to all general-purpose AI providers, specific safety and security provisions target providers of advanced models like OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude.

The EU’s AI Act, effective since last June, imposes strict transparency rules on high-risk AI systems and lighter obligations for general-purpose AI models. It also regulates AI use in military, crime, and security contexts. The new AI rules for large language models will become legally binding on August 2, with enforcement beginning a year later for new models. Existing models will have until August 2, 2027, to comply.

Henna Virkkunen, the EU’s technology commissioner, encouraged AI stakeholders to adopt the code, highlighting its collaborative design and its role in simplifying compliance with the EU AI Act. The code’s final approval by EU member states and the Commission is expected by the end of the year.

Anthropic CEO Criticizes Proposed 10-Year Ban on State AI Regulation as ‘Too Blunt’

Dario Amodei, CEO of Anthropic, argued in a New York Times opinion piece that a Republican proposal to block states from regulating artificial intelligence for 10 years is an overly blunt approach. Instead, he called for a coordinated federal effort by the White House and Congress to establish transparency standards for AI companies.

Amodei warned that a decade-long moratorium on state regulations would leave a regulatory gap with “no ability for states to act, and no national policy as a backstop,” especially given how rapidly AI technology is advancing.

The proposed ban, included in President Donald Trump’s tax cut bill, seeks to preempt recent AI laws passed in several states. However, it has faced pushback from a bipartisan coalition of attorneys general who support state-level oversight of high-risk AI applications.

Amodei recommended a federal transparency standard requiring AI developers to implement rigorous testing and evaluation policies, disclose risk mitigation plans, and publicly share how they ensure the safety of their models before release.

He noted that Anthropic, which is backed by Amazon, already publishes such transparency reports, and that competitors like OpenAI and Google DeepMind have adopted similar practices. Amodei suggested that legislation may become necessary to maintain transparency as AI models grow more powerful and corporate incentives to disclose risks wane.