Google DeepMind establishes a new organization dedicated to AI safety

Google DeepMind has announced the formation of a new organization called AI Safety and Alignment, aimed at addressing concerns about the misuse of AI to generate disinformation and misleading content. The move comes amid growing scrutiny from policymakers over the potential harms of AI-generated content.

The organization will include existing teams working on AI safety, as well as new specialized cohorts of researchers and engineers. One notable addition is a team focused on safety around artificial general intelligence (AGI): hypothetical systems capable of performing any task a human can.

The initiative parallels efforts at other organizations, such as OpenAI’s Superalignment division. The new team within AI Safety and Alignment will complement DeepMind’s existing AI safety research team in London, Scalable Alignment, which also focuses on the technical challenges of controlling superintelligent AI.

While Google has not disclosed specific details about the size or structure of the new organization, it is clear that the company is prioritizing AI safety as part of its broader efforts to address concerns about the ethical use of AI technology. This move may also reflect Google’s desire to demonstrate a responsible approach to AI development, particularly in light of recent controversies surrounding the misuse of AI-generated content.


The AI Safety and Alignment organization within Google DeepMind is tasked with developing and implementing concrete safeguards for Google’s Gemini models, both current and in development. This includes a focus on preventing the dissemination of bad medical advice, ensuring child safety, and mitigating the amplification of bias and other injustices.

Anca Dragan, a former Waymo staff research scientist and UC Berkeley professor of computer science, will lead the team. Her expertise in AI safety systems, as well as her background in human-AI and human-robot interaction, positions her well to address the complex challenges in this space.

Dragan’s consulting work with Waymo on AI safety systems may raise questions, particularly in light of recent controversies surrounding the safety of autonomous vehicles. Her dual appointment at UC Berkeley and DeepMind may also prompt concerns about whether she can devote sufficient time and attention to the critical issues of AGI safety and the long-term risks of AI development.

However, Dragan emphasizes that the goal of the AI Safety and Alignment organization is to enable models to better understand human preferences and values, work collaboratively with people to meet their needs, and mitigate the risks of AI deployment. Despite these potential challenges, her leadership signals a commitment to advancing AI safety and promoting responsible AI development.