OpenAI Considered Acquisition of Google Chrome, Executive Reveals During Antitrust Trial

An OpenAI executive revealed during a high-profile antitrust trial that the company would be interested in acquiring Google’s Chrome browser—if regulators succeed in forcing Alphabet to divest it. The disclosure was made Tuesday in Washington, where the U.S. Department of Justice is pressing its case against Google’s dominance in the online search market.

Nick Turley, head of product for ChatGPT, made the statement while testifying at the trial. The DOJ is seeking sweeping remedies to restore competition, arguing that Google has unfairly cemented its monopoly in the search industry through exclusive agreements and platform bundling.

Although Google has never offered Chrome for sale, the judge presiding over the case ruled last year that the tech giant does indeed hold a monopoly in search and related advertising. Google, for its part, has denied wrongdoing and is preparing to appeal the decision, maintaining that its products are chosen by users on merit.

The trial, which is being closely watched by the tech industry, also offers a window into the growing rivalry in generative AI. Prosecutors argued in their opening remarks that Google’s dominance in search could give it an unfair head start in artificial intelligence, allowing it to use its AI tools to further direct users back to its core search platform—tightening its grip on the market even more.

Character AI Launches AvatarFX Model That Generates Consistent Videos from Images

Character AI, a California-based AI platform, has introduced its first video generation model, named AvatarFX, which can convert images into 2D and 3D animated videos. The company claims that videos generated by AvatarFX will maintain temporal consistency, ensuring that elements like facial expressions and hand and body movements remain smooth and coherent across frames. The videos will also incorporate speech, powered by Character AI’s native text-to-speech (TTS) models. AvatarFX is expected to be released in the coming months, with paid subscribers gaining early access to the tool.

AvatarFX marks a significant expansion for Character AI, which has primarily focused on text- and image-based models in the past. With this new model, the company ventures into the realm of AI-generated video, allowing users to create animated characters that can move and speak. However, unlike most video generation models, AvatarFX will not generate realistic human characters. Instead, it focuses on 2D and 3D cartoon characters, as well as non-human faces. The goal is to provide users with a tool that allows for more creative and controlled video generation.

A key feature of AvatarFX is its emphasis on temporal consistency. In practice, this means the model aims to preserve continuity of movement, with facial expressions and hand and body gestures remaining fluid between frames. The company asserts that this approach will significantly reduce glitches and inconsistencies, such as extra limbs or distorted facial expressions, that often occur in AI-generated video. While these claims sound promising, the true capabilities of AvatarFX can only be confirmed once the model is officially released.

One important distinction of AvatarFX is that it will not generate videos from text prompts. Instead, the model accepts images as its sole input. Character AI believes this approach gives users better control over the generation process, ensuring that the resulting videos stay closer to the user’s vision. The inclusion of speech, powered by the company’s TTS models, adds another layer of realism to the animated content, making it more engaging and dynamic. This move signals Character AI’s push to change how we create and interact with AI-generated videos, opening new possibilities for animation and storytelling.

Meta’s Oversight Board Criticizes Company for Policy Overhaul Decisions

Meta Platforms’ Oversight Board has issued a strong rebuke to the company over a policy overhaul implemented in January, which reduced fact-checking efforts and relaxed restrictions on discussions surrounding sensitive issues like immigration and gender identity. The board, which operates independently but is funded by Meta, expressed concerns that the changes were made too quickly and without adequate transparency or human rights due diligence. These modifications, announced just before the start of U.S. President Donald Trump’s second term, have raised alarms about their potential to worsen harmful content on Meta’s platforms.

The Oversight Board criticized Meta for making the policy changes “hastily” and without following its usual procedures. The board emphasized the need for the company to assess the “potential adverse effects” these changes could have, particularly on social discourse and human rights. This public reprimand highlights a growing tension between Meta’s leadership, particularly CEO Mark Zuckerberg, and the Oversight Board, which has been increasingly scrutinizing the company’s decisions. Zuckerberg, who has been working to repair his relationship with Trump, faces pressure as he scales back measures aimed at limiting the spread of hate speech, misinformation, and violence on his platforms.

As part of its ongoing evaluations, the Oversight Board recently issued its first rulings on individual content cases since the January policy changes. In some instances, the board upheld Meta’s decisions to leave up controversial content, such as posts discussing transgender people’s access to bathrooms. In other cases, however, the board ruled that Meta must remove posts containing racist slurs, underscoring the complex balance the company must strike between protecting free expression and addressing harmful content.

Meta responded to the board’s rulings with a statement welcoming the decisions that supported free speech by leaving up or restoring certain content. However, the company did not directly address the rulings that required content removal. This ongoing debate reflects the broader challenges Meta faces in content moderation as it navigates the delicate intersection of freedom of expression and the need to protect users from harmful and discriminatory speech.