Google fails to resolve EU antitrust dispute over search result bias

Google said it has been unable to resolve disagreements with major travel and search service providers — including Skyscanner and Booking.com — over how it presents search results, leaving the company exposed to a potential European Union antitrust fine. The disclosure follows a two-day workshop (July 7–8) hosted by the European Commission, where Google presented its latest proposals to address long-standing allegations that it favors its own services like Google Flights, Hotels, and Shopping over rivals.

Under the EU’s Digital Markets Act (DMA), which aims to curb the dominance of “gatekeeper” platforms, violations can trigger fines of up to 10% of global annual revenue — a serious threat for Alphabet, Google’s parent company.

At the workshop, Google offered two new options, both of which would give vertical search competitors (like Skyscanner, Kelkoo, and Booking.com) a box at the top of the results page, while listings for individual providers such as airlines, hotels, and restaurants would appear underneath. However, critics argue the proposals still tilt in Google’s favor.

Skyscanner CEO Bryan Batista said the latest suggestions risk “misleading consumers and cementing Google’s position” in organic search. Meanwhile, lawyer Thomas Hoppner — who represents complainants against Google — criticized the company for deflecting blame onto tensions between intermediaries and direct service providers instead of addressing its own alleged self-preferencing behavior.

Google’s Director of Competition, Oliver Bethell, acknowledged the conflict in a LinkedIn blog post, saying: “Competing interests continue to pull us in different directions.” He added that while feedback remains welcome, it is time to conclude the debate, emphasizing that Google must act in the interest of its broader user base, not just a few commercial parties.

The European Commission is expected to make a final judgment on Google’s compliance in the coming months. Should regulators find the company in breach, it could trigger one of the most significant enforcement actions yet under the DMA.

U.S. evaluates Chinese AI for ideological alignment with Communist Party

The U.S. government has quietly launched a program to assess Chinese AI models for ideological bias, particularly their alignment with the Chinese Communist Party’s (CCP) official narratives, according to an internal memo reviewed by Reuters. The joint effort by the State and Commerce Departments involves feeding standardized questions in Chinese and English to Chinese-developed language models and grading their responses for signs of political conformity and censorship.

This marks the first known formal U.S. attempt to systematically evaluate the political alignment of foreign AI tools. The memo shows that AI systems like Alibaba’s Qwen 3 and DeepSeek’s R1 were among those tested. Analysts measured how directly the models addressed sensitive queries and whether their answers echoed Beijing’s stances — such as support for China’s South China Sea claims or avoiding discussion of the 1989 Tiananmen Square crackdown.

The findings reportedly show that Chinese AI tools were significantly more likely than Western counterparts to produce responses that align with CCP messaging. For example, DeepSeek’s model consistently praised “stability and social harmony” — standard rhetoric used by the Chinese government — when asked about controversial topics.

The memo also notes a trend of increasing censorship in newer versions of Chinese models, suggesting that developers are actively fine-tuning their systems to reflect state ideology more closely. Alibaba and DeepSeek both declined to respond to Reuters’ inquiries.

China has openly stated that its AI systems are designed to align with “core socialist values” and ensure national ideological security. In an email response, Chinese Embassy spokesperson Liu Pengyu said China is building an AI governance system that balances “development and security,” but he did not address the specific findings.

The U.S. may eventually release its evaluations publicly to draw attention to what officials view as an emerging risk: that widespread adoption of ideologically skewed AI could serve as a subtle form of global influence or propaganda.

This concern is not limited to China. U.S.-based AI systems have also faced criticism over political and ethical alignment. Elon Musk’s Grok AI model recently came under fire after it began posting antisemitic content and conspiracy theories on X (formerly Twitter), prompting an apology and a content review. On the heels of this controversy, X CEO Linda Yaccarino abruptly announced her resignation this week, though no formal reason was given.

As global AI competition intensifies, the ideological underpinnings of AI models — and their potential to shape public discourse — are becoming a flashpoint in the broader U.S.-China tech rivalry.

TikTok building U.S.-only app with separate algorithm and data systems

TikTok is developing a standalone U.S. version of its platform, complete with a distinct algorithm and data system, to comply with U.S. legislation that mandates the divestment of its American operations. The project, internally known as “M2,” aims to meet a September deadline and could clear the path for a potential sale of TikTok’s U.S. business, Reuters reports, citing employees with direct knowledge.

The move involves duplicating TikTok’s codebase — including AI models, algorithms, features, and U.S. user data — from its global app to an independent U.S.-specific version. It is TikTok’s most ambitious technical separation effort to date and would represent the deepest structural divide between ByteDance’s U.S. and international operations. The U.S.-only version would function much like Douyin, TikTok’s China-specific app, and would not be visible to users outside the U.S.

The initiative responds to the 2024 law requiring ByteDance to divest TikTok or face a ban, amid long-standing U.S. concerns about data privacy and national security. While content from the current app is expected to carry over, the new recommendation engine will be trained solely on U.S. user data. This is expected to shift content visibility toward American creators and possibly limit international reach for non-U.S. influencers.

Sources revealed that since January, TikTok has been removing non-U.S. user data from Oracle’s American data centers to comply with separation demands. Meanwhile, ByteDance has worked on splitting its algorithm’s codebase — a move it previously denied.

If the technical split is completed, U.S. operations would be managed independently of TikTok’s global team, although ByteDance engineers might remain involved on a limited basis. This has raised internal questions about whether the U.S. algorithm will retain its effectiveness without access to ByteDance’s global engineering expertise.

A potential sale would involve a joint venture including American investors such as Susquehanna International Group, General Atlantic, KKR, and possibly Oracle, along with new players like Blackstone and Andreessen Horowitz. ByteDance would retain a minority stake. However, Beijing’s approval remains uncertain due to China’s export restrictions on recommendation algorithms, a key concern in the stalled 2020 negotiations.

The separation effort is unfolding against a broader backdrop of U.S.-China trade tensions. While President Donald Trump said last week he would resume discussions with China over the sale, he admitted uncertainty over Beijing’s cooperation, adding, “I think the deal is good for China and it’s good for us.”