U.S. evaluates Chinese AI for ideological alignment with Communist Party
The U.S. government has quietly launched a program to assess Chinese AI models for ideological bias, particularly their alignment with the Chinese Communist Party’s (CCP) official narratives, according to an internal memo reviewed by Reuters. The joint effort by the State and Commerce Departments involves feeding standardized questions in Chinese and English to Chinese-developed language models and grading their responses for signs of political conformity and censorship.
This marks the first known formal U.S. attempt to systematically evaluate the political alignment of foreign AI tools. The memo shows that AI systems like Alibaba’s Qwen 3 and DeepSeek’s R1 were among those tested. Analysts measured how directly the models addressed sensitive queries and whether their answers echoed Beijing’s stances — such as support for China’s South China Sea claims or avoiding discussion of the 1989 Tiananmen Square crackdown.
The findings reportedly show that Chinese AI tools were significantly more likely than Western counterparts to produce responses that align with CCP messaging. For example, DeepSeek’s model consistently praised “stability and social harmony” — standard rhetoric used by the Chinese government — when asked about controversial topics.
The memo also notes a trend of increasing censorship in newer versions of Chinese models, suggesting that developers are actively fine-tuning their systems to hew more closely to state ideology. Alibaba and DeepSeek did not respond to Reuters’ inquiries.
China has openly stated that its AI systems are designed to align with “core socialist values” and ensure national ideological security. In an email response, Chinese Embassy spokesperson Liu Pengyu said China is building an AI governance system that balances “development and security,” but he did not address the specific findings.
The U.S. may eventually release its evaluations publicly to draw attention to what officials view as an emerging risk: that widespread adoption of ideologically skewed AI could serve as a subtle form of global influence or propaganda.
This concern is not limited to China. U.S.-based AI systems have also drawn criticism over their political and ethical alignment. Elon Musk’s Grok AI model recently came under fire after it began posting antisemitic content and conspiracy theories on X (formerly Twitter), prompting an apology and a content review. On the heels of this controversy, X CEO Linda Yaccarino abruptly announced her resignation this week, though no formal reason was given.
As global AI competition intensifies, the ideological underpinnings of AI models — and their potential to shape public discourse — are becoming a flashpoint in the broader U.S.-China tech rivalry.