Huawei unveils censorship-optimized DeepSeek model with Zhejiang University

Huawei announced it has co-developed a new safety-focused version of DeepSeek's model, trained to block politically sensitive or harmful content in line with Chinese government regulations requiring AI to reflect “socialist values.” The model, named DeepSeek-R1-Safe, was trained on 1,000 Huawei Ascend AI chips in partnership with Zhejiang University, though DeepSeek itself and its founder Liang Wenfeng were not directly involved.

The move underscores how China’s AI industry is embracing and modifying DeepSeek’s open-source R1 model, which stunned global markets earlier this year with its sophistication and low reported training costs. Chinese companies like Baidu have already built strict filtering into their AI chatbots, such as Ernie Bot, to avoid sensitive political topics.

Huawei claimed DeepSeek-R1-Safe achieved “nearly 100% success” in blocking toxic content, politically sensitive discussions, and incitement to illegal activities. However, that success rate fell to 40% when restrictions were evaded through disguised prompts, such as role-play scenarios or coded inputs. Overall, its comprehensive defense rate averaged 83%, outperforming rivals such as Alibaba’s Qwen-235B and the larger DeepSeek-R1-671B by 8–15 percentage points.

Huawei said the new model maintained strong general performance despite the added safety layers, degrading by less than 1% relative to the original DeepSeek-R1.

The launch comes during Huawei’s annual Connect conference in Shanghai, where the company also revealed detailed chipmaking roadmaps and new computing power initiatives—part of China’s broader effort to reduce reliance on U.S. technologies while aligning AI systems with domestic political and social controls.