Italy Closes Probe Into DeepSeek After Commitments to Warn Users of AI “Hallucination” Risks
Italy’s antitrust authority has closed an investigation into Chinese artificial intelligence company DeepSeek after the firm agreed to binding commitments aimed at improving warnings about the risk of AI-generated false information.
The probe, launched last June by the regulator, AGCM, which oversees both antitrust and consumer protection matters, focused on allegations that DeepSeek failed to adequately inform users that its AI system could generate inaccurate, misleading, or fabricated content — commonly referred to as “hallucinations.”
The decision to end the investigation was announced in the AGCM’s weekly bulletin published on Monday. According to the regulator, the commitments were submitted by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, which jointly own and operate the DeepSeek platform.
The agreed measures include clearer and more prominent disclosures explaining that, depending on user inputs, the AI model may produce outputs containing incorrect or invented information. The AGCM said the new disclosures are designed to be more transparent, intelligible, and immediately visible to users.
“The commitments presented by DeepSeek make disclosures about the risk of hallucinations easier, more transparent, intelligible, and immediate,” the authority said in its bulletin.
The case highlights growing regulatory scrutiny across Europe over how AI systems communicate their limitations to users, particularly as generative AI tools become more widely adopted in consumer-facing applications.