Italy Probes Chinese AI Firm DeepSeek Over Misinformation Risks
Italy’s antitrust and consumer protection agency AGCM announced Monday it has launched a formal investigation into Chinese artificial intelligence startup DeepSeek, alleging the company failed to clearly warn users about the potential for its chatbot to generate false or misleading information.
The regulator stated that DeepSeek’s platform does not provide “sufficiently clear, immediate and intelligible” alerts about the risk of AI-generated “hallucinations” — a term used in the AI field to describe instances when models produce inaccurate or completely fabricated information in response to user prompts.
AGCM's probe centers on consumer rights, emphasizing the risk that users may unknowingly rely on erroneous AI outputs because of insufficient warnings or transparency.
DeepSeek did not immediately respond to requests for comment on the investigation.
This marks DeepSeek's second run-in with Italian authorities this year. In February, the country's data protection regulator ordered the startup to suspend access to its chatbot within Italy after the company failed to resolve concerns about its privacy policy.
The probe highlights increasing regulatory scrutiny over generative AI models in Europe, particularly regarding transparency, data protection, and consumer rights.

