Articles

Italy Closes Probe Into DeepSeek After Commitments to Warn Users of AI “Hallucination” Risks

Italy’s antitrust authority has closed an investigation into Chinese artificial intelligence company DeepSeek after the firm agreed to binding commitments aimed at improving warnings about the risk of AI-generated false information.

The probe, launched last June by Italy’s antitrust and consumer protection authority AGCM, focused on allegations that DeepSeek failed to adequately inform users that its AI system could generate inaccurate, misleading, or fabricated content — commonly referred to as “hallucinations.”

The decision to end the investigation was announced in the AGCM’s weekly bulletin published on Monday. According to the regulator, the commitments were submitted by Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence, which jointly own and operate the DeepSeek platform.

The agreed measures include clearer and more prominent disclosures explaining that, depending on user inputs, the AI model may produce outputs containing incorrect or fabricated information. The AGCM said the new disclosures are designed to be more transparent, intelligible, and immediately visible to users.

“The commitments presented by DeepSeek make disclosures about the risk of hallucinations easier, more transparent, intelligible, and immediate,” the authority said in its bulletin.

The case highlights growing regulatory scrutiny across Europe over how AI systems communicate their limitations to users, particularly as generative AI tools become more widely adopted in consumer-facing applications.

Italy probes Revolut over alleged unfair practices in investment services

Italy’s competition authority (AGCM) has launched an investigation into British fintech giant Revolut, focusing on allegations of unfair commercial practices related to its investment and banking services. The watchdog claims Revolut misled users by promoting zero-commission investments without clearly disclosing additional costs and limitations.

According to the AGCM, Revolut failed to inform customers that its zero-fee products involve fractional shares, which differ significantly from whole shares in terms of voting and transfer rights. The regulator also said Revolut did not clearly warn crypto-asset investors that stop-loss and take-profit settings—tools typically used to manage investment risk—could not be modified once set.

The AGCM further accused Revolut of aggressive conduct, suspending and blocking financial accounts without sufficient notice or assistance and restricting customers’ access to their funds and related services for prolonged periods.

Inspections were carried out at Revolut Bank UAB’s Italian offices by AGCM and Italy’s finance police. Revolut said it is fully cooperating with the probe and remains committed to compliance and customer protection but declined to comment on specific details as the investigation is ongoing.

Revolut, valued at $45 billion last year, is one of the most successful European digital-only fintechs. The company aims to expand into mortgages and consumer lending to compete with traditional banks and grow its presence in the U.S.

Under Italian law, violations of consumer rights can result in fines ranging from €5,000 to €10 million.

Italy Probes Chinese AI Firm DeepSeek Over Misinformation Risks

Italy’s antitrust and consumer protection agency AGCM announced Monday it has launched a formal investigation into Chinese artificial intelligence startup DeepSeek, alleging the company failed to clearly warn users about the potential for its chatbot to generate false or misleading information.

The regulator stated that DeepSeek’s platform does not provide “sufficiently clear, immediate and intelligible” alerts about the risk of AI-generated “hallucinations” — a term used in the AI field to describe instances when models produce inaccurate or completely fabricated information in response to user prompts.

AGCM is focusing on the consumer rights aspect, emphasizing the risk users might unknowingly rely on erroneous AI outputs due to insufficient warning or transparency.

DeepSeek did not immediately respond to requests for comment on the investigation.

This marks the second run-in DeepSeek has had with Italian authorities this year. In February, the country’s data protection regulator ordered the startup to suspend access to its chatbot within Italy after the company failed to resolve concerns related to its privacy policy.

The probe highlights increasing regulatory scrutiny over generative AI models in Europe, particularly regarding transparency, data protection, and consumer rights.