Key Points
- Italy’s antitrust regulator (AGCM) has closed its investigation into the AI system DeepSeek.
- The investigation was about whether DeepSeek adequately warned users about the risk of “hallucinations” or false information.
- DeepSeek agreed to binding commitments to make its disclosures more transparent and immediate.
- The AGCM accepted the commitments and closed the case without imposing a fine or other penalty.
Italy’s antitrust authority, known as the AGCM, has closed its investigation into the Chinese AI system DeepSeek. The regulator had been examining whether the company adequately warned users that its AI can sometimes generate misinformation, and resolved the case after DeepSeek agreed to change how it discloses that risk.
The investigation began last June. The Italian watchdog was concerned that DeepSeek was not being clear about the risk of “hallucinations” — the term for when an AI model, in response to a user’s prompt, generates information that is inaccurate, misleading, or entirely fabricated.
To avoid a protracted legal battle, the two Chinese companies that own and operate DeepSeek proposed a set of “binding commitments”: a package of measures designed to make clearer to users that the AI’s output may not always be accurate.
The AGCM said in its weekly bulletin that it was satisfied with the proposal. “The commitments presented by DeepSeek make disclosures about the risk of hallucinations easier, more transparent, intelligible, and immediate,” the bulletin stated.
The outcome is notable as governments worldwide grapple with how to regulate the fast-moving AI industry. By requiring DeepSeek to be more transparent about its limitations, the Italian regulator sets a precedent that could influence how other countries address AI-generated misinformation.
For now, DeepSeek has avoided a fine or other penalty, but it will have to follow through on its commitments to keep the Italian authorities satisfied.