The European Union is increasing its scrutiny of major platforms over the risks associated with generative artificial intelligence (GenAI) ahead of elections.

The European Commission has issued formal requests for information (RFI) to several major tech companies, including Google, Meta (formerly Facebook), Microsoft, Snap, TikTok, and X (formerly Twitter), regarding their handling of risks associated with the use of generative AI technology.

These requests are made under the Digital Services Act (DSA), the EU's updated framework for ecommerce and online governance. The companies in question are designated as very large online platforms (VLOPs) under the DSA, requiring them to assess and mitigate systemic risks, including those related to generative AI.

The Commission is seeking more information from these platforms about their measures for mitigating risks associated with generative AI, such as so-called hallucinations (where AI generates false information), the viral dissemination of deepfakes, and the automated manipulation of services that can mislead voters. The Commission is also interested in understanding the impact of generative AI on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being.


Stress tests are also planned after Easter to assess the platforms’ readiness to handle generative AI risks, particularly in relation to potential political deepfakes ahead of the June European Parliament elections.

Election security has been identified as a priority area for enforcement by the EU, which oversees VLOPs' compliance with the DSA rules. The Commission is currently consulting on election security rules for VLOPs and is working to produce formal guidance in this area.