EU Announces Guidelines to Prevent AI Misuse by Employers, Websites, and Police

The European Commission unveiled new guidelines on Tuesday aimed at curbing the misuse of artificial intelligence (AI) in various sectors, including employment, online services, and law enforcement. As part of the European Union’s broader AI regulations, the guidelines prohibit practices such as using AI to track employees’ emotions or to manipulate consumers into spending money online.

The guidelines accompany the EU’s Artificial Intelligence Act, which has been legally binding since last year but will not be fully enforceable until August 2, 2026. Some provisions take effect earlier: the ban on deceptive AI practices has applied since February 2 this year.

Prohibited practices under the guidelines include using AI to create “dark patterns” on websites that manipulate users into making financial commitments, as well as AI applications that exploit individuals based on factors such as age, disability, or socio-economic status. Social scoring systems that categorize individuals using personal data, such as race or origin, are also banned, as is police use of biometric data to predict criminal behavior without proper verification.

Employers are also restricted from using surveillance tools like webcams or voice recognition systems to monitor employees’ emotions. The guidelines further prohibit the use of mobile CCTV cameras equipped with facial recognition for law enforcement, except under strict conditions with safeguards in place.

The EU has given member countries until August 2 to designate market surveillance authorities to enforce these AI rules. Companies found in violation could face hefty fines ranging from 1.5% to 7% of their global annual revenue. This comprehensive regulatory framework contrasts with the United States’ voluntary compliance approach and China’s focus on maintaining social stability through state-controlled AI.