Articles

UK Judge Warns Lawyers Against Using AI to Cite Fake Cases, Threatens Sanctions

London’s High Court issued a stern warning on Friday that lawyers who rely on artificial intelligence to cite fabricated or non-existent legal cases risk being held in contempt of court or facing criminal charges. The caution comes amid growing concern that generative AI tools such as ChatGPT can lead legal professionals astray.

In two recent cases, Judge Victoria Sharp condemned lawyers who relied on AI-generated arguments containing fake case law. She urged legal regulators and industry leaders to take stronger action to ensure lawyers understand their ethical duties regarding AI use.

“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” Judge Sharp said in her written ruling. She stressed the need for practical, effective measures from those responsible for legal regulation and leadership within the profession.

Since generative AI tools became widely accessible roughly two years ago, lawyers around the world have faced scrutiny for citing false authorities in court. Sharp emphasized that citing non-existent cases breaches a lawyer’s duty not to mislead the court, a breach that can amount to contempt of court.

In the most severe instances, deliberately submitting false information with intent to disrupt justice could constitute the criminal offence of perverting the course of justice, she warned.

While legal regulators and the judiciary have issued guidance on AI use by lawyers, Judge Sharp said guidance alone is insufficient to curb misuse and called for stronger enforcement and leadership.

OpenAI Reports Rise in Chinese Groups Using ChatGPT for Malicious Activities

OpenAI disclosed in a report released Thursday that it has detected a growing number of Chinese-linked groups leveraging its AI technology, including ChatGPT, for covert and malicious operations. Although these operations have expanded in scope and adopted new tactics, OpenAI noted they remain generally small in scale and target limited audiences.

Since ChatGPT launched in late 2022, generative AI tools have raised concerns about misuse, including the rapid creation of human-like text, images, and audio that can be weaponized for misinformation, hacking, or social manipulation. OpenAI regularly monitors its platform for such harmful usage and publishes its findings.

Among the examples cited by OpenAI:

  • Accounts generating politically charged social media posts related to China, including critiques of a Taiwan-centric video game, false claims against a Pakistani activist, and content about the closure of USAID. Some posts also criticized U.S. President Donald Trump’s tariffs with messages such as “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”

  • Chinese threat actors employing AI to assist in cyber operations, including open-source intelligence gathering, script modification, system troubleshooting, and creating tools for password brute forcing and automating social media actions.

  • Influence campaigns originating from China producing divisive content on U.S. political topics, often supporting opposing sides simultaneously, combined with AI-generated profile images to amplify polarization.

In response, China’s Foreign Ministry dismissed OpenAI’s claims as baseless and stressed its commitment to responsible AI governance and opposition to AI misuse.

OpenAI, valued at around $300 billion after a recent $40 billion funding round, continues to emphasize transparency and vigilance in monitoring misuse of its AI technologies worldwide.

EU Announces Guidelines to Prevent AI Misuse by Employers, Websites, and Police

The European Commission unveiled new guidelines on Tuesday aimed at curbing the misuse of artificial intelligence (AI) in various sectors, including employment, online services, and law enforcement. As part of the European Union’s broader AI regulations, the guidelines prohibit practices such as using AI to track employees’ emotions or to manipulate consumers into spending money online.

The guidelines accompany the EU’s Artificial Intelligence Act, which has been legally binding since last year but will not be fully enforceable until August 2, 2026. Some provisions take effect earlier; the ban on deceptive AI practices, for example, has applied since February 2 of this year.

Prohibited practices under the guidelines include using AI to create “dark patterns” that manipulate website users into making financial commitments, as well as AI applications that exploit individuals based on factors such as age, disability, or socio-economic status. Social scoring systems that use personal data, such as race or origin, to categorize individuals are also banned, as is the use of biometric data by police to predict criminal behavior without proper verification.

Employers are also restricted from using surveillance tools like webcams or voice recognition systems to monitor employees’ emotions. The guidelines further prohibit the use of mobile CCTV cameras equipped with facial recognition for law enforcement, except under strict conditions with safeguards in place.

The EU has given member countries until August 2 to designate market surveillance authorities to enforce these AI rules. Companies found in violation could face hefty fines ranging from 1.5% to 7% of their global revenue. This comprehensive regulatory framework contrasts with the United States’ voluntary compliance approach and China’s focus on maintaining social stability through state-controlled AI.