Law Firm Dechert Says Lawsuits Over Alleged Use of Hired Hackers Have Been Resolved

Philadelphia-based law firm Dechert announced on Thursday that two U.S. lawsuits accusing it of employing hired hackers to gain courtroom advantages have been resolved without any admission of liability.

The lawsuits stem from claims made by aviation executive Farhad Azima, who in 2022 filed suit in federal court in Manhattan against Dechert, U.S. public relations professionals, and a private investigator. Azima alleged they orchestrated the hacking and leaking of his emails. A related lawsuit was also filed in North Carolina against private investigator Nicholas Del Rosso with similar allegations.

While Dechert had settled with Azima last year, proceedings against other defendants—including Israeli private investigator Amit Forlit, lawyer Amir Handjani, and New York PR firm Karv Communications—continued until recently. Legal documents indicate that motions to dismiss both the New York and North Carolina lawsuits with prejudice were filed late Wednesday.

Azima expressed satisfaction with the outcome, stating, “I am thrilled and feel vindicated.” However, none of the parties disclosed the terms of the resolution or whether any new settlements were reached.

Dechert, Handjani, Karv, and Karv’s president Andrew Frank released identical statements confirming that all claims have been resolved without any admission of liability. Representatives for Del Rosso and Forlit did not respond to requests for comment.

Azima was previously found liable for fraud by a London court in 2020, a case heavily influenced by leaked private emails. He later accused Dechert—then representing a Middle Eastern investment fund involved in the case—of facilitating the email leaks. Following a Reuters investigation into email hacking linked to court cases, Azima successfully had his UK judgments overturned.

Forlit, whom Azima accused of being a key conspirator, is currently contesting extradition to the U.S. on separate cybercrime charges and has denied involvement in hacking.

Florida Attorney General Investigates Robinhood Crypto Over Low-Cost Trading Claims

Florida Attorney General James Uthmeier has initiated an investigation into Robinhood Crypto, scrutinizing whether the platform misled users by advertising itself as the cheapest option for buying cryptocurrencies. The AG’s office announced on Thursday that it has issued a subpoena to Robinhood Crypto, a division of Robinhood Markets, seeking internal documents related to potential breaches of Florida’s Deceptive and Unfair Trade Practices Act.

Uthmeier emphasized the need for transparency in cryptocurrency transactions, stating, “When consumers buy and sell crypto assets, they deserve transparency in their transactions.” He added that Robinhood’s longstanding claim of being the “best bargain” appears to be deceptive.

Robinhood allows customers to trade stocks and cryptocurrencies without charging direct commissions. Instead, the company earns revenue by routing orders to third-party firms that pay Robinhood, a practice known as payment for order flow (PFOF).

In response, Robinhood’s General Counsel Lucas Moskowitz said the company provides clear pricing information throughout the trading process, including details on spreads, fees, and Robinhood’s revenue from trades. He defended Robinhood’s position as a platform offering “crypto trading at the lowest cost on average.”

EU Unveils Draft AI Code of Practice Focusing on Copyright and Safety for Companies

The European Commission revealed a draft code of practice on Thursday aimed at helping companies comply with the European Union’s evolving artificial intelligence regulations. The voluntary code emphasizes safeguarding copyright-protected content and implementing measures to reduce systemic risks linked to AI technologies.

Developed by 13 independent experts, the code is part of the broader EU AI regulatory framework. While signing up is optional, companies that do not join will miss out on the legal certainty offered to adherents. The rules will apply to major AI providers including Alphabet (Google), Meta (Facebook), OpenAI, Anthropic, Mistral, and others.

Under the code, signatories must publish summaries detailing the data sources used to train their general-purpose AI models. They are required to ensure that copyright-protected materials are only used appropriately, especially when employing web crawlers, and must take steps to prevent outputs that infringe copyright.

To address systemic risks, companies will also need to establish frameworks to identify and analyze potential hazards. While transparency and copyright guidelines apply to all general-purpose AI providers, specific safety and security provisions target providers of advanced models like OpenAI’s ChatGPT, Meta’s Llama, Google’s Gemini, and Anthropic’s Claude.

The EU’s AI Act, effective since last June, imposes strict transparency rules on high-risk AI systems and lighter obligations for general-purpose AI models. It also sets curbs on the use of AI in areas such as law enforcement and security. The new rules for large language models will become legally binding on August 2, with enforcement beginning a year later for new models; existing models will have until August 2, 2027, to comply.

Henna Virkkunen, the EU’s technology commissioner, encouraged AI stakeholders to adopt the code, highlighting its collaborative design and its role in simplifying compliance with the EU AI Act. The code’s final approval by EU member states and the Commission is expected by the end of the year.