Canada Hits Crypto Firm Xeltox with Record C$176.9 Million Fine for Money Laundering

Canada’s anti-money laundering watchdog FINTRAC has imposed a record C$176.9 million ($126 million) penalty on Xeltox Enterprises Limited, citing the company’s failure to report suspicious transactions linked to child sexual abuse material, fraud, ransomware, and sanctions evasion.

The fine marks the largest enforcement action in FINTRAC’s history and underscores Ottawa’s intensifying crackdown on financial crime in the crypto industry.

Xeltox, also known as Cryptomus and previously operating as Certa Payments Limited, is registered as a money services business in British Columbia. The company could not be reached for comment.

“Given that numerous violations in this case were connected to trafficking in child sexual abuse material, fraud, ransomware payments and sanctions evasion, FINTRAC was compelled to take this unprecedented enforcement action,” the agency said in a statement.

FINTRAC said Xeltox repeatedly failed to submit suspicious transaction reports when there were reasonable grounds to suspect links to criminal activity. The firm also did not report receipts of over C$10,000 in virtual currency as required under Canadian law.

The announcement comes amid a broader national push to combat money laundering. Earlier this week, the federal government unveiled plans for a new agency focused on fraud prevention, anti-money laundering efforts, and asset recovery.

Canada will also undergo an audit by the Financial Action Task Force (FATF) next month, a key global body assessing compliance with international standards on financial crime.

Just last month, FINTRAC issued a C$19.6 million penalty against Peken Global Limited, operator of the KuCoin crypto exchange, which had been the largest fine until now. KuCoin has appealed, calling the sanction “excessive and punitive.”

The new penalty against Xeltox signals that Canadian regulators are escalating their enforcement stance, targeting crypto intermediaries that fail to meet anti-money laundering and counter-terrorism financing obligations.

Australia Orders AI Chatbot Firms to Reveal Child Protection Measures

Australia’s internet regulator has ordered four AI chatbot companies to disclose what steps they are taking to protect children from harmful and sexual content, in the country’s latest move to tighten oversight of artificial intelligence.

The eSafety Commissioner said it sent legal notices to Character Technologies — the creator of the celebrity chatbot platform Character.ai — along with Glimpse.AI, Chai Research, and Chub AI, demanding detailed reports on how they prevent child sexual exploitation, exposure to pornography, and content promoting suicide or eating disorders.

“There can be a darker side to some of these services,” said Commissioner Julie Inman Grant, warning that many chatbots can engage in sexually explicit conversations with minors and even encourage self-harm or disordered eating.

Under Australia’s Online Safety Act, the regulator can compel companies to disclose their internal safety protocols or face fines of up to A$825,000 ($536,000) per day.

The crackdown follows growing concern about AI companions forming emotional or sexual bonds with teenagers. Some Australian schools have reported students as young as 13 spending more than five hours daily interacting with chatbots, sometimes in explicit exchanges.

The most prominent firm targeted, Character.ai, faces a lawsuit in the U.S. after a mother alleged her 14-year-old son died by suicide following interactions with an AI companion. The company has denied wrongdoing, saying it added pop-up safety warnings and links to suicide prevention hotlines for users expressing self-harm thoughts.

The eSafety office said it did not include OpenAI in this round of inquiries, as ChatGPT is covered under a separate industry code that takes effect in March 2026.

Australia, already known for its strict digital regulation, will introduce new rules in December requiring social media firms to block or deactivate accounts of users under 16 or risk penalties of up to A$49.5 million.

The move positions Australia at the forefront of AI child safety regulation, as governments worldwide race to address the unintended dangers of increasingly lifelike AI companions.

Metagenomi Uses Amazon’s AI Chips to Power Next-Gen Gene Editing

Biotech company Metagenomi (MGX.O) has begun using Amazon Web Services’ custom AI chips to accelerate the discovery of new gene-editing technologies, marking one of the first major biotech applications of Amazon’s in-house silicon beyond large language models and chatbots.

The Emeryville, California-based firm, which is developing tools to deliver gene therapies directly into human cells, said AWS Inferentia chips have given it a major cost advantage over Nvidia’s AI hardware, cutting computational expenses by about half while maintaining comparable performance.

Metagenomi’s approach relies heavily on artificial intelligence to design and test enzymes capable of safely editing DNA. The company scans nature for rare proteins that might serve as effective delivery vehicles for genetic material and then uses AI to generate millions of variants in search of the most effective designs.

“We generated over a million different proteins from a rare class of enzymes used in gene editing,” said Chris Brown, Metagenomi’s head of discovery. “It was a clear cost advantage to use the Inferentia platform. Unless you cast a broad enough net early, you risk missing key breakthroughs entirely.”

Amazon’s Inferentia chips, first introduced in 2019 to enhance the AI capabilities of its Alexa virtual assistant, are now being adopted in industries beyond software, with biotechnology emerging as a new frontier for custom AI silicon.

By applying cloud-based AI to the complex problem of gene delivery and editing, Metagenomi hopes to make treatments for genetic disorders faster and more affordable, while demonstrating how custom AI infrastructure can accelerate scientific discovery.