India Proposes Tough AI Labelling Rules to Curb Deepfakes and Misinformation

India’s government has unveiled draft regulations requiring artificial intelligence and social media platforms to clearly label AI-generated content, in a sweeping effort to combat deepfakes and misinformation amid rising concerns over the technology’s misuse.

The proposed rules, released Wednesday by the Ministry of Electronics and Information Technology, would compel companies such as OpenAI, Google, Meta, and X to include visible AI markers covering at least 10% of a video or image’s surface area, or the first 10% of an audio clip’s duration, to indicate that the material was artificially created.
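Under the draft rule as described, the minimum label footprint is simple arithmetic. The sketch below illustrates that calculation; the function name and the interpretation of the 10% threshold are assumptions for illustration, not language from the ministry's draft:

```python
def min_label_requirements(media_type, width=None, height=None, duration_s=None):
    """Compute the minimum AI-label footprint under the draft 10% rule.

    Images/videos: the marker must cover at least 10% of the frame's area.
    Audio: the marker must span the first 10% of the clip's duration.
    (Illustrative only; the draft's exact measurement method is unspecified.)
    """
    if media_type in ("image", "video"):
        return {"min_label_area_px": 0.10 * width * height}
    if media_type == "audio":
        return {"label_span_s": (0.0, 0.10 * duration_s)}
    raise ValueError(f"unsupported media type: {media_type}")

# A 1920x1080 frame would need a marker covering at least 207,360 px².
print(min_label_requirements("video", width=1920, height=1080))
# A 60-second audio clip would carry the marker for its first 6 seconds.
print(min_label_requirements("audio", duration_s=60))
```

In practice, platforms would also need to decide how the 10% is measured (per frame, per aspect ratio, or as an on-screen overlay), which the draft reportedly leaves to implementation.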

India — home to nearly 1 billion internet users and already divided along ethnic and religious lines — has seen an explosion of AI-generated deepfakes and false information, particularly during elections. Officials warn that manipulated videos and fake news could incite violence and erode public trust.

Under the proposal, platforms must also ask users to declare whether their uploads are AI-generated and introduce technical safeguards to verify authenticity. The ministry said the rules aim to ensure “visible labelling, metadata traceability, and transparency for all public-facing AI media.”

The government cited a growing threat from generative AI tools capable of impersonating individuals, spreading propaganda, or manipulating elections. “The potential for harm has grown significantly,” it said in a statement inviting public and industry feedback by November 6.

Legal experts noted that the new labelling rule is one of the first in the world to set a quantifiable visibility standard. Dhruv Garg, founding partner of the Indian Governance and Policy Project, said it would require AI platforms to develop automated detection and tagging systems that identify synthetic content at the moment of creation.

The issue has already reached India’s courts. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan recently sued to block AI-generated videos that use their likenesses, and have separately challenged YouTube’s AI training policies.

India’s fast-growing digital landscape has made it a major market for AI firms. OpenAI CEO Sam Altman said in February that the country is the company’s second-largest market by user numbers, which have tripled in the past year.

AI Automation Startup UnifyApps Raises $50 Million, Names Sprinklr Founder as Co-CEO

UnifyApps, an AI automation startup that integrates enterprise systems to streamline routine business processes, has raised $50 million in a Series B funding round led by WestBridge Capital and appointed Sprinklr founder Ragy Thomas as its new chairman and co-CEO.

The fresh funding values the company at around $250 million, according to a source familiar with the matter. Investors including ICONIQ Capital also joined the round, bringing UnifyApps’ total funding to about $81 million since its launch in 2023.

Positioning itself as an “enterprise operating system for AI,” UnifyApps connects corporate software platforms such as Salesforce and Workday to large language models, helping businesses automate repetitive tasks like HR workflows, claims processing, and supply chain management.

Clients include Lowe’s, HDFC Bank, and Deutsche Telekom, which use UnifyApps’ technology to boost efficiency across departments. The company reported a sevenfold increase in annual revenue, though it did not disclose figures.

Thomas, who built Sprinklr into a billion-dollar customer experience firm, said UnifyApps’ edge lies in being purpose-built for AI—unlike older automation players such as UiPath and Automation Anywhere, which are retrofitting legacy platforms to include AI features. “We’re not layering AI on top of old systems—we’re rethinking the operating model around it,” he told Reuters.

Co-founder Pavitar Singh will continue to serve as co-CEO. The company plans to use the new funds to expand its 400-person workforce by over 100 employees, enhance its AI platform, and strengthen its presence in Europe.

The surge of investment reflects growing demand for enterprise AI integration tools, even as research from MIT shows that 95% of corporate AI projects have yet to deliver meaningful returns—underscoring the difficulty of translating hype into productivity.

Iraq Bans Roblox Over Child Safety and Moral Concerns

The Iraqi government has announced a nationwide ban on the U.S.-based gaming platform Roblox (RBLX.O), citing child safety and moral concerns, as part of a wider crackdown across the Middle East on online games and virtual worlds.

Officials said the decision followed a comprehensive government study and field monitoring, which found that Roblox enabled direct communication between users — a feature they claimed exposed children and adolescents to online exploitation, cyber-extortion, and harmful behavior. The government also said the game’s content was “incompatible with Iraq’s social values and traditions.”

Roblox Corporation responded that safety was its top priority and expressed interest in working with Iraqi authorities to restore access. “We strongly contest recent claims made by the Iraqi authorities, which we believe are based on an outdated understanding of our platform,” a company spokesperson said.

The spokesperson added that Roblox had already suspended certain communication features, such as in-game chat, in Arabic-speaking regions, including Iraq, earlier this year as part of ongoing safety updates.

The Iraqi Ministry of Communications stated that the platform “involves several security, social, and behavioral risks,” emphasizing that the move was taken to protect young users.

The ban aligns Iraq with other Middle Eastern nations that have tightened regulation of digital entertainment platforms. In August 2024, Turkey similarly blocked access to Roblox, citing risks of child exploitation and abuse.

Analysts say the decision reflects a broader regional effort to regulate online gaming and interactive media, balancing youth protection with the growing popularity of global virtual platforms.