Articles

Claim That Any Phone Can Be Tracked via Google Maps by Email Is False

A viral claim that anyone can locate a mobile phone simply by emailing Google with the target's phone number is inaccurate and misleading, cybersecurity experts say.

Posts circulating online allege that sending an email through Gmail to a specific address can trigger Google Maps to reveal a device’s location, even without internet access. Google does not offer any such service, and there is no official mechanism that allows location tracking of a phone solely via an email request or partial phone number.

Legitimate phone-tracking tools require explicit user consent and account access, such as Google’s “Find My Device” for Android or Apple’s “Find My” for iPhone. These services work only when the user is signed in to their account and location access is enabled on the device.

Security specialists warn that messages promoting email-based tracking may be linked to scams or data-harvesting attempts. Users who follow such instructions could expose personal information without gaining any real tracking capability.

Authorities and privacy advocates stress that tracking a phone without permission is illegal in many countries. Users are advised to rely only on official tools provided by device makers and to report misleading claims that promise effortless or universal phone tracking.

Denmark Moves to Ban AI Deepfakes, Giving Citizens Copyright Over Their Own Likeness

Denmark is preparing to pass one of the world’s toughest laws against AI-generated deepfakes, aiming to give citizens new legal rights over their appearance, voice, and likeness online. The bill — expected to pass early next year — would make it illegal to share or distribute deepfake content without a person’s consent, extending copyright protections to individuals.

The proposed legislation follows growing concern about the rapid spread of deepfakes — hyper-realistic AI-generated videos, images, or audio that impersonate real people. Danish Culture Minister Jakob Engel-Schmidt said the move is essential to protect both private citizens and democracy itself, warning that political deepfakes could “undermine our democracy” by spreading falsehoods.

Under the new law, Danes would be able to demand takedowns of AI-generated content that misuses their likeness, while parody and satire would remain protected. Major tech platforms that fail to remove harmful deepfakes could face significant fines, although individuals are unlikely to face criminal penalties.

Experts have praised the move as a landmark step. “When people ask, ‘what can I do to protect myself from being deepfaked,’ the answer right now is basically nothing,” said Henry Ajder, a generative AI researcher and founder of Latent Space Advisory. “Denmark is one of the first governments to change that.”

The Danish proposal mirrors similar measures abroad. The United States recently criminalized the sharing of non-consensual intimate deepfakes, while South Korea introduced harsh penalties for deepfake pornography. Denmark’s initiative could now influence European Union policy, with France and Ireland reportedly showing interest in adopting similar laws.

For victims like Marie Watson, a Danish video game streamer whose photos were digitally altered and shared online, the legislation comes too late to undo the damage but offers hope for future protection. “When it’s online, you’re done. You can’t do anything,” she said. “It’s out of your control.”

India Proposes Tough AI Labelling Rules to Curb Deepfakes and Misinformation

India’s government has unveiled draft regulations requiring artificial intelligence and social media platforms to clearly label AI-generated content, in a sweeping effort to combat deepfakes and misinformation amid rising concerns over the technology’s misuse.

The proposed rules, released Wednesday by the Ministry of Electronics and Information Technology, would compel companies such as OpenAI, Google, Meta, and X to include visible AI markers covering at least 10% of a video or image’s surface area, or the first 10% of an audio clip’s duration, to indicate that the material was artificially created.
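To illustrate what a quantifiable visibility threshold like this implies in practice, here is a minimal sketch of how a platform might check compliance. The function names, the rectangular-label assumption, and the 10% default are our illustration of the reported figures, not an implementation described in the draft rules:

```python
def label_area_ok(image_w, image_h, label_w, label_h, min_fraction=0.10):
    """Check whether a rectangular AI marker covers at least the
    required fraction of an image or video frame's surface area."""
    return (label_w * label_h) >= min_fraction * (image_w * image_h)

def audio_label_seconds(clip_seconds, min_fraction=0.10):
    """Length of the leading segment of an audio clip that would
    need to carry the AI disclosure under a 'first 10%' rule."""
    return clip_seconds * min_fraction

# A 640x360 label on a 1920x1080 frame: 640*360 = 230,400 pixels,
# versus the 10% threshold of 1920*1080 = 207,360 pixels.
print(label_area_ok(1920, 1080, 640, 360))  # True
print(audio_label_seconds(120))             # 12.0
```

Under this reading, a two-minute audio clip would need its first 12 seconds labelled; how the rules would measure overlaid or semi-transparent markers is not specified in the draft.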

India — home to nearly 1 billion internet users — has faced an explosion of AI-generated deepfakes and false information, particularly during elections, in a country already divided along ethnic and religious lines. Officials warn that manipulated videos and fake news could incite violence and erode public trust.

Under the proposal, platforms must also ask users to declare whether their uploads are AI-generated and introduce technical safeguards to verify authenticity. The ministry said the rules aim to ensure “visible labelling, metadata traceability, and transparency for all public-facing AI media.”

The government cited a growing threat from generative AI tools capable of impersonating individuals, spreading propaganda, or manipulating elections. “The potential for harm has grown significantly,” it said in a statement inviting public and industry feedback by November 6.

Legal experts noted that the new labelling rule is one of the first in the world to set a quantifiable visibility standard. Dhruv Garg, founding partner of the Indian Governance and Policy Project, said it would require AI platforms to develop automated detection and tagging systems that identify synthetic content at the moment of creation.

The issue has already reached India’s courts. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan recently sued to block AI-generated videos using their likenesses, while challenging YouTube’s AI training policies.

India’s fast-growing digital landscape has made it a major market for AI firms. OpenAI CEO Sam Altman said in February that the country is the company’s second-largest market by user numbers, which have tripled in the past year.