Articles

US senators unveil bill to curb scam ads on social media platforms

Two U.S. senators have introduced bipartisan legislation aimed at forcing social media platforms to take greater responsibility for fraudulent advertising. Senators Ruben Gallego and Bernie Moreno said the proposed Safeguarding Consumers from Advertising Misconduct Act, or SCAM Act, would require platforms to take “reasonable steps” to prevent scam ads or face enforcement by the Federal Trade Commission and state attorneys general.

The bill would mandate verification of advertisers’ identities or the legal existence of businesses, and require platforms to quickly review and act on reports of fraudulent ads. Supporters say social media companies have become a major conduit for online scams by relaxing advertiser checks to protect ad revenues.

The proposal follows a Reuters investigation that cited internal documents at Meta Platforms estimating that scam and illicit ads could account for a significant share of revenue. Meta has disputed those figures and said it actively combats fraud. The legislation is backed by the American Bankers Association and consumer groups such as AARP, and would allow state authorities to bring civil action against non-compliant platforms.

India’s top court questions WhatsApp data sharing with Meta

India’s Supreme Court has warned it could reinstate restrictions on WhatsApp sharing user data with other Meta entities, raising fresh concerns over privacy and consent. During a hearing on Tuesday, the chief justice said WhatsApp’s privacy policy appeared to be designed in a way that could mislead users, particularly those with limited digital literacy.

The case stems from a 2024 ruling by India’s antitrust authority, which fined WhatsApp $25.4 million and barred data sharing for advertising purposes for five years. An appeals court later lifted the data-sharing ban while keeping the fine, prompting both sides to approach the Supreme Court.

India is Meta’s largest market by users, and WhatsApp has argued that restrictions could force it to roll back features. The Supreme Court did not issue a final decision and is expected to continue hearings next week.

Open-source AI models exposed to criminal misuse, researchers warn

Open-source artificial intelligence models are increasingly vulnerable to criminal misuse, according to new research released on Thursday, because attackers can seize control of computers running large language models outside the safeguards enforced by major AI platforms. Researchers warned that compromised systems could be used for spam campaigns, phishing, disinformation, fraud, and other illicit activities while evading standard security controls.

The study, conducted over 293 days by cybersecurity firms SentinelOne and Censys, examined thousands of internet-accessible deployments of open-source large language models. The researchers identified a wide range of potentially harmful use cases, including hacking, harassment, hate speech, theft of personal data, scams and, in some instances, severely illegal content. They said hundreds of models appeared to have had their safety guardrails deliberately removed.

While thousands of open-source AI variants exist, a significant share of the publicly accessible systems were based on models such as Meta's Llama and Google DeepMind's Gemma. The analysis focused on models deployed using Ollama, a tool that allows organizations to run their own AI systems. System prompts were visible in about a quarter of observed deployments, and 7.5% of those prompts could enable harmful activity.
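Internet-wide scans of the kind described above are possible because Ollama serves a standard REST API; its `GET /api/tags` endpoint lists the models a deployment makes available. A minimal sketch of parsing such a response, where the sample JSON and model names below are illustrative only, not data from the study:

```python
import json

# Hypothetical sample of the JSON body returned by Ollama's standard
# GET /api/tags endpoint, which lists locally served models.
# Field names follow the public Ollama REST API; the model names
# and sizes are made up for illustration.
SAMPLE_TAGS_RESPONSE = """
{
  "models": [
    {"name": "llama3:8b", "size": 4661224676},
    {"name": "gemma:7b", "size": 5011853225}
  ]
}
"""

def list_served_models(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    # A response that parses like this indicates an openly reachable
    # Ollama instance and reveals which models it exposes.
    print(list_served_models(SAMPLE_TAGS_RESPONSE))
```

In practice, researchers would issue this request against candidate hosts found by internet scanning; a well-formed model list in the reply marks the deployment as publicly accessible.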

Researchers said roughly 30% of the identified systems were hosted in China and about 20% in the United States. Industry experts stressed that responsibility for mitigating risks must be shared across developers, deployers, and security teams, warning that unchecked open-source capacity poses growing global security concerns.