
US Warns of Escalating Iranian Cyberattacks on Infrastructure

U.S. authorities have warned that Iranian-backed hacking campaigns targeting critical infrastructure have intensified following the escalation of regional hostilities.

According to a joint advisory issued by agencies including the FBI, National Security Agency and Cybersecurity and Infrastructure Security Agency, attackers are focusing on industrial control systems widely used across essential sectors.

Targets and Methods

The hackers are primarily exploiting:

  • Programmable Logic Controllers (PLCs)
  • Supervisory Control and Data Acquisition (SCADA) systems

These systems are critical for operating infrastructure such as:

  • Energy grids
  • Water and wastewater facilities
  • Government service systems

Attack techniques include:

  • Manipulating system display data
  • Extracting sensitive operational configurations
  • Interfering with real-time control processes

In several cases, the activity has already resulted in operational disruption and financial losses.

Strategic Intent

U.S. officials assess that the campaigns aim to create “disruptive effects” within the United States, signaling a shift from espionage toward potential sabotage.

The warning aligns with broader geopolitical tensions involving Iran and the United States, with threats extending to infrastructure targets both domestically and across the Gulf region.

Agencies Involved

The advisory was jointly issued by multiple agencies, including:

  • Federal Bureau of Investigation
  • National Security Agency
  • Cybersecurity and Infrastructure Security Agency
  • Environmental Protection Agency
  • Department of Energy
  • U.S. Cyber Command’s Cyber National Mission Force

Risk Implications

The targeting of industrial control systems is particularly concerning because:

  • Many are internet-exposed with weak security configurations
  • They often run legacy software with limited patching
  • Disruption can have physical-world consequences, not just digital impact

Outlook

The escalation indicates a broader trend:

  • Cyber operations are increasingly integrated into geopolitical conflict
  • Critical infrastructure is becoming a primary attack surface
  • Defensive readiness for industrial systems is now a national security priority

Organizations operating ICS/SCADA environments are likely to face heightened pressure to:

  • Harden network exposure
  • Implement real-time monitoring
  • Segment operational technology (OT) from IT systems
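The segmentation and monitoring steps above can be illustrated with a minimal sketch. The subnets, port list, and flow records below are hypothetical examples, not a prescribed policy: the idea is simply that once OT and IT address ranges are defined, traffic crossing from the IT side to ICS service ports on the OT side can be flagged automatically.

```python
# Minimal sketch of flagging IT-to-OT crossings under a simple,
# illustrative segmentation policy. All subnets and flows here
# are hypothetical examples.
import ipaddress

# Hypothetical address plan: OT (control systems) vs. IT (office) subnets.
OT_NET = ipaddress.ip_network("10.10.0.0/16")
IT_NET = ipaddress.ip_network("192.168.0.0/16")

# Ports commonly associated with ICS protocols
# (Modbus/TCP 502, DNP3 20000, S7comm 102).
ICS_PORTS = {502, 20000, 102}

def flag_crossing(src: str, dst: str, dst_port: int) -> bool:
    """Return True if a flow originates in the IT network and reaches
    an ICS service port inside the OT network -- a segmentation
    violation under this illustrative policy."""
    s = ipaddress.ip_address(src)
    d = ipaddress.ip_address(dst)
    return s in IT_NET and d in OT_NET and dst_port in ICS_PORTS

# Example flow records: (source IP, destination IP, destination port).
flows = [
    ("192.168.5.20", "10.10.1.7", 502),  # IT host -> OT PLC: flagged
    ("10.10.1.7", "10.10.1.8", 502),     # OT-internal traffic: allowed
]
violations = [f for f in flows if flag_crossing(*f)]
```

In practice such checks would run against live flow logs or firewall telemetry; this sketch only shows the classification step that real-time monitoring would build on.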

Anthropic Launches AI Cybersecurity Initiative With Big Tech Partners

Anthropic has unveiled a new cybersecurity initiative, “Project Glasswing,” in collaboration with major technology firms including Amazon, Microsoft and Apple.

The program provides selected partners with early access to an advanced AI model, “Claude Mythos Preview,” designed for defensive cybersecurity applications. Additional collaborators include CrowdStrike, Palo Alto Networks, Google and Nvidia.

Anthropic stated that the model has already identified thousands of critical vulnerabilities across operating systems, browsers and other software, demonstrating its potential as a tool for proactive threat detection and mitigation.

The initiative emerges amid growing concerns over AI-driven cyberattacks. Industry discussions, including those at recent cybersecurity conferences, have increasingly focused on whether traditional security tools can keep pace with AI-enabled threats.

Under Project Glasswing, partner organizations will deploy the model in controlled environments to strengthen defensive capabilities. Anthropic also plans to share findings across the industry to improve overall cybersecurity resilience.

The company is extending access to around 40 additional organizations responsible for critical infrastructure and has committed up to $100 million in usage credits, along with $4 million in funding for open-source security initiatives.

Anthropic confirmed ongoing discussions with U.S. government agencies regarding the model’s capabilities and risk profile, reflecting heightened regulatory and national security interest in advanced AI systems.

The move underscores a broader industry shift: as AI becomes both a tool for attackers and defenders, leading technology firms are increasingly collaborating to build collective cybersecurity defenses.

Meta Platforms Suspends Collaboration with AI Partner Mercor Following Data Breach Reports

Meta has reportedly put all collaboration with AI recruitment firm Mercor on hold following a recent cyberattack that targeted the startup. The Menlo Park-based tech giant was among Mercor’s largest clients, relying on the company to hire subject matter experts who validate and perform quality analysis on outputs from large language models (LLMs). The breach is said to have compromised hundreds of gigabytes of sensitive data, prompting Mercor to launch an internal investigation into the incident.

According to a report by Wired, Meta’s pause on work with Mercor is indefinite. The publication cites unnamed sources familiar with the matter, who also noted that other major AI companies are reassessing their partnerships with the firm in the wake of the cyberattack. The move reflects growing caution within the AI industry, as companies evaluate the security and integrity of third-party partners that handle sensitive model validation work.

Mercor, founded in 2023, specializes in hiring domain experts to conduct quality checks on AI outputs. The startup has worked with several leading AI companies, including OpenAI and Anthropic, to ensure that large language models deliver accurate and reliable responses. Outsourcing this work allows AI firms to maintain model performance standards while continuously improving their systems based on expert feedback.

The company has attracted significant investment, having raised $350 million (roughly Rs. 3,257 crore) in a Series C funding round in October 2025, which valued Mercor at $10 billion (around Rs. 93,067 crore). Despite its rapid growth and high-profile partnerships, the recent security breach poses a serious challenge, highlighting the risks associated with handling large volumes of sensitive AI data and emphasizing the importance of cybersecurity in AI operations.