Articles

Open-source AI models exposed to criminal misuse, researchers warn

Open-source artificial intelligence models are increasingly vulnerable to criminal misuse, as hackers can take control of computers running large language models outside the safeguards used by major AI platforms, according to new research released on Thursday. Researchers warned that compromised systems could be used for spam campaigns, phishing, disinformation, fraud, and other illicit activities while evading standard security controls.

The study, conducted over 293 days by cybersecurity firms SentinelOne and Censys, examined thousands of internet-accessible deployments of open-source large language models. The researchers identified a wide range of potentially harmful use cases, including hacking, harassment, hate speech, theft of personal data, scams, and in some instances severe illegal content. They said hundreds of models appeared to have safety guardrails deliberately removed.

While thousands of open-source AI variants exist, a significant share of publicly accessible systems were based on models such as Meta’s Llama and Google DeepMind’s Gemma. The analysis focused on models deployed using Ollama, a tool that allows organizations to run their own AI systems. System prompts were visible in about a quarter of observed deployments, and 7.5% of those prompts could enable harmful activity.
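To illustrate what "internet-accessible deployment" means here: a default Ollama install serves an unauthenticated REST API, and its `/api/tags` endpoint returns the models a server hosts. The sketch below, a minimal assumption-laden example and not the researchers' actual methodology, parses that endpoint's JSON response shape to list exposed models:

```python
import json

def exposed_models(tags_json: str) -> list[str]:
    """Given the JSON body returned by an Ollama server's /api/tags
    endpoint, return the names of the models it exposes.
    (The response shape {"models": [{"name": ...}, ...]} follows
    Ollama's public REST API; scanning for such endpoints at scale
    is how studies like this one typically find deployments.)"""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

# Example body in the shape /api/tags returns (illustrative values):
body = '{"models": [{"name": "llama3:8b"}, {"name": "gemma:7b"}]}'
print(exposed_models(body))  # ['llama3:8b', 'gemma:7b']
```

An endpoint that answers this request with a non-empty model list is exactly the kind of publicly reachable deployment the researchers counted; visible system prompts come from probing further endpoints in the same API.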

Researchers said roughly 30% of the identified systems were hosted in China and about 20% in the United States. Industry experts stressed that responsibility for mitigating risks must be shared across developers, deployers, and security teams, warning that unchecked open-source capacity poses growing global security concerns.

Meta partners with Arm to boost AI recommendations across Facebook and Instagram

Meta Platforms announced a new partnership with chip technology firm Arm Holdings to power the AI systems behind its personalization and recommendation engines across Facebook and Instagram. The collaboration marks another milestone for Arm as it pushes deeper into data center and AI computing — areas long dominated by Intel and AMD’s x86 architecture.

Meta will deploy Arm-based data center platforms to run the ranking and recommendation algorithms that determine what users see on its apps. Both companies said the shift will deliver higher performance and improved energy efficiency compared to traditional x86 systems.

Arm, backed by Japan’s SoftBank, provides the chip designs that serve as blueprints for central processing units (CPUs) used in billions of devices worldwide. While its technology already dominates smartphones, it is rapidly expanding into server and personal computer markets.

As part of the announcement, Meta revealed a $1.5 billion investment in a new Texas data center, its 29th facility globally, to support AI infrastructure growth. The two companies also said they have optimized Meta’s AI software for Arm chips and made the improvements open source, allowing developers to freely use and build upon them — a move expected to speed up Arm’s adoption in cloud computing.

Meta and Arm plan to continue refining their joint open-source projects to make AI workloads more efficient and accessible across the industry.

Study Finds AI Tools Slow Down Experienced Software Developers in Familiar Codebases

A new study challenges the common assumption that artificial intelligence always speeds up software development. Conducted by AI research nonprofit METR, the study focused on seasoned developers working with Cursor, a popular AI coding assistant, within open-source projects they knew well. Contrary to their expectations, these experienced developers took 19% longer to complete tasks when using AI compared to working without it.

Before the study, developers predicted AI would speed up their work by about 20-24%, but the actual results showed the opposite. The study’s lead authors, Joel Becker and Nate Rush, expressed surprise at the findings, with Rush originally anticipating a potential twofold productivity increase.

These findings complicate the popular narrative that AI tools dramatically boost the productivity of highly skilled engineers—a claim that has helped fuel heavy investment in AI-powered software development products. While AI is often touted as a way to replace entry-level coding jobs, the METR study reveals that its benefits may not extend to all developers or coding scenarios.

Previous research has shown significant AI-driven productivity gains, with some studies citing up to 56% faster coding speeds or 26% more tasks completed in a given time. However, METR’s work highlights that these improvements might be more relevant to junior developers or those unfamiliar with complex codebases. Experienced developers, intimately aware of the nuances of mature open-source projects, tended to slow down because they spent extra time reviewing and fixing AI suggestions.

Becker noted that while AI-generated code was often on the right track, it frequently required careful correction to meet precise needs. The study authors emphasized that the slowdown was specific to the context of experienced developers working in familiar environments and might not occur in other development settings.

Despite the slower task completion times, most participants, including the study authors, continue to use Cursor, finding that AI makes coding less effortful and more enjoyable—comparable to editing an essay rather than starting from scratch. Becker explained, “Developers have goals other than completing the task as soon as possible. So they’re going with this less effortful route.”