Open-source AI models exposed to criminal misuse, researchers warn
Open-source artificial intelligence models are increasingly vulnerable to criminal misuse, because hackers can commandeer computers running large language models outside the safeguards enforced by major AI platforms, according to new research released on Thursday. The researchers warned that compromised systems could be used for spam campaigns, phishing, disinformation, fraud, and other illicit activity while evading standard security controls.
The study, conducted over 293 days by cybersecurity firms SentinelOne and Censys, examined thousands of internet-accessible deployments of open-source large language models. The researchers identified a wide range of potentially harmful use cases, including hacking, harassment, hate speech, theft of personal data, and scams, as well as, in some instances, severely illegal content. They said hundreds of models appeared to have had their safety guardrails deliberately removed.
While thousands of open-source AI variants exist, a significant share of the publicly accessible systems was based on models such as Meta’s Llama and Google DeepMind’s Gemma. The analysis focused on models deployed with Ollama, a tool that lets organizations run their own AI systems. System prompts were visible in roughly a quarter of observed deployments, and 7.5% of those exposed prompts could potentially enable harmful activity.
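To illustrate why such deployments are so readily discoverable, the minimal Python sketch below shows how Ollama’s unauthenticated HTTP API can be used to enumerate the models on an exposed server and read each model’s Modelfile, where a plain-text system prompt may be visible. The host address here is a hypothetical placeholder for illustration; the default port (11434) and the API endpoints are standard Ollama behavior, and the research itself observed such endpoints at internet scale.

```python
import json
import urllib.request

# Hypothetical host, used purely for illustration (203.0.113.0/24 is a
# documentation address range). 11434 is Ollama's default listening port.
HOST = "http://203.0.113.10:11434"

def fetch_json(url, payload=None):
    """GET (or POST, if a payload is given) a JSON endpoint and decode the reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

# Step 1: an unauthenticated GET /api/tags lists every model the server hosts.
models = fetch_json(f"{HOST}/api/tags").get("models", [])

# Step 2: POST /api/show returns a model's Modelfile, whose SYSTEM directive
# (the system prompt) may be readable in plain text. Recent Ollama versions
# take the model name in a "model" field ("name" in older releases).
for m in models:
    info = fetch_json(f"{HOST}/api/show", {"model": m["name"]})
    print(m["name"], "->", info.get("modelfile", "")[:200])
```

Because these endpoints require no authentication by default, anyone who can reach the port can issue the same requests, which is why keeping such servers off the public internet or behind access controls is the commonly recommended mitigation.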
Researchers said roughly 30% of the identified systems were hosted in China and about 20% in the United States. Industry experts stressed that responsibility for mitigating the risks must be shared among developers, deployers, and security teams, warning that unchecked open-source AI capacity poses a growing global security concern.