The OpenAI breach serves as a reminder that AI companies are prime targets for hackers
There’s no need to worry that your secret ChatGPT conversations were obtained in a recently reported breach of OpenAI’s systems. The hack itself, while troubling, appears to have been superficial, but it’s a reminder that AI companies have quickly become some of the most attractive targets for hackers.
The New York Times reported the hack in more detail after former OpenAI employee Leopold Aschenbrenner hinted at it recently in a podcast. He called it a “major security incident,” but unnamed company sources told the Times the hacker only accessed an employee discussion forum. (I reached out to OpenAI for confirmation and comment.)
No security breach should be treated as trivial, and eavesdropping on internal OpenAI development discussions certainly has value. But it’s a far cry from a hacker gaining access to internal systems, models in progress, secret roadmaps, and so on.
But it should still concern us, though not necessarily because of the threat of adversaries like China overtaking us in the AI arms race. The simple fact is that these AI companies have become gatekeepers to a tremendous amount of very valuable data.
Let’s talk about three kinds of data OpenAI and, to a lesser extent, other AI companies create or have access to: high-quality training data, bulk user interactions, and customer data.