What if LLMs are the greatest cybersecurity threat the world has ever seen?
I’ve been unsettled by something lately.
We take it for granted that large language models — the systems behind today’s chatbots — can now hold conversations that feel coherent, thoughtful, even intelligent. The standard explanation is that intelligence “emerges” once these models grow beyond a certain scale. They train on vast amounts of text, optimize billions of parameters, and suddenly: they can talk.
And I’m not the one who called this an “emergent” ability. That was one of the cofounders of OpenAI, in an interview I watched recently on YouTube. It is scary that we don’t understand how this works.
I wouldn’t be worried if it were a black box that just returned answers to SQL queries. But it can actually talk to you, in an uncanny way, so that you can’t tell whether there’s a human on the other end or not. Trust me, I spent several weeks probing this with a real model and a prompt, and I still don’t understand how it can speak to me exactly the way I want it to. It even gets the subtle things right.
Let me put this another way: all the knowledge we currently have about neural networks, gradient descent, and deep learning only explains how the model stores data within it (as weights), how we train it, and how we train it to know the…
