AI Adoption Mirrors the Early Internet
The excitement, the blind trust, the security holes nobody wants to talk about. AI adoption today looks remarkably like the early days of the web - and we’re repeating the same mistakes.
The Security Parallel Nobody Wants to Hear
In the late 90s and early 2000s, the web was exploding. Everyone was building web applications. Few were building them securely. SQL injection was trivial to exploit and everywhere. Cross-site scripting was so common it was almost a feature. Developers were concatenating user input directly into SQL queries, echoing unescaped data into HTML, and trusting the client to behave. The mentality was simple: it works, ship it.
Prompt injection is the SQL injection of the AI era.
The pattern is similar. You have a system that interprets a mix of instructions and user-provided data, with no reliable boundary between the two. In SQL injection, the attacker escapes the data context and injects commands. In prompt injection, the attacker escapes the user message context and overrides system instructions. The underlying flaw is the same - untrusted input is mixed with trusted instructions in a shared channel, and the system cannot reliably distinguish between them.
We see AI tools that browse the web, read emails, execute code, and interact with APIs - all steered by a language model that can be manipulated through carefully crafted input. This is the equivalent of building a web application that takes user input and passes it directly to eval(). We know how that story ends. We lived through it.
And just like the early web, the industry response is a mix of denial and band-aids. Prompt injection defenses today resemble the early attempts at input sanitization - blacklists of “bad” phrases, secondary models checking outputs, instruction repetition. These are the addslashes() of AI security. They reduce the attack surface, they don’t eliminate the vulnerability class.
The Hype Blindness
Every major platform shift follows the same arc: excitement first, security later. We have decades of engineering wisdom telling us exactly what will go wrong, and we are ignoring it anyway, because that’s how adoption works.
When the web was new, pointing out security flaws made you a buzzkill. Companies were racing to get online. Investors were throwing money at anything with a .com in the name. Nobody wanted to hear that their shiny new e-commerce platform could be owned by a teenager with a URL bar. Security was a problem for later. Growth was the priority.
The AI space today is indistinguishable from this. Companies are rushing to integrate LLMs into everything. Customer support, code generation, document processing, financial analysis, medical triage. The pressure to ship AI features is enormous. The pressure to ship them safely is almost nonexistent.
Prompt injection attacks against real products have already been demonstrated publicly. Data exfiltration through indirect prompt injection - where a malicious payload is embedded in content the AI processes - is a proven attack vector. And yet organizations continue to deploy AI agents with broad permissions and thin defenses, because the demos are impressive and the board wants AI on the roadmap.
The Amazement Problem
New technology creates a kind of cognitive bias. When something feels like magic, people suspend their critical judgment.
The early internet had this effect. The idea that you could publish information to the entire world from your bedroom was genuinely revolutionary. It was so exciting that people overlooked fundamental problems. No encryption by default. No authentication standards. No concept of content security policies. The web was built on trust, and it took two decades of breaches, worms, and identity theft to bolt on the security it should have had from day one.
AI has the same aura of magic. A system that understands natural language, generates code, analyzes documents, and holds conversations? It feels intelligent. And that feeling is dangerous, because it leads people to grant it trust it hasn’t earned. They give it access to sensitive data. They let it make decisions. They assume it will behave as intended, because it seems to understand what they want.
This is anthropomorphization meeting engineering, and it’s a terrible combination. A language model doesn’t “understand” your security policy. It predicts tokens. The gap between those two things is where the vulnerabilities live.
We Will Probably Figure It Out
We will solve most of these problems. We solved them for the web - eventually.
SQL injection went from an epidemic to a rarity. Not because developers got smarter overnight, but because the ecosystem evolved. Parameterized queries became the default. ORMs abstracted away raw SQL. Frameworks escaped output automatically. Security scanners caught common patterns. The solution wasn’t one breakthrough - it was a gradual shift in defaults, tooling, and education.
The same will happen for AI. We’ll develop better architectures that separate instructions from data more reliably. Sandboxing and permission models for AI agents will mature. Standard security frameworks will emerge. The “just prompt it and hope for the best” era will eventually look as primitive as mysql_query("SELECT * FROM users WHERE name = '" . $_GET['name'] . "'") looks to us now.
But “eventually” is doing a lot of heavy lifting in that sentence. The web took roughly 15 years to go from “security is someone else’s problem” to “security by default.” During those 15 years, billions of records were stolen, countless systems were compromised, and real people suffered real consequences.
The question is not whether AI security will mature. It will. The question is how much damage we accept in the meantime because we’re too excited to slow down.