The Emperor’s New Agent

I spent last weekend rewiring my home AI gateway — a self-hosted system called OpenClaw that connects to my messaging apps, routes requests to different AI models, and executes tasks on my behalf. After a few hours of configuring event listeners, setting up scheduled jobs, and connecting callable libraries, I sat back and realized something that should bother everyone in the defense and technology space.
I had just built what the industry is calling an “AI agent.” And the architecture underneath it was identical to what I was building 20 years ago.
Same Engine, New Paint Job
Here is what my so-called agent actually does. It sits in an environment, listening for an event — a Signal message, a scheduled timer, a file change. When the event fires, it triggers a job. That job references a library or a script to execute. If you have been in IT for more than a few years, you recognize this immediately. It is an event-driven automation pattern. Cron jobs. Callable libraries. Event listeners. The plumbing that has powered enterprise IT since before most of today’s AI startups had a business plan.
The difference — and I want to be fair here, because there is a real difference — is the decision layer. In a traditional cron job, the logic is hardcoded. The script runs the same way every time. In my setup, when a message comes in, an AI model decides how to respond. It picks the right tool, generates the output, and handles situations the original developer did not explicitly program for. That is genuinely new. That runtime decision-making is the innovation.
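That contrast can be sketched in a few lines of Python. Everything here is illustrative: `call_model` is a stub standing in for a real LLM API call, and the event shapes and tool names are invented for the example, not taken from OpenClaw.

```python
import json

# Traditional automation: the decision logic is hardcoded at write time.
def cron_job(event: dict) -> str:
    # Runs the same way every time; unanticipated events fall through.
    if event["type"] == "file_change":
        return "run_backup"
    return "ignore"

# Allow-listed tools the agent may invoke (names are illustrative).
TOOLS = {"run_backup", "draft_reply", "ignore"}

def call_model(prompt: str) -> str:
    # Stub for a real LLM API call: a deployed system would send `prompt`
    # to a model endpoint and parse a tool name out of the reply.
    if "signal_message" in prompt:
        return "draft_reply"
    return "ignore"

# Agent pattern: identical plumbing, but a model picks the tool at runtime.
def agent_job(event: dict) -> str:
    prompt = f"Event: {json.dumps(event)}. Pick one tool from {sorted(TOOLS)}."
    choice = call_model(prompt)
    # Guardrail: only allow-listed tools get executed, whatever the model says.
    return choice if choice in TOOLS else "ignore"
```

The hardcoded handler ignores a Signal message it was never written for; the agent version routes it to a reply tool, because the routing decision moved from write time to runtime. The trigger, the job, and the callable tools are the same in both.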
But it is one layer on top of a well-understood stack. It is not a revolution in architecture. It is an evolution in who — or what — gets to write and modify the logic.
Why This Matters for Defense
I have spent 26 years watching the Department of Defense struggle with IT, and the pattern is always the same. A new technology trend emerges. Industry repackages existing capabilities under the new label. The DoD buys it at a premium because leadership lacks the technical depth to challenge the marketing. And we end up with another generation of systems that cost more than they should and deliver less than they promised.
We saw it with cloud computing. We saw it with DevSecOps. And we are about to see it with AI agents.
When a vendor walks into a program office and pitches an “autonomous AI agent” for mission planning or logistics, the senior leader in the room needs to understand what they are actually buying. In most cases, it is a workflow automation tool with an LLM in the loop — not a sentient system that independently plans and executes complex operations. The underlying architecture is event triggers, scheduled tasks, and API calls. The AI model provides flexible decision-making in between those steps.
That is not a criticism. That is actually a useful capability. But it is a $500,000 capability being sold at a $5 million price tag because no one in the room can decompose it into its parts.
The Real Innovation Is Access
Here is what I think the industry is missing while it chases the “agent” hype. The most significant change is not in the architecture. It is in the accessibility.
I built my OpenClaw system over a series of weekends. I have a background in web-based development and data architecture design, so I am not starting from zero. But what the AI model actually accelerated was not the coding itself. It was the ability to leverage what I already know about system design, ask the right questions in the right context, and stand up a secure environment. I understand how event-driven architectures work. I know how to decompose a workflow into triggers, logic, and execution. The LLM handled the implementation details — the specific syntax, library connections, and configuration files — while I focused on the design decisions and security posture.
That is the real disruption. Not agents. Access. Specifically, access that amplifies existing technical knowledge rather than replacing it.
For the DoD, this should be the headline — and it is a bigger deal than most people realize. One of the most persistent bottlenecks in defense IT is the dependency on software developers who hold the right clearances. There are never enough of them, they are expensive, and the programs that need them most are often the ones least able to attract them. What LLMs are doing is compressing that gap. A government civilian or service member with a technical background and a security clearance can now build workflow automations that previously required a contracted development team with cleared developers and a six-month timeline. The LLM eliminates the need for a dedicated software developer at a higher classification level by making workflows accessible to people who already understand the mission and environment and only need help with the implementation.
The warfighter does not need to wait for a program of record to deliver an “AI agent.” The components already exist. The people with the clearances and the mission knowledge already exist. The LLM is the bridge between what they know and what they can now build.
So What Do We Do About It?
Any critic can complain about industry hype. The harder question is what to do with this understanding. Three things come to mind.
First, decompose before you buy. When a vendor pitches an AI agent, ask them to break it down. What is the event trigger? What is the decision logic? What libraries or APIs does it call? If they cannot answer those questions clearly, they either do not understand their own product or hope you will not ask. Either way, walk away.
Second, invest in technical literacy at the leadership level. The reason the hype works is that the decision-makers do not have the vocabulary to challenge it. You do not need every general officer to write Python. But they need to understand the difference between a cron job with an LLM and a genuinely autonomous system. Those are different capabilities with different risk profiles, and buying one when you think you are getting the other is how programs fail.
Third, empower cleared technical talent. The real opportunity is not in buying packaged “agent” solutions from prime contractors. It is in giving technically capable service members and government civilians — people who already hold clearances and understand system design — access to AI coding tools that let them build their own automations. The LLM handles the implementation. The cleared operator provides the mission context, the security requirements, and the architectural judgment. That combination is more powerful — and far cheaper — than hiring another team of cleared developers.
The Bottom Line
The AI agent is not a new machine. It is a new coat of paint on an engine that the IT industry has been running for decades, with one genuinely innovative component: an AI model that makes runtime decisions and lowers the barrier to building automations.
That is worth investing in. But it is not worth paying a premium for architecture that has existed since the first sysadmin wrote a cron job.
For those of us in the defense and technology space, the opportunity is not in buying the hype. It is in understanding the components well enough to build what we actually need — and empowering the people closest to the mission to do it themselves.
The bus is leaving on this one. The question is whether we are going to ride it or get sold a ticket to watch it drive away.
---
Pax ab Space
Clinton Austin is a Senior Business Development Director for GDIT who covers the U.S. Air Force, the U.S. Space Force, and the Missile Defense Agency.
The views expressed are those of the author and do not necessarily reflect the official policy or position of General Dynamics Information Technology.
March 14, 2026