The Emperor’s New Agent

I spent last weekend rewiring my home AI gateway — a self-hosted system called OpenClaw that connects to my messaging apps, routes requests to different AI models, and executes tasks on my behalf. After a few hours of configuring event listeners, setting up scheduled jobs, and connecting callable libraries, I sat back. I realized something that should bother everyone in the defense and technology space.

I had just built what the industry is calling an “AI agent.” And the architecture underneath it was identical to what I was building 20 years ago.

Same Engine, New Paint Job

Here is what my so-called agent actually does. It sits in an environment, listening for an event — a Signal message, a scheduled timer, a file change. When the event fires, it triggers a job. That job references a library or a script to execute. If you have been in IT for more than a few years, you recognize this immediately. It is an event-driven automation pattern. Cron jobs. Callable libraries. Event listeners. The plumbing that has powered enterprise IT since before most of today’s AI startups had a business plan.
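The pattern described above is simple enough to sketch in a few lines. This is a hypothetical illustration, not OpenClaw’s actual code: a routing table maps event types to jobs, and each job calls into a plain callable library with fixed logic.

```python
# A minimal sketch of the traditional event-driven automation pattern:
# an event fires, a registered job runs, the job calls a library function.
# All names here are illustrative, not from any real system.

def summarize_file(path: str) -> str:
    """The callable 'library' step: fixed, hardcoded logic."""
    return f"processed {path}"

# Routing table mapping event types to jobs -- the event listener's core.
HANDLERS = {}

def on(event_type):
    """Decorator that registers a job for a given event type."""
    def register(job):
        HANDLERS[event_type] = job
        return job
    return register

@on("file_changed")
def handle_file_change(payload):
    # The job references a library function to execute. The logic is
    # the same every time; nothing is decided at runtime.
    return summarize_file(payload["path"])

def dispatch(event_type, payload):
    """Fire an event: look up the registered job and run it."""
    return HANDLERS[event_type](payload)

print(dispatch("file_changed", {"path": "/tmp/report.txt"}))
# prints "processed /tmp/report.txt"
```

Swap the in-process `dispatch` for a cron schedule or a message-queue consumer and the shape is unchanged — which is the point: this plumbing predates the “agent” label by decades.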

The difference — and I want to be fair here, because there is a real difference — is the decision layer. In a traditional cron job, the logic is hardcoded. The script runs the same way every time. In my setup, when a message comes in, an AI model decides how to respond. It picks the right tool, generates the output, and handles situations the original developer did not explicitly program for. That is genuinely new. That runtime decision-making is the innovation.
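The contrast can be sketched as follows. `choose_tool` is a stand-in for the model call; in a real system it would be an LLM API request given descriptions of each tool, not the keyword match used here to keep the example self-contained. The tools themselves are hypothetical.

```python
# Hypothetical contrast between hardcoded logic and a runtime decision
# layer. The tool registry and choose_tool stub are illustrative only.

TOOLS = {
    "weather": lambda msg: "forecast: sunny",
    "calendar": lambda msg: "next event: standup at 0900",
    "fallback": lambda msg: f"echo: {msg}",
}

def choose_tool(message: str) -> str:
    """Stub for the model's runtime decision. A real LLM would select a
    tool from natural-language descriptions, covering inputs the
    developer never explicitly programmed for."""
    if "weather" in message:
        return "weather"
    if "meeting" in message or "calendar" in message:
        return "calendar"
    return "fallback"

def handle_message(message: str) -> str:
    # Traditional cron job: the tool choice would be hardcoded here.
    # Agent pattern: the model decides which tool to invoke at runtime.
    tool = choose_tool(message)
    return TOOLS[tool](message)

print(handle_message("what's the weather today?"))
# prints "forecast: sunny"
```

Everything around `choose_tool` — the listener, the job, the tool calls — is the same plumbing as before. Only that one function is new.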

But it is one layer on top of a well-understood stack. It is not a revolution in architecture. It is an evolution in who — or what — gets to write and modify the logic.

Why This Matters for Defense

I have spent 26 years watching the Department of Defense struggle with IT, and the pattern is always the same. A new technology trend emerges. Industry repackages existing capabilities under the new label. The DoD buys it at a premium because leadership lacks the technical depth to challenge the marketing. And we end up with another generation of systems that cost more than they should and deliver less than they promised.

We saw it with the cloud. We saw it with DevSecOps. And we are about to see it with AI agents.

When a vendor walks into a program office and pitches an “autonomous AI agent” for mission planning or logistics, the senior leader in the room needs to understand what they are actually buying. In most cases, it is a workflow automation tool with an LLM in the loop — not a sentient system that independently plans and executes complex operations. The underlying architecture is event triggers, scheduled tasks, and API calls. The AI model provides flexible decision-making in between those steps.

That is not a criticism. That is actually a useful capability. But it is a $500,000 capability being sold at a $5 million price tag because no one in the room can decompose it into its parts.

The Real Innovation Is Access

Here is what I think the industry is missing while it chases the “agent” hype. The most significant change is not in the architecture. It is in the accessibility.

I built my OpenClaw system over a series of weekends. I have a background in web-based development and data architecture design, so I am not starting from zero. But what the AI model actually accelerated was not the coding itself. It was my ability to leverage what I already know about system-based design, ask the right questions in the right context, and stand up a secure environment. I understand how event-driven architectures work. I know how to decompose a workflow into triggers, logic, and execution. The LLM handled the implementation details — the specific syntax, library connections, and configuration files — while I focused on the design decisions and security posture.

That is the real disruption. Not agents. Access. Specifically, access that amplifies existing technical knowledge rather than replacing it.

For the DoD, this should be the headline — and it is a bigger deal than most people realize. One of the most persistent bottlenecks in defense IT is the dependency on software developers who hold the right clearances. There are never enough of them, they are expensive, and the programs that need them most are often the ones least able to attract them. What LLMs are doing is compressing that gap. A government civilian or service member with a technical background and a security clearance can now build workflow automations that previously required a contracted development team with cleared developers and a six-month timeline. The LLM eliminates the need for a dedicated software developer at a higher classification level by making workflows accessible to people who already understand the mission and the environment; what they need help with is the implementation.

The warfighter does not need to wait for a program of record to deliver an “AI agent.” The components already exist. The people with the clearances and the mission knowledge already exist. The LLM is the bridge between what they know and what they can now build.

So What Do We Do About It?

Any critic can complain about industry hype. The harder question is what to do with this understanding. Three things come to mind.

First, decompose before you buy. When a vendor pitches an AI agent, ask them to break it down. What is the event trigger? What is the decision logic? What libraries or APIs does it call? If they cannot answer those questions clearly, they either do not understand their own product or hope you will not ask. Either way, walk away.

Second, invest in technical literacy at the leadership level. The reason the hype works is that the decision-makers do not have the vocabulary to challenge it. You do not need every general officer to write Python. But they need to understand the difference between a cron job with an LLM and a genuinely autonomous system. Those are different capabilities with different risk profiles, and buying one when you think you are getting the other is how programs fail.

Third, empower cleared technical talent. The real opportunity is not in buying packaged “agent” solutions from prime contractors. It is in giving technically capable service members and government civilians — people who already hold clearances and understand system design — access to AI coding tools that let them build their own automations. The LLM handles the implementation. The cleared operator provides the mission context, the security requirements, and the architectural judgment. That combination is more powerful — and far cheaper — than hiring another team of cleared developers.

The Bottom Line

The AI agent is not a new machine. It is a new coat of paint on an engine that the IT industry has been running for decades, with one genuinely innovative component: an AI model that makes runtime decisions and lowers the barrier to building automations.

That is worth investing in. But it is not worth paying a premium for architecture that has existed since the first sysadmin wrote a cron job.

For those of us in the defense and technology space, the opportunity is not in buying the hype. It is in understanding the components well enough to build what we actually need — and empowering the people closest to the mission to do it themselves.

The bus is leaving on this one. The question is whether we are going to ride it or get sold a ticket to watch it drive away.

---

Pax ab Space


Clinton Austin is a Senior Business Development Director for GDIT who covers the U.S. Air Force, the U.S. Space Force, and the Missile Defense Agency.


The views expressed are those of the author and do not necessarily reflect the official policy or position of General Dynamics Information Technology.


March 14, 2026
