Elon Musk’s so-called Department of Government Efficiency (DOGE) operates on one core underlying assumption: The United States should be run like a startup. So far, that has mostly meant chaotic firings and an eagerness to steamroll regulations. But no pitch deck in 2025 is complete without an overdose of artificial intelligence, and DOGE is no different.
AI itself doesn’t reflexively deserve pitchforks. It has genuine uses and can create genuine efficiencies. It’s not inherently untoward to introduce AI into a workflow, especially if you’re aware of and able to manage around its limitations. It’s not clear, though, that DOGE has embraced any of that nuance. If you have a hammer, everything looks like a nail; if you have the most access to the most sensitive data in the country, everything looks like an input.
Wherever DOGE has gone, AI has been in tow. Given the opacity of the organization, a lot remains unknown about how exactly it’s being used and where. But two revelations this week show just how extensive, and potentially misguided, DOGE’s AI ambitions are.
At the Department of Housing and Urban Development, a college undergrad has been tasked with using AI to find where HUD regulations may go beyond the strictest interpretation of the underlying laws. (Agencies have traditionally had broad interpretive authority when legislation is vague, although the Supreme Court recently shifted that power to the judicial branch.) This is a task that actually makes some sense for AI, which can synthesize information from large documents far faster than a human could. There’s some risk of hallucination, more specifically of the model spitting out citations that do not in fact exist, but a human would need to approve these recommendations regardless. This is, on one level, what generative AI is actually pretty good at right now: doing tedious work in a systematic way.
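To make that hallucination risk concrete, here is a minimal sketch, in Python, of what a human-in-the-loop review pipeline of this kind could look like. Everything in it is an assumption: the `llm_complete` client, the `KNOWN_STATUTES` index, and the flag format are hypothetical stand-ins, not anything DOGE or HUD is known to be running. The point is the verification step: any statute citation the model emits gets checked against a real index before a reviewer is asked to trust it.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-in for whatever model API such a pipeline would call;
# nothing here reflects any actual government tooling.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in a real model client here")

@dataclass
class Flag:
    excerpt: str            # the regulation passage the model flagged
    cited_statute: str      # the statute the model claims it exceeds
    citation_verified: bool # False means a human must treat it as suspect

# Toy index of citations that can actually be looked up; in practice this
# would be a database of the U.S. Code, not a hardcoded set.
KNOWN_STATUTES = {"42 U.S.C. § 3535", "42 U.S.C. § 1437"}

CITATION_RE = re.compile(r"\d+\s+U\.S\.C\.\s+§\s*[\w.\-]+")

def review_regulation(reg_text: str) -> list[Flag]:
    """Ask the model to flag passages that may exceed the underlying
    statute, then verify every citation before a human ever sees it."""
    prompt = (
        "List passages of this regulation that may go beyond the statute "
        "it implements. For each, cite the statute and explain why.\n\n"
        + reg_text
    )
    flags = []
    for line in llm_complete(prompt).splitlines():
        match = CITATION_RE.search(line)
        if not match:
            continue
        citation = match.group(0)
        flags.append(Flag(
            excerpt=line,
            cited_statute=citation,
            # Hallucinated citations are exactly the failure mode to catch:
            # anything not found in the index is routed to a human as suspect.
            citation_verified=citation in KNOWN_STATUTES,
        ))
    return flags
```

Note that the model never gets the last word here: it only nominates candidates, and the cheap deterministic check plus a human reviewer decide what survives. Whether DOGE’s actual workflow includes anything like that backstop is, as with most of its operations, unknown.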
There’s something pernicious, though, in asking an AI model to help dismantle the administrative state. (Beyond the fact of it; your mileage will vary there depending on whether you think low-income housing is a societal good or you’re more of a Not in Any Backyard type.) AI doesn’t actually “know” anything about regulations or whether or not they comport with the strictest possible reading of statutes, something that even highly experienced lawyers will disagree on. It needs to be fed a prompt detailing what to look for, which means you can not only work the refs but write the rulebook for them. It is also exceptionally eager to please, to the point that it will confidently make things up rather than decline to answer.
If nothing else, it’s the shortest path to a maximalist gutting of a major agency’s authority, with the chance of scattered bullshit thrown in for good measure.
At least it’s an understandable use case. The same can’t be said for another AI effort associated with DOGE. As WIRED reported Friday, an early DOGE recruiter is once again looking for engineers, this time to “design benchmarks and deploy AI agents across live workflows in federal agencies.” His goal is to eliminate tens of thousands of government positions, replacing them with agentic AI and “freeing up” workers for ostensibly “higher impact” duties.