When AI Meets Bureaucracy: The Risks of Automating Grant Decisions with ChatGPT
The Department of Government Efficiency (DOGE) didn’t just use ChatGPT to help with paperwork; it let the model decide the fate of over $100 million in humanities grants. That’s not an exaggeration. According to The Verge, a federal judge found it “could not be more obvious” that DOGE relied on ChatGPT to flag and disqualify National Endowment for the Humanities (NEH) grants based solely on the presence of diversity, equity, and inclusion (DEI) content.
This wasn’t just an efficiency play gone awry. Automating high-stakes funding decisions with a large language model, especially with prompts as blunt as “is this about DEI?”, invites both legal and ethical catastrophe. DOGE’s shortcut traded away the nuance and scrutiny these grants demand for the illusion of algorithmic neutrality.
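The source quotes the prompt’s bluntness but not the underlying code, so the sketch below is purely illustrative: a minimal Python reconstruction of what a single-prompt yes/no filter looks like. Every name, the model choice, and the prompt wording are our assumptions, not DOGE’s implementation.

```python
# Hypothetical reconstruction of a blunt, single-prompt grant filter.
# Nothing here is DOGE's actual code; the names, model, and prompt
# wording are illustrative assumptions based on the ruling's description.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BLUNT_PROMPT = (
    "Answer YES or NO only. Is the following grant abstract about DEI "
    "(diversity, equity, and inclusion)?\n\n{abstract}"
)

def flag_grant(abstract: str) -> bool:
    """One yes/no answer decides everything: no context, no appeal."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the source names only "ChatGPT"
        messages=[{"role": "user", "content": BLUNT_PROMPT.format(abstract=abstract)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# A keyword collision is enough to doom a project: "equity" here means
# land ownership, not DEI, but a blunt filter cannot tell the difference.
abstract = "An oral history of land-ownership equity in Appalachia, 1870-1920."
if flag_grant(abstract):
    print("CANCEL")  # cancellation with no merits review
```

Even at temperature 0, a filter like this can flip on trivial rephrasing, and it cannot distinguish a DEI program from a history project that happens to contain the word “equity.” That arbitrariness is exactly what the court objected to.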
The ruling underscores a central tension in modern public administration: chasing speed and cost-cutting with AI while sidelining the kind of complex, context-sensitive human judgment the law still requires. The promise of algorithmic objectivity is seductive, but as this case demonstrates, it can collapse into arbitrary discrimination when wielded without discipline or oversight.
Crunching the Numbers: The Scale and Impact of the $100 Million Grant Cancellation
The sheer volume—over $100 million in grant cancellations—signals just how much power DOGE handed over to ChatGPT. This sum represents a substantial segment of NEH’s annual grantmaking, though the source doesn’t break down the precise distribution among organizations or project types.
What’s clear: these are not trivial sums lost to academic rounding errors. NEH grants typically fund everything from historical research to community programming, often bridging the gap for organizations that can’t survive on ticket sales or donations alone. The abrupt cancellation, based on algorithmic filtering of DEI language, would have left a wide swath of recipients in limbo—potentially mid-project, with sunk costs and no recourse.
MLXIO analysis: We don’t have a list of affected organizations or a geographic map of the damage, but the scale alone suggests a shock to the humanities sector. For many recipients, especially smaller or first-time grantees, NEH funding is existential. DOGE’s experiment with automated grant culling didn’t just “streamline” government; it threatened to gut years of scholarly and cultural work with the push of an autocomplete button.
Legal Red Flags: How DOGE’s Use of ChatGPT Violated Constitutional Protections
Judge Colleen McMahon’s 143-page ruling doesn’t mince words. DOGE’s process was not just sloppy; it was unconstitutional. The court found that DOGE used ChatGPT to scan for “particular, protected characteristics” and denied continued funding on that basis. In plain terms: the presence of DEI-related language—often a proxy for race, gender, or other protected traits—became an automatic disqualifier.
That’s a direct hit against the Constitution’s equal protection guarantee: the Fourteenth Amendment’s Equal Protection Clause, which binds federal actors through the Fifth Amendment’s Due Process Clause and forbids government discrimination based on protected characteristics. The ruling makes clear that automating bias does not absolve the state of responsibility. In fact, by delegating sensitive decisions to a tool incapable of legal reasoning or cultural context, DOGE failed to meet even the basic standards of administrative due process.
AI can summarize, sort, and generate, but it cannot—and should not—substitute for the legal requirement to consider projects on their merits. The court’s message: government can’t hide behind code to avoid constitutional scrutiny.
Diverse Voices on AI in Government: Stakeholder Reactions to the Ruling
The humanities groups who sued DOGE saw the AI-driven grant cancellations as a clear sign of bias, not just a technical glitch. Their core argument, now vindicated by the court, was that an algorithmic filter for DEI content was a proxy for discrimination—and a fast track to silencing marginalized voices in the humanities.
DOGE, for its part, tried to defend the program as an efficiency measure. The department saw ChatGPT as a way to quickly “identify” grants for review in a politically fraught area. But the court wasn’t interested in efficiency if it came at the cost of constitutional rights.
MLXIO analysis: The debate here is not about whether AI belongs in the public sector, but about where and how it can be used. Stakeholders on both sides now face a reckoning—either build tools that respect legal boundaries, or risk judicial intervention that halts automation projects entirely.
Lessons from History: Comparing AI Missteps in Public Sector Automation
DOGE’s debacle isn’t the first AI misfire in government, but it’s among the most high-profile to be shot down in federal court. While the source does not cite specific precedents, the pattern is familiar: automated systems, when deployed without oversight or context, amplify existing biases and sidestep the legal guardrails that protect citizens from arbitrary state action.
Historically, rushed automation in government has led to wrongful denials of benefits, unaccountable black-box decisions, and, now, unconstitutional grant cancellations. The DOGE case is a stark reminder: technology doesn’t erase the need for human judgment—it raises the stakes for getting it right.
What This Means for Public Funding and AI Governance in the Humanities Sector
The ruling throws a wrench into any plan to automate public funding decisions with LLMs—or any AI model not explicitly trained for legal nuance. For the humanities sector, the message is blunt: grants can’t be filtered by buzzword or keyword, especially if those terms are linked to protected classes.
Policy changes are inevitable. Agencies will have to build in transparency, legal review, and, likely, human intervention at every step. Trust between funders and grantees has taken a hit—and restoring it will require more than a promise to “fix the prompt.”
MLXIO inference: Expect a chilling effect on AI adoption, at least for anything touching civil rights or protected categories. Humanities organizations, once burned, will demand proof that future tools serve fairness, not expedience.
Predicting the Future: How AI Regulation and Government Efficiency Will Evolve Post-Ruling
This case sets the stage for a new era of AI governance in the public sector. Regulatory reforms may soon require agencies to document not just what their algorithms do, but how they avoid discriminatory outcomes. “Black box” automation, especially for funding or benefits, is likely on borrowed time.
On the technical side, agencies may invest in explainable AI or hybrid models that keep humans in the loop. Full automation—especially for anything touching on protected characteristics—is now a legal and political minefield.
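What would “humans in the loop” actually look like? Here is a minimal sketch, entirely our own illustration rather than any agency’s real system: the model can only queue a grant for review and must log its rationale, while the final decision requires a named human reviewer and a written reason.

```python
# A minimal human-in-the-loop gate (illustrative only; not any agency's
# real system). The model may queue items and log a rationale; it has
# no code path that can cancel a grant on its own.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    grant_id: str
    model_rationale: str            # logged for audit and explainability
    human_decision: str | None = None

REVIEW_QUEUE: list[ReviewItem] = []

def route_for_review(grant_id: str, model_rationale: str) -> None:
    """The only action the model is permitted to trigger."""
    REVIEW_QUEUE.append(ReviewItem(grant_id, model_rationale))

def decide(item: ReviewItem, reviewer: str, decision: str, reason: str) -> None:
    """Final decisions require a named human and a written justification."""
    item.human_decision = f"{decision} by {reviewer}: {reason}"

# Hypothetical usage: the model flags ambiguity instead of deciding.
route_for_review("NEH-2024-0117", "Abstract mentions 'equity'; context unclear.")
decide(REVIEW_QUEUE[0], "program.officer@neh.example", "retain",
       "Term refers to land-ownership equity; project evaluated on its merits.")
```

The design choice that matters is structural: the audit trail and the human sign-off are not optional steps a model can skip; they are the only path by which a decision gets made.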
The next big watch item: Will other agencies quietly abandon or overhaul their own AI-driven reviews? Or will there be a wave of new lawsuits, as advocacy groups test the limits of this ruling? Evidence of real process change—or more “AI-washing” of old practices—will reveal whether public sector efficiency can ever coexist with constitutional fairness.
Impact Analysis
- Automating grant decisions with ChatGPT led to over $100 million in canceled funding, impacting humanities organizations nationwide.
- The judge’s ruling highlights legal risks when AI replaces nuanced human judgment in public administration.
- This case raises broader ethical questions about algorithmic bias and oversight in government decision-making.