
Why most policy analysis arrives too late to matter

The constraint on serious fiscal analysis stopped being intellectual a decade ago. It is now operational — and it is the reason your whitepaper landed three months after the news cycle had moved on.

Balázs Fehér · 6 min read

The standard production cycle for a serious fiscal whitepaper has not meaningfully changed since the 1980s. A draft budget arrives. A research director scopes the analysis. Three to eight analysts are pulled off other commitments. They split the document by chapter, build spreadsheets, debate methodology, write sections, integrate, edit, fact-check, lay out, translate, and finally publish — eight to twelve weeks later, depending on how generous one is being about the definition of "finally".

By then, the document being analysed has been voted on, the press cycle has moved through three subsequent crises, and the institutions that commissioned the analysis are quietly wondering whether next year they should commission anything at all.

This is not a failure of intelligence. It is a failure of throughput.

The constraint nobody names

The interesting thing about this constraint is how rarely it is named explicitly. Inside a research department, the slowness of the process is treated as a fact of nature — the way the weather is a fact of nature. People work around it. They start earlier. They commit more analysts. They simplify the scope. They cut the demographic appendix. They publish the executive summary first and the full report “in due course” (a phrase which, in policy publishing, means either next quarter or never).

What they do not do is question whether the constraint itself is necessary.

This is partly cultural. The slow pace is read as a feature: it implies care, deliberation, multiple sets of eyes. And in a world where the alternative was a faster but visibly worse analysis, it would be a feature. The choice was real.

It is no longer real.

What changed, precisely

Three things changed in roughly the last three years, and the combination is what makes the old throughput constraint optional rather than inevitable.

First, language models became reliable enough at structured analytical work that it is no longer interesting to argue about whether they can read a budget chapter and classify line items into a defined taxonomy. They can. The interesting questions have moved on to which taxonomy, with what validation, against what deterministic ground truth.
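The classification step described above can be made concrete. A minimal sketch, assuming a hypothetical closed taxonomy and a deterministic validation pass that rejects any model-assigned label outside it (the category names and interface are illustrative, not a real pipeline's):

```python
# Hypothetical sketch: validate model-assigned labels against a closed taxonomy.
# The taxonomy and the data are illustrative assumptions.

TAXONOMY = {"personnel", "capital", "transfers", "debt_service", "other"}

def validate_classification(line_items, labels):
    """Pair each budget line item with its model-assigned label,
    rejecting any label outside the defined taxonomy."""
    if len(line_items) != len(labels):
        raise ValueError("one label required per line item")
    validated, rejected = [], []
    for item, label in zip(line_items, labels):
        if label in TAXONOMY:
            validated.append((item, label))
        else:
            rejected.append((item, label))  # sent back for re-classification
    return validated, rejected

items = ["Teacher salaries", "New bridge construction", "Innovation grants"]
labels = ["personnel", "capital", "speculative"]  # last label is invalid
ok, bad = validate_classification(items, labels)
```

The point of the deterministic gate is that "which taxonomy, with what validation" becomes a code review question rather than a prompt-engineering one.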

Second, the orchestration layer matured. It became routine to run twenty specialised agents in parallel against twenty chapters of the same document, with structured outputs that downstream stages could reason over. This is not the same thing as “put the document into ChatGPT.” It is a different shape of system, and it produces a different shape of output.
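One way to picture that orchestration layer is a fan-out over chapters with structured outputs collected in order. A sketch with a stubbed-out per-chapter agent (in a real system the agent would call a language model; everything here is an illustrative assumption):

```python
from concurrent.futures import ThreadPoolExecutor

def chapter_agent(chapter_id, text):
    """Stub for a specialised agent. A real implementation would call a
    language model and return a structured finding for one chapter."""
    return {"chapter": chapter_id, "n_tokens": len(text.split()), "findings": []}

def analyse_document(chapters):
    """Run one agent per chapter in parallel and collect structured
    outputs that downstream stages can reason over."""
    with ThreadPoolExecutor(max_workers=len(chapters)) as pool:
        futures = [pool.submit(chapter_agent, i, ch)
                   for i, ch in enumerate(chapters)]
        return [f.result() for f in futures]  # preserves chapter order

results = analyse_document(["revenue text", "spending text", "debt text"])
```

Because each agent emits a fixed schema rather than free prose, the downstream integration stage can merge twenty chapters mechanically, which is the concrete difference from pasting the document into a chat window.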

Third — and this is the part that gets least attention — the cost of being wrong about a number dropped sharply once it became standard to compute aggregates deterministically in a verification layer rather than letting them flow through the language model. The model reasons; deterministic code does the arithmetic. Each plays to its strength.
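The division of labour in that verification layer can be sketched as follows: the model supplies classifications, but the totals are recomputed deterministically from the source amounts, and any model-claimed aggregate is checked against them. All names and the tolerance here are assumptions for illustration:

```python
def verify_aggregates(classified_items, claimed_totals, tolerance=0.005):
    """Recompute per-category totals from source amounts and flag any
    model-claimed total that disagrees beyond the tolerance (0.5%)."""
    computed = {}
    for category, amount in classified_items:
        computed[category] = computed.get(category, 0.0) + amount
    discrepancies = {}
    for category, claimed in claimed_totals.items():
        actual = computed.get(category, 0.0)
        if abs(claimed - actual) > tolerance * max(abs(actual), 1.0):
            discrepancies[category] = (claimed, actual)
    return computed, discrepancies

items = [("personnel", 120.0), ("personnel", 30.0), ("capital", 80.0)]
claimed = {"personnel": 150.0, "capital": 95.0}  # capital total is wrong
totals, flagged = verify_aggregates(items, claimed)
```

The flagged categories never reach the published report; they go back for re-extraction, which is why the audit trail can be defended line by line.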

The combined effect is not a speedup of 2× or 3×. It is closer to 20×. A serious fiscal whitepaper that previously took eight weeks now takes six days. The first time you watch this happen on a real document, your instinct is to assume you have lost something — that the speed must have come from skipping a step. It has not. The steps are all still there. They are running in parallel against a different substrate.

What this changes for institutions

The institution that figures this out first inside a given country gets an asymmetric prize: it becomes the first publisher inside the news cycle of every major fiscal event. This is a much bigger deal than it sounds.

In the old model, the budget came out, three months passed, and then think-tank analysis arrived to a public that had already formed its impression and moved on. The think tank’s output served as historical record more than as input. Nobody at the time was waiting to read it.

In the new model, the analysis arrives the same week — the same week — as the budget itself. It enters the news cycle while the news cycle is still forming. It shapes the questions journalists ask in the next interview round. It supplies the costed counter-numbers that opposition spokespeople reach for in the next day’s rebuttal. It becomes the substrate of the public conversation rather than the post-mortem.

The institutions that operate at the old cadence are not, in this configuration, slow versions of the new institutions. They are different institutions, contributing to a different stage of the discourse. They are not in the conversation; they are providing material for the eventual academic literature about it.

This is a positional difference, not a quality difference. And like most positional differences in publishing, it tends to be self-reinforcing.

What this does not change

It is worth being precise about what the new throughput regime does not solve.

It does not change which analytical frame is correct. An Austrian-school read of a budget remains an Austrian-school read; a post-Keynesian read remains a post-Keynesian read; both are now produced faster, but their disagreements with each other are unchanged.

It does not change the editorial judgement involved in deciding which findings to lead with, which to bury, and which to leave out entirely. That work is still human, still contested, and still where most of the actual political signal lives.

It does not change the validation problem. A faster pipeline that hallucinates is worse than a slow pipeline that does not. Every output still needs to be defensible, line by line, against the source document. The audit trail is not optional; it is the deliverable.

What it changes is the cadence at which serious analysis can enter the public record. And in the publishing of policy ideas, cadence is much closer to power than the field has typically been willing to admit.

The institutions that figure this out first

A small number of institutions in each country will figure this out in the next eighteen months. They will not necessarily be the largest or the best-funded. They will be the ones whose research directors are willing to redesign their production cycle around the new constraint set, rather than treating the new tools as marginal speedups within the old one.

The rest will publish, eventually, in due course.

From DeepPolicy

Run the same methodology against your document.

DeepPolicy is the policy intelligence pipeline behind these notes. If you have a budget, a bill, or a regulatory consultation that needs this kind of treatment at scale, we should talk.
