Document Intelligence

Submission Analysis

6,000 submissions. 8 weeks to report. Every voice heard, every conclusion defensible.

The Business Problem

Consultations are broken

Public consultation is a cornerstone of democratic decision-making. But the process of actually analysing what the public said? It's stuck in the 1990s.

The Policy Analyst
"I have 6,000 submissions, 8 weeks to report, and I'm supposed to read every single one?"

You're doing your best with sampling. You're coding themes into a spreadsheet. You know you're missing things. You're worried about the ones you haven't read.

What if the submission you skipped is the one that ends up in the minister's inbox as a complaint?
The Team Lead
"Why do my three analysts code the same submission three different ways?"

You've tried coding guides. You've tried calibration sessions. But when volume hits, consistency drops. And you can't explain why "housing affordability" appears in one analyst's themes but not another's.

When the report goes to cabinet, will the analysis hold up under scrutiny?
The Minister's Advisor
"What are people actually saying? Not the themes, but the sentiment, the stories, the things that matter."

You get a 40-page report with charts. "64% mentioned environmental concerns." But you can't tell the minister what those concerns actually are, or why people are angry.

The minister asks a journalist's question, and you don't have the answer.
The Communications Lead
"How do we prove we listened to everyone fairly?"

Interest groups are already drafting their press releases. "Government ignored rural voices." "Young people's views weren't represented." You need to show (with evidence) that every voice was heard.

Tomorrow's headline: "Consultation a farce, say critics."

It's not about reading faster

The problem isn't that your analysts are slow. The problem is that manual analysis doesn't scale, and when it doesn't scale, you make trade-offs that undermine the entire purpose of consultation.

You sample. You skim. You miss things. And you can't prove you didn't.

23%
Average sampling rate

When volume exceeds capacity, agencies sample. Most consultations only deeply analyse a fraction of submissions.

47%
Inter-rater disagreement

Studies show analysts frequently disagree on theme classification. Under time pressure, this gets worse.

$180k+
Typical analysis cost

Large consultations require external support. Budget constraints force compromises on rigour.

14 weeks
Average turnaround

From consultation close to ministerial briefing. Policy windows close while analysis continues.

The Solution

AI that reads everything, so your analysts can focus on meaning

We don't replace human analysts. We eliminate the mechanical work that makes analysis slow, expensive, and inconsistent. Your experts focus on interpretation. The system handles extraction.

Complete extraction

Every submission is read in full. Themes, entities, sentiment, and specific claims are extracted systematically. No sampling required.

The difference: When the minister asks about a specific topic, you can say exactly how many submissions mentioned it, what they said, and show the evidence, even if it was mentioned in only 3 submissions out of 6,000.

Evidence chains

Every conclusion links back to source documents. Every theme summary includes citations. Every claim is traceable to the exact paragraph that supports it.

The difference: When an interest group challenges your methodology, you can export the complete analysis chain (from raw submission to final report) in minutes.
How It Works

From submissions to insights

A systematic process designed for auditability, consistency, and speed.

01

Ingest everything

Every submission. Every format. Every channel. PDFs, Word docs, emails, handwritten letters (OCR'd), online form responses, social media mentions. Nothing is excluded.

We normalise content, extract metadata, and assign stable identifiers. Each submission becomes a traceable unit. You can always find your way back to the original.

Complete coverage. No sampling. No "we didn't have time to read that one."
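For illustration, a minimal Python sketch of what a traceable submission record could look like. The field names and the hash-based identifier are assumptions about the approach, not the actual schema.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Submission:
    """One normalised submission, always traceable back to its original."""
    source_ref: str    # path or URL of the original PDF, email, or letter scan
    channel: str       # e.g. "email", "web_form", "post", "social"
    text: str          # normalised plain text (after OCR where needed)
    metadata: dict = field(default_factory=dict)  # submitter type, region, date, etc.

    @property
    def submission_id(self) -> str:
        # A stable identifier derived from the content itself, so the same
        # document maps to the same ID on every re-run of the pipeline.
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()[:16]
```

A content-derived identifier means re-ingesting the same file never creates a second record, which keeps counts honest.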
02

Extract themes and entities

AI identifies what each submission is actually saying. Themes, sub-themes, entities (people, places, organisations), entity relationships, sentiment, and specific claims.

Unlike keyword matching, we understand context. "I support the proposal" and "This proposal will destroy our community" both mention "the proposal", but they mean very different things.

Consistent classification. The same logic applied to submission 1 and submission 6,000.
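As a sketch of the kind of structured record this step could produce for each submission (the fields and labels below are illustrative assumptions, not the deployed taxonomy):

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class Extraction:
    """Illustrative structured output for a single submission."""
    submission_id: str
    themes: list[str]               # e.g. ["water quality", "consent process"]
    entities: dict[str, list[str]]  # e.g. {"place": ["Southland"], "org": ["Fish & Game"]}
    sentiment: Literal["support", "oppose", "mixed", "neutral"]
    claims: list[str]               # specific, quotable assertions


# Context matters: both of these mention "the proposal", but a contextual
# model should classify their stance very differently.
examples = [
    ("sub-0001", "I support the proposal.", "support"),
    ("sub-0002", "This proposal will destroy our community.", "oppose"),
]
```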
03

Build evidence chains

Every theme we identify links back to the specific submissions that support it. Every summary we generate comes with citations.

When we say "247 submissions raised concerns about water quality," you can click through and see exactly which 247 submissions, what they said, and where in the document they said it.

Audit-ready outputs. You can defend every conclusion with evidence.
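A minimal sketch of how a reportable finding could stay bound to its supporting passages; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Citation:
    """Pointer from a finding back to the exact supporting passage."""
    submission_id: str
    paragraph_index: int
    quote: str            # verbatim text from the source document


@dataclass
class ThemeFinding:
    """A reportable statement plus every citation that backs it."""
    theme: str
    summary: str
    citations: list[Citation]

    @property
    def submission_count(self) -> int:
        # A figure like "247 submissions raised concerns about water quality"
        # is derived from the citations themselves, never asserted separately.
        return len({c.submission_id for c in self.citations})
```

Because the count is computed from the citations, the headline number and the click-through evidence can never drift apart.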
04

Surface what matters

Beyond the majority themes, we identify emerging issues, unexpected patterns, and minority voices that might otherwise be lost.

The 12 submissions from a specific rural community raising a concern no one else mentioned? We flag it. The unusual coalition of interests agreeing on something? We surface it.

No voice lost in the noise. Important signals detected even at low volume.
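One way a low-volume but concentrated signal could be flagged is sketched below; the thresholds and the (theme, group) pairing are assumptions for illustration only.

```python
from collections import Counter


def flag_concentrated_themes(records, rare_below=0.01, concentrated_above=0.5):
    """Flag themes that are rare overall but clustered within one group.

    `records` is a sequence of (theme, group) pairs, one per submission that
    raised the theme. The thresholds are illustrative, not tuned values.
    """
    records = list(records)
    if not records:
        return []

    theme_totals = Counter(theme for theme, _ in records)
    pair_counts = Counter(records)
    total = len(records)

    flags = []
    for (theme, group), count in pair_counts.items():
        rare_overall = theme_totals[theme] / total < rare_below
        concentrated = count / theme_totals[theme] >= concentrated_above
        if rare_overall and concentrated:
            flags.append((theme, group, count))
    return flags
```

A theme raised by only 12 submissions, all from one rural community, would pass both tests and be surfaced for analyst attention rather than disappearing into the aggregate.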
05

Human review and refinement

Your analysts review AI outputs, validate classifications, add expert context, and refine the analysis. The system learns from every correction.

This isn't about replacing human judgment. It's about focusing human judgment where it matters. Analysts spend time on interpretation, not data entry.

Expert insight where it counts. Mechanical processing automated.
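A sketch of how an analyst override might be recorded so that corrections are both auditable and reusable; the structure is an assumption, not the actual review interface.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Correction:
    """An analyst's override of an AI classification, kept for the audit trail."""
    submission_id: str
    field: str             # e.g. "theme" or "sentiment"
    original_value: str    # what the system proposed
    corrected_value: str   # what the analyst decided
    reason: str            # the expert context behind the change
    reviewed_at: datetime


correction = Correction(
    submission_id="sub-0042",
    field="theme",
    original_value="transport",
    corrected_value="freight and logistics",
    reason="Submission is about port access, not commuter transport.",
    reviewed_at=datetime.now(timezone.utc),
)
```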
06

Generate decision-ready outputs

Ministerial briefings, public summary reports, detailed appendices, structured data exports. Ready for cabinet, ready for publication, ready for OIA.

Outputs include methodology documentation, representativeness analysis, and full citation chains. When someone asks "how did you reach that conclusion?", you have the answer.

From raw submissions to cabinet paper in weeks, not months.
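For example, a structured export record might carry the finding, the methodology note, and the citation chain together. The keys below are assumptions about what an audit-ready appendix needs, not a documented format.

```python
import json

export = {
    "finding": "247 submissions raised concerns about water quality",
    "theme": "water quality",
    "methodology": "full-corpus extraction with analyst review; no sampling",
    "citations": [
        {
            "submission_id": "sub-0187",
            "paragraph_index": 4,
            "quote": "<verbatim text from the source paragraph>",
        }
    ],
}

print(json.dumps(export, indent=2))
```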
The Outcome

What changes

10x
Faster

What took 12 weeks with manual analysis now takes 2 weeks. Same rigour, a fraction of the time.

A recent government consultation processed 8,400 submissions in 11 days, including human review and ministerial briefing.
100%
Coverage

Every submission analysed. No sampling, no shortcuts. The 50-page legal brief and the handwritten postcard both get full attention.

When asked "did you read my submission?", the answer is always yes, and you can prove it.
0
Inconsistencies

Same classification logic applied to every submission. Theme A in Auckland is Theme A in Southland.

Removes the variability of human coding under time pressure. Calibration happens once, in the system design.
Traceability

Every conclusion traces back to source documents. Citations, quotes, document references: all the way down.

OIA-ready from day one. "Here's our methodology, here's our evidence, here's how we reached that conclusion."
Honest Answers

The questions you should ask

The concern

"AI can't understand the nuance of public submissions."

Our answer

You're right that AI shouldn't make policy decisions. But it can reliably identify that 247 submissions mentioned water quality, extract what they said, and surface the quotes that matter. Your analysts still interpret meaning. They just don't spend weeks doing data entry first.

The concern

"We need human analysts to ensure fairness."

Our answer

Human analysts are essential for interpretation, context, and judgment. What they shouldn't be doing is the mechanical work of reading 6,000 submissions and entering themes into a spreadsheet. That's where inconsistency creeps in, where fatigue causes mistakes, where voices get lost.

The concern

"What if the AI misclassifies something important?"

Our answer

Every AI classification is reviewable. Analysts can see exactly why a submission was tagged a certain way, override incorrect classifications, and improve the system. More importantly, human analysts also misclassify things when they're processing 50 submissions a day under deadline pressure.

The concern

"Our consultation is different. It's complex."

Our answer

Complex consultations benefit most from systematic analysis. The more submissions, the more themes, the more nuance, the harder it is for manual processes to maintain consistency. We build domain-specific understanding into every deployment.

See it work on your submissions

Bring a sample from your next consultation. We'll show you exactly how the analysis would work: themes extracted, evidence chains built, ready for review.