Submission Analysis
6,000 submissions. 8 weeks to report. Every voice heard, every conclusion defensible.
“Give every man thy ear, but few thy voice.”

Consultations are broken
Public consultation is a cornerstone of democratic decision-making. But the process of actually analysing what the public said? It's stuck in the 1990s.
"I have 6,000 submissions, 8 weeks to report, and I'm supposed to read every single one?"
You're doing your best with sampling. You're coding themes into a spreadsheet. You know you're missing things. You're worried about the ones you haven't read.
"Why do my three analysts code the same submission three different ways?"
You've tried coding guides. You've tried calibration sessions. But when volume hits, consistency drops. And you can't explain why "housing affordability" appears in one analyst's themes but not another's.
"What are people actually saying? Not the themes, but the sentiment, the stories, the things that matter."
You get a 40-page report with charts. "64% mentioned environmental concerns." But you can't tell the minister what those concerns actually are, or why people are angry.
"How do we prove we listened to everyone fairly?"
Interest groups are already drafting their press releases. "Government ignored rural voices." "Young people's views weren't represented." You need to show (with evidence) that every voice was heard.

It's not about reading faster
The problem isn't that your analysts are slow. The problem is that manual analysis doesn't scale, and when it doesn't scale, you make trade-offs that undermine the entire purpose of consultation.
You sample. You skim. You miss things. And you can't prove you didn't.
When volume exceeds capacity, agencies sample. Most consultations only deeply analyse a fraction of submissions.
Studies show analysts frequently disagree on theme classification. Under time pressure, this gets worse.
Large consultations require external support. Budget constraints force compromises on rigour.
From consultation close to ministerial briefing, the weeks stack up. Policy windows close while analysis continues.
AI that reads everything, so your analysts can focus on meaning
We don't replace human analysts. We eliminate the mechanical work that makes analysis slow, expensive, and inconsistent. Your experts focus on interpretation. The system handles extraction.
Complete extraction
Every submission is read in full. Themes, entities, sentiment, and specific claims are extracted systematically. No sampling required.
Evidence chains
Every conclusion links back to source documents. Every theme summary includes citations. Every claim is traceable to the exact paragraph that supports it.

From submissions to insights
A systematic process designed for auditability, consistency, and speed.
Ingest everything
Every submission. Every format. Every channel. PDFs, Word docs, emails, handwritten letters (OCR'd), online form responses, social media mentions. Nothing is excluded.
We normalise content, extract metadata, and assign stable identifiers. Each submission becomes a traceable unit. You can always find your way back to the original.
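
To make "traceable unit" concrete, here is a minimal sketch, in Python, of what a normalised submission record could look like. The field names and the content-derived identifier are our own illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
import hashlib

@dataclass
class Submission:
    """One normalised submission, whichever channel it arrived through."""
    source_channel: str        # e.g. "email", "web_form", "post_ocr"
    received: date
    raw_text: str              # full normalised text of the submission
    metadata: dict = field(default_factory=dict)

    @property
    def submission_id(self) -> str:
        # Stable identifier derived from the content itself, so the same
        # document always resolves to the same ID and every downstream
        # citation can be traced back to the original.
        key = f"{self.source_channel}|{self.received.isoformat()}|{self.raw_text}"
        return hashlib.sha256(key.encode("utf-8")).hexdigest()[:12]
```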
Extract themes and entities
AI identifies what each submission is actually saying. Themes, sub-themes, entities (people, places, organisations), entity relationships, sentiment, and specific claims.
Unlike keyword matching, we understand context. "I support the proposal" and "This proposal will destroy our community" both mention "the proposal", but they mean very different things.
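
As a rough sketch of the kind of structured record this step could produce (the fields and labels are illustrative, not the platform's schema), those two "proposal" sentences end up looking nothing alike once stance and claims are captured:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Extraction:
    """What might be recorded for one submission (illustrative fields only)."""
    themes: list[str]                    # e.g. ["water quality"]
    entities: list[tuple[str, str]]      # (name, type), e.g. ("Southland", "place")
    sentiment: Literal["support", "oppose", "mixed", "neutral"]
    claims: list[str]                    # specific, quotable assertions

# Both sentences "mention the proposal"; only a contextual reading
# separates the stances behind them.
supportive = Extraction(themes=["proposal support"], entities=[],
                        sentiment="support",
                        claims=["I support the proposal"])
opposed = Extraction(themes=["community impact"], entities=[],
                     sentiment="oppose",
                     claims=["This proposal will destroy our community"])
```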
Build evidence chains
Every theme we identify links back to the specific submissions that support it. Every summary we generate comes with citations.
When we say "247 submissions raised concerns about water quality," you can click through and see exactly which 247 submissions, what they said, and where in the document they said it.
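
One way to picture an evidence chain, purely as an illustration: the headline count is computed from the citations rather than stored beside them, so the number and the evidence behind it cannot drift apart. The types below are our own sketch, not the product's data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    submission_id: str   # stable ID assigned at ingestion
    paragraph: int       # where in the document the point is made
    quote: str           # the exact supporting text

@dataclass
class ThemeSummary:
    theme: str
    citations: list[Citation]

    @property
    def submission_count(self) -> int:
        # The "247 submissions" figure is derived from the citations,
        # so clicking through always lands on the documents that
        # produced the number.
        return len({c.submission_id for c in self.citations})
```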
Surface what matters
Beyond the majority themes, we identify emerging issues, unexpected patterns, and minority voices that might otherwise be lost.
The 12 submissions from a specific rural community raising a concern no one else mentioned? We flag it. The unusual coalition of interests agreeing on something? We surface it.
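
As a hedged illustration of one way such a flag could work (the thresholds and function are our own, not a description of the actual system), group each theme's submissions by region and flag any theme concentrated in a single community:

```python
from collections import defaultdict

def flag_localised_themes(tagged, min_count=5, concentration=0.8):
    """Flag themes raised almost entirely by one community.

    `tagged` is an iterable of (submission_id, region, theme) tuples.
    A theme is flagged when at least `min_count` submissions raise it
    and at least `concentration` of them come from a single region.
    """
    by_theme = defaultdict(lambda: defaultdict(set))
    for submission_id, region, theme in tagged:
        by_theme[theme][region].add(submission_id)

    flags = []
    for theme, regions in by_theme.items():
        total = sum(len(ids) for ids in regions.values())
        region, ids = max(regions.items(), key=lambda kv: len(kv[1]))
        if total >= min_count and len(ids) / total >= concentration:
            flags.append((theme, region, len(ids), total))
    return flags
```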
Human review and refinement
Your analysts review AI outputs, validate classifications, add expert context, and refine the analysis. The system learns from every correction.
This isn't about replacing human judgment. It's about focusing human judgment where it matters. Analysts spend time on interpretation, not data entry.
Generate decision-ready outputs
Ministerial briefings, public summary reports, detailed appendices, structured data exports. Ready for cabinet, ready for publication, ready for OIA.
Outputs include methodology documentation, representativeness analysis, and full citation chains. When someone asks "how did you reach that conclusion?", you have the answer.
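
For instance, a representativeness appendix might compare each group's share of submissions with its share of the affected population. The helper below is a simplified sketch of that calculation, not the platform's method:

```python
def representativeness(submission_shares, population_shares):
    """Ratio of each group's share of submissions to its share of the
    affected population (both expressed as fractions).

    A ratio well below 1.0 flags a group whose voice may be
    under-represented in the raw submission pool.
    """
    return {
        group: round(submission_shares.get(group, 0.0) / share, 2)
        for group, share in population_shares.items()
        if share > 0
    }

# representativeness({"rural": 0.08, "urban": 0.92},
#                    {"rural": 0.16, "urban": 0.84})
# -> {"rural": 0.5, "urban": 1.1}
```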
What changes
What took 12 weeks with manual analysis now takes 2 weeks. Same rigour, fraction of the time.
Every submission analysed. No sampling, no shortcuts. The 50-page legal brief and the handwritten postcard both get full attention.
Same classification logic applied to every submission. Theme A in Auckland is Theme A in Southland.
Every conclusion traces back to source documents. Citations, quotes, document references: all the way down.
The questions you should ask
"AI can't understand the nuance of public submissions."
You're right that AI shouldn't make policy decisions. But it can reliably identify that 247 submissions mentioned water quality, extract what they said, and surface the quotes that matter. Your analysts still interpret meaning. They just don't spend weeks doing data entry first.
"We need human analysts to ensure fairness."
Human analysts are essential for interpretation, context, and judgment. What they shouldn't be doing is the mechanical work of reading 6,000 submissions and entering themes into a spreadsheet. That's where inconsistency creeps in, where fatigue causes mistakes, where voices get lost.
"What if the AI misclassifies something important?"
Every AI classification is reviewable. Analysts can see exactly why a submission was tagged a certain way, override incorrect classifications, and improve the system. More importantly, human analysts also misclassify things when they're processing 50 submissions a day under deadline pressure.
"Our consultation is different. It's complex."
Complex consultations benefit most from systematic analysis. The more submissions, the more themes, the more nuance, the harder it is for manual processes to maintain consistency. We build domain-specific understanding into every deployment.
Related capabilities
Submission analysis is powered by our core AI platform.
