
From 'Self-Service BI' to 'Self-Serve Answers': What's the Operating Model?
The shift from dashboards to AI-assisted answers needs new roles, processes, and ways of working
For the best part of a decade, the promise of self-service BI has gone something like this: give people a dashboard, connect it to the data, and they'll find their own answers. In practice, that's not what happened. What happened was a ticket queue.
Someone in policy or operations has a question. They don't know SQL. They can't find the right table. The dashboard doesn't slice the way they need. So they log a request with the data team. The data team is already juggling twenty other requests. The question sits in a queue for a week, maybe two. By the time the answer comes back, the decision has already been made on gut feel, or the moment has passed entirely.
This pattern is everywhere, and if you've worked in or around a New Zealand government agency or a mid-sized organisation, you'll recognise it immediately. Self-service BI gave people charts. It didn't give them answers.
Something different is now possible. Natural language querying, AI assistants sitting on top of governed data, and semantic layers that define what metrics actually mean are changing the game. But the technology isn't the hard part. The hard part is the operating model: the roles, the processes, and the ways of working that need to shift if you want to move from "we have dashboards" to "people can actually ask questions and get trustworthy answers."
The old model: request, wait, receive
Most data teams today still operate like an internal service desk. Business users submit requests ("Can I get a report on X?", "Can you add a filter for Y?"), and the data team works through them in roughly the order they arrive. There might be a Jira board or a shared inbox, but the pattern is the same: a queue of tickets, processed one at a time.
This model made sense when BI was the domain of specialists. Building a dashboard in the early days of Tableau or Power BI required real technical skill, and the tools were expensive. Centralising that work was reasonable.
But the model has well-known problems. It creates bottlenecks. It turns data teams into order-takers rather than strategic partners. It frustrates business users who feel they can't get what they need quickly enough. And it means the data team spends most of its time on low-value ad-hoc requests instead of building things that scale.
The shift we're seeing, and the one we think matters, is from ticket queues to product backlogs. Instead of treating every request as a one-off item to be fulfilled, the data team starts thinking of its outputs as products: governed datasets, semantic models, NLQ interfaces, and AI-assisted tools that serve many users repeatedly. Requests don't disappear, but they get triaged against a backlog that's prioritised by business value, not just who shouted loudest or who logged the ticket first.
New roles for a new model
Moving from dashboards to AI-assisted answers doesn't just need new technology. It needs people in clearly defined roles who own different parts of the system. Here are four that we think matter most.
Semantic product owner. If you're going to let people query data using natural language, something needs to sit between the raw tables and the questions people ask. That something is usually called a semantic layer, and it's where you define what your metrics actually mean. "Revenue" sounds simple until you realise that finance counts it one way, operations counts it another, and the CEO's slide deck uses a third definition. A semantic layer locks down a single, governed definition and makes it available everywhere. The semantic product owner is the person responsible for this layer. They own the backlog of metric definitions, dimensional models, and business rules.
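To make this concrete: a semantic layer can start as little more than a registry of governed metric definitions with a single owner per metric. The sketch below is illustrative only; the metric names, SQL expressions, and owner labels are assumptions for the example, not any particular product's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """A single governed metric: one name, one definition, one owner."""
    name: str
    description: str
    sql: str            # the canonical calculation, defined once
    owner: str          # the semantic product owner accountable for it
    synonyms: tuple = ()  # other names people use for the same thing

# The registry is the single source of truth. "Revenue" gets exactly one
# definition here, however many teams use the word differently.
REGISTRY = {
    m.name: m
    for m in [
        Metric(
            name="revenue",
            description="Recognised revenue, GST exclusive",
            sql="SUM(invoice_amount_excl_gst)",
            owner="semantic-product-owner",
            synonyms=("sales", "income"),
        ),
    ]
}

def resolve(term):
    """Map any known name or synonym to the governed metric, or None."""
    term = term.lower().strip()
    for metric in REGISTRY.values():
        if term == metric.name or term in metric.synonyms:
            return metric
    return None
```

The point of the design is that every downstream tool, whether a dashboard, an NLQ interface, or an AI assistant, resolves "sales" and "revenue" to the same governed calculation instead of reinventing it.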
Data steward. Data stewardship isn't new, but the role is changing. In the old model, a data steward was often focused on cataloguing: documenting what tables exist, who owns them, what the columns mean. In the new model, stewardship becomes more operational. A data steward is responsible for data quality, freshness, and fitness for purpose. They make sure the data feeding the semantic layer is accurate and up to date.
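The shift from cataloguing to operational stewardship can be expressed as automated fitness-for-purpose checks that run before data feeds the semantic layer. A minimal sketch, assuming rows carry a `loaded_at` timestamp; the thresholds and field names here are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

def check_fitness(rows, max_age_hours=24, required_fields=("region", "value")):
    """Return a list of findings; an empty list means fit for purpose."""
    findings = []
    if not rows:
        return ["table is empty"]
    # Freshness: data feeding the semantic layer must be recent.
    newest = max(r["loaded_at"] for r in rows)
    if datetime.now(timezone.utc) - newest > timedelta(hours=max_age_hours):
        findings.append(f"stale: newest row older than {max_age_hours}h")
    # Completeness: required fields must be populated on every row.
    for f in required_fields:
        missing = sum(1 for r in rows if r.get(f) in (None, ""))
        if missing:
            findings.append(f"{missing} rows missing '{f}'")
    return findings
```

In practice these checks would run on a schedule, with the steward owning the thresholds and triaging the findings.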
NLQ product manager. Natural language querying is the front door to self-serve answers. It's the interface where a policy analyst types "How many workplace injuries were reported in the Canterbury region in the last quarter?" and gets an answer, without writing SQL or navigating a dashboard. The NLQ product manager owns this experience, managing the backlog of improvements and monitoring how people are actually using it.
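Under the hood, the front door works by grounding free-text questions in the governed vocabulary rather than guessing at raw tables. The toy sketch below uses naive keyword matching to show the shape of that step; a real system would use an LLM or semantic search, and every name here is an illustrative assumption.

```python
# Governed vocabulary the NLQ layer is allowed to ground against.
METRICS = {"injuries": "workplace_injury_count"}
DIMENSIONS = {"region": ["Canterbury", "Otago", "Waikato"]}

def ground(question):
    """Map a free-text question onto governed metric and dimension names.

    Returns None when the question can't be grounded -- that miss is
    the feedback signal that goes on the product backlog, not a ticket.
    """
    q = question.lower()
    metric = next((m for keyword, m in METRICS.items() if keyword in q), None)
    if metric is None:
        return None
    filters = {
        dim: value
        for dim, values in DIMENSIONS.items()
        for value in values
        if value.lower() in q
    }
    return {"metric": metric, "filters": filters}
```

So the analyst's question about Canterbury injuries grounds to `{"metric": "workplace_injury_count", "filters": {"region": "Canterbury"}}`, and a question the vocabulary can't cover returns nothing, which is exactly the usage data the NLQ product manager monitors.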
AI assurance lead. This is the role that barely existed two years ago but is quickly becoming essential, especially in the public sector. New Zealand's Public Service AI Framework sets clear expectations: agencies need to use AI in ways that are safe, transparent, and accountable. The AI assurance lead is the person who makes this real inside an organisation. They design and maintain the governance gates that AI outputs pass through before they reach end users.
From queues to backlogs: what changes day to day
The shift from ticket queues to product backlogs isn't just a change in tooling. It changes how work flows through the data team.
Requests become feedback, not orders. When a business user asks a question that the NLQ system can't answer well, that's a signal, not a ticket. It goes on the backlog as an improvement to the semantic model or the query grounding, not as a one-off report to build.
Sprint cadences replace queue processing. The data team works in short cycles, picking the highest-value items from the backlog each sprint. Stakeholders can see what's planned, what's in progress, and what's coming next, instead of wondering where their request sits in an invisible queue.
Reuse beats bespoke. Every improvement to the semantic layer or knowledge graph benefits all users, not just the one who raised the request. Over time, the system gets smarter and more capable, reducing the total volume of requests rather than just processing them faster.
Governance is built in, not bolted on. Because the AI assurance lead is part of the product team, governance checks happen during development, not after deployment. Risk assessment is a standing item, not an afterthought.
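The "built in, not bolted on" idea can be sketched as a gate every AI-generated answer passes before it reaches a user. The specific checks below are assumptions for illustration, not a prescribed assurance framework; the point is that they run in code, during development, as part of the product.

```python
def governance_gate(answer):
    """Run assurance checks on an AI-generated answer before release.

    Returns (passed, reasons); any failed check blocks the answer.
    """
    reasons = []
    # Transparency: every answer must cite a governed source.
    if not answer.get("sources"):
        reasons.append("no governed source cited")
    # Accountability: low-confidence answers go to a human instead.
    if answer.get("confidence", 0.0) < 0.7:
        reasons.append("confidence below review threshold")
    # Safety: answers grounded outside the semantic layer are blocked.
    if not answer.get("grounded", False):
        reasons.append("not grounded in the semantic layer")
    return (not reasons, reasons)
```

Because the gate is code, it runs on every answer, and the AI assurance lead owns its thresholds the same way the semantic product owner owns metric definitions.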
Why this matters now in New Zealand
The timing is right for this shift, particularly in the public sector. New Zealand's AI Strategy positions AI investment as a national capability shift, with $213 million allocated for tuition and training subsidies and $64 million for STEM and priority areas in Budget 2025. The Public Service AI Framework provides the governance guardrails. The GCDO's work programme is actively building out assurance models and toolkits.
Meanwhile, organisations are sitting on years of accumulated data in platforms like Azure, Snowflake, and Databricks, but struggling to get value from it at the speed their people need. The bottleneck isn't data. It's access, trust, and the operating model around it.
Here at DataSing, we believe the answer isn't another dashboard. It's a governed, AI-ready data layer with clear ownership, roles that match the new reality, and the right architecture underneath. That's what we're building with our clients, and it's what our R&D in knowledge graphs and ontology-driven NLQ is designed to accelerate.
Getting started
If you're thinking about making this shift, here's where we'd suggest you begin:
1. Audit your current request flow. How many data requests are in your queue right now? How long do they take to fulfil? What percentage of them are variations on the same question?
2. Define your first semantic products. Pick two or three key metrics that cause the most confusion or inconsistency across your organisation. Lock down the definitions. Make them available in a governed way.
3. Stand up the roles. You don't need to hire four new people tomorrow. Start by assigning the responsibilities, even if one person wears multiple hats initially. The point is to have clear ownership.
4. Run a small NLQ pilot. Pick a contained dataset and a willing team. Let them ask questions in natural language. See what works, what breaks, and what needs better grounding.
5. Align with your governance framework. If you're in the public sector, map your approach to the Public Service AI Framework. If you're in the private sector, the NIST AI RMF is a solid reference point.
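The audit in step 1 lends itself to a quick script against a ticket export. A sketch, assuming each ticket has `opened` and `closed` dates and a free-text `summary`; the field names and the crude duplicate detection are illustrative assumptions.

```python
from collections import Counter
from statistics import median

def audit(tickets):
    """Summarise a request queue: volume, turnaround, repeat questions."""
    open_now = [t for t in tickets if t.get("closed") is None]
    done = [t for t in tickets if t.get("closed") is not None]
    days = [(t["closed"] - t["opened"]).days for t in done]
    # Crude repeat detection: normalise summaries and count duplicates.
    counts = Counter(t["summary"].lower().strip() for t in tickets)
    repeats = sum(n for n in counts.values() if n > 1)
    return {
        "in_queue": len(open_now),
        "median_days_to_fulfil": median(days) if days else None,
        "pct_repeat_questions": round(100 * repeats / len(tickets)) if tickets else 0,
    }
```

A high repeat percentage is the tell: those are the questions that should become semantic products in step 2, rather than tickets fulfilled one at a time.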
The shift from self-service BI to self-serve answers isn't a technology upgrade. It's an operating model change. And like all operating model changes, it starts with clarity about who does what, and why.
Written by
DataSing Team
Data Platform Specialists