
How to Start Upskilling Your Organisation in AI (Without the Hype)
A practical approach to building AI capability across your workforce, based on what actually works
AI adoption across New Zealand has nearly doubled since 2023, with over 80% of organisations now using AI in some form. Yet only 24% of New Zealanders have received any formal AI training, and just 36% feel they have the skills to use AI properly. That's a big gap, and it's one that most Kiwi businesses are going to have to close if they want to keep up.
Here at DataSing, we work with organisations across New Zealand's public and private sectors, and we see the same pattern again and again: leadership knows AI matters, but nobody is quite sure where to start when it comes to building the skills to actually use it well. The good news is that you don't need a massive budget or a team of PhDs. You need a plan, some structure, and a willingness to learn by doing.
1. Assess, define, and plan
Before you spend a dollar on training, you need to know where you're starting from.
Audit your current skills and gaps. Survey your workforce to understand how familiar people actually are with AI. Not just "have you heard of ChatGPT?" but more like: How confident are you using AI tools day to day? Where are you spending time on repetitive tasks that could be done differently? What worries you about AI? The answers will surprise you, and they'll tell you where to focus.
One useful reference point here is the SFIA framework (Skills Framework for the Information Age), whose latest version adds a full set of AI and machine learning skills. SFIA breaks AI competencies into six areas: foundations, education and automation, system development, security and compliance, implementation and project management, and organisational management. It's free to use and gives you a structured way to map roles against skills, rather than guessing.
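To make the idea of mapping roles against skills concrete, here's a minimal sketch of a role-to-skills gap map, loosely inspired by the SFIA approach. All role names, skill labels, and target levels below are hypothetical illustrations, not real SFIA level definitions:

```python
# Sketch of a role-to-skills gap map. Role names, skill labels, and
# target levels are invented for illustration, not taken from SFIA.

# Target competency level (1-7) each role should reach per skill area.
targets = {
    "finance_manager": {"ai_foundations": 3, "security_compliance": 2},
    "data_engineer":   {"ai_foundations": 5, "system_development": 5},
}

# Self-assessed current levels, gathered from a skills survey.
current = {
    "finance_manager": {"ai_foundations": 1, "security_compliance": 1},
    "data_engineer":   {"ai_foundations": 4, "system_development": 3},
}

def skill_gaps(targets, current):
    """Return, per role, each skill area where the current level
    falls short of the target, and by how many levels."""
    gaps = {}
    for role, skills in targets.items():
        have = current.get(role, {})
        gaps[role] = {
            skill: level - have.get(skill, 0)
            for skill, level in skills.items()
            if have.get(skill, 0) < level
        }
    return gaps

print(skill_gaps(targets, current))
```

Even a rough map like this turns "we should train people" into a prioritised list of who needs what, which is the real value of using a structured framework.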
Tie everything to business goals. Generic "AI literacy" programmes tend to fizzle out. Instead, define specific, measurable outcomes. For example: reducing the time your team spends on monthly reporting by 40%, or cutting manual data entry by half. When people can see the link between learning a new tool and making their own work easier, they'll engage with the training.
Pick two or three quick wins. Don't try to boil the ocean. Identify a small number of use cases where AI can deliver visible value quickly. Maybe it's automating meeting summaries, or using Copilot to draft first versions of standard documents. Early wins build momentum and give sceptics something concrete to look at.
2. Tailor the training to the role
One-size-fits-all training doesn't work. A finance manager and a software developer need very different things from AI upskilling. Here's how we think about it:
Executives and senior leaders need to understand AI strategy, risk, compliance, and return on investment. They don't need to know how to build a model, but they do need to ask the right questions and make informed decisions about where to invest.
Managers and team leads need to spot AI opportunities within their teams and manage AI-assisted workflows. They're the ones who will identify where AI adds value in day-to-day operations, so they need enough hands-on experience to understand what's realistic and what isn't.
Domain experts (finance, HR, marketing, operations) need practical skills with AI tools that are relevant to their function. Think AI-assisted data analysis, content drafting, or predictive modelling using tools they already have access to.
Technical staff (developers, data engineers, IT) need deeper skills around model development, data engineering, MLOps, and integration. This is where formal training and certifications tend to have the most impact.
Everyone needs a baseline. That means understanding what AI is and isn't, the basics of prompt engineering, data privacy and ethics, and your organisation's policies on AI use. This baseline matters more than people think. MBIE research found that while 97% of New Zealand workers had heard of AI, only 34% could clearly explain what it actually is.
3. Learn by doing
The biggest mistake we see is treating AI upskilling as a classroom exercise. People learn AI by using it, not by watching slides about it.
Create safe spaces to experiment. Give people access to AI tools (ChatGPT, Copilot, Claude, or whatever fits your environment) in a sandbox where they can try things without worrying about breaking anything or breaching policy. Let them play. Some of the best use cases we've seen at client sites came from people just having a go.
Embed AI into actual work. Encourage people to use AI for real tasks: summarising meeting notes, drafting emails, analysing spreadsheet data, writing first drafts of reports. The goal is to make AI a normal part of how people work, not a separate thing they do on a Friday afternoon.
Use microlearning. Fifteen- to thirty-minute modules that people can fit between meetings and deadlines will get more traction than a full-day workshop. Short, practical, and frequent beats long and theoretical every time.
4. Build a culture around it
Technology adoption is really a people problem. If you get the culture right, the skills follow.
Find your AI champions. Every organisation has early adopters who are already experimenting. Find them, give them a platform, and let them share what they've learned. Peer-to-peer learning is more effective than top-down training for this kind of thing. Some of our clients run regular informal sessions (one team calls them "AI Vibe Hours") where people demo what they've been trying and share tips.
Reward the learning. Digital badges, certifications, shout-outs in team meetings, whatever works for your culture. The point is to signal that this matters and that people who invest time in it are valued for doing so.
Talk honestly about what AI means for jobs. This is the elephant in the room, and if you don't address it, people will fill the silence with their worst fears. Be upfront: AI is best at handling repetitive, time-consuming tasks, freeing people to focus on the work that actually needs human judgement, creativity, and relationship skills. Research from NewZealand.AI found that 62% of New Zealand businesses report that AI is creating new career opportunities rather than eliminating them. And 96% of Kiwi workers believe AI will create new forms of economic value rather than just eliminate jobs. Share these numbers. Have the conversation.
5. Measure what matters and keep iterating
You wouldn't launch a product without tracking whether it's working. The same goes for an upskilling programme.
The Kirkpatrick model is a useful framework here. It works across four levels: did people enjoy the training (satisfaction), did they actually learn something (competency gain), are they doing things differently as a result (behaviour change), and is the business seeing results (outcomes). Most organisations only measure the first two. The real value shows up in levels three and four.
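The four Kirkpatrick levels can be tracked as a handful of named metrics. This is a hypothetical sketch (the metric definitions and example values are invented for illustration, not a prescribed measurement scheme):

```python
# Sketch of tracking an upskilling programme across the four
# Kirkpatrick levels. Metric choices and values are illustrative only.
from dataclasses import dataclass

@dataclass
class KirkpatrickSnapshot:
    satisfaction: float      # Level 1: average post-training rating (1-5)
    competency_gain: float   # Level 2: average assessment score change (%)
    behaviour_change: float  # Level 3: share of staff using AI weekly (0-1)
    outcomes: float          # Level 4: hours saved per person per month

    def beyond_level_two(self) -> bool:
        """True if the programme is showing results past the first
        two levels, where most organisations stop measuring."""
        return self.behaviour_change > 0 or self.outcomes > 0

# Example quarterly snapshot with made-up numbers.
q1 = KirkpatrickSnapshot(satisfaction=4.2, competency_gain=18.0,
                         behaviour_change=0.35, outcomes=3.5)
print(q1.beyond_level_two())
```

The point of writing the metrics down explicitly is that levels three and four only get measured if someone has decided in advance what they look like.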
Set a rhythm for review. AI moves fast, and your skills map and training content will need updating every three to six months. What was cutting edge in January may be table stakes by July. Build in regular check-ins to reassess priorities, retire outdated material, and bring in new content.
Where SFIA fits in
We mentioned the SFIA framework earlier, and it's worth coming back to because it ties all of this together. SFIA 9 (the current version) includes specific skills for AI and machine learning across all seven responsibility levels, covering everything from AI and data literacy, through machine learning and AI ethics, to job analysis and design.
What makes SFIA particularly useful is that it's not a technology-specific framework. It describes what people need to be able to do, not which vendor's tools they need to know. That means it stays relevant as tools change, and it gives you a way to have consistent conversations about skills across the whole organisation, not just within IT.
SFIA also includes guidance on which tasks and responsibilities might be appropriate to assign to AI, and which should stay with humans. That's an increasingly important question, and having a structured way to think about it beats making it up as you go.
The framework is free to access and is used across government, corporate, and education sectors globally. For New Zealand organisations working in or with the public sector, it aligns well with the government's AI governance expectations.
Getting started
If this all feels like a lot, here's where we'd suggest you begin:
1. Pick one team or business unit and run a skills audit. Keep it simple: a short survey and a few conversations.
2. Identify one or two use cases where AI could save that team real time or effort.
3. Get people using the tools in a supported way, with clear guidelines and someone on hand to answer questions.
4. Measure what happens over four to six weeks, and share the results.
5. Use what you learn to plan the next phase.
Here at DataSing, we help New Zealand organisations build AI capability in a way that sticks. That means starting with the business problem, building skills at every level, and making sure the people doing the work are genuinely equipped and confident, not just aware that AI exists.
Written by
DataSing Team
AI Specialists