
Overview

Our client is a government organization with a team of six data analysts. The team creates and maintains data-driven reports and tools, such as dashboards and models, on daily, weekly, and monthly cycles. These reports are used by policy analysts, subject matter experts, service designers, and frontline managers.

It’s important that these reports are accurate, delivered on time, and easy to use.

The organization's data comes from six different sources, some as raw files and others in databases. Before any report could be produced, the team had to extract and transform this raw data, around 400 different data elements in all. Their data engineering team was small, and most of its expertise lay in data management rather than in building pipelines.


Only one person knew how to create efficient data pipelines using Azure Data Factory (ADF), which meant the process could take months or even a year. Meanwhile, the wider group of report consumers had to rely on outdated data to make decisions.

DataSing's flagship service, Tūi, combines various Azure tools into one easy-to-use package and allowed our client’s team to speed up their data processes. Using Tūi’s user interface, the data analysts could act as ETL (Extract, Transform, Load) experts and clear the backlog without needing advanced technical skills.


We built one flexible, configurable data pipeline that could handle all 400 data elements, no matter their source. There was no coding involved; the client just had to provide the location of the data and database connection details. Plus, since most of the data processing was automated, errors were greatly reduced.
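
To illustrate what a configuration-driven pipeline can look like in practice, here is a minimal Python sketch of the general pattern: each data element is described by a configuration entry (a file path, or a database connection plus a query), and a single generic loader handles all of them. The source names, paths, and connection details below are invented for the example, and the sketch does not represent Tūi's actual interface or internals.

    # Illustrative sketch only: a generic configuration-driven loader.
    # The config entries, paths, and connection strings are hypothetical
    # examples, not Tūi's real configuration format.
    import pandas as pd
    from sqlalchemy import create_engine

    # Each data element is described by configuration, not by custom code.
    SOURCES = [
        {"name": "service_requests", "type": "file",
         "path": "data/service_requests.csv"},
        {"name": "case_outcomes", "type": "database",
         "connection": "postgresql://user:password@host:5432/reporting",
         "query": "SELECT * FROM case_outcomes"},
    ]

    def load_source(source: dict) -> pd.DataFrame:
        """Load one configured source, whether a raw file or a database table."""
        if source["type"] == "file":
            return pd.read_csv(source["path"])
        if source["type"] == "database":
            engine = create_engine(source["connection"])
            return pd.read_sql(source["query"], engine)
        raise ValueError(f"Unknown source type: {source['type']}")

    def run_pipeline(sources: list[dict]) -> dict[str, pd.DataFrame]:
        """Run the same generic loading step over every configured data element."""
        return {source["name"]: load_source(source) for source in sources}

    if __name__ == "__main__":
        datasets = run_pipeline(SOURCES)
        for name, frame in datasets.items():
            print(name, frame.shape)

Under this kind of pattern, onboarding a new data source means adding another configuration entry rather than writing a new pipeline, which is what allows a single pipeline to absorb all 400 data elements.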

A project like this would usually have required a business case, funding approval, the assembly of an internal and/or external team, and many rounds of discussion, taking up to 12 months to complete.


Without a solution, the users would have had to create data insights from scratch, increasing the chances of mistakes and delays. Thanks to DataSing’s Tūi user interface, configuration-driven pipelines, and native catalogue, the whole process was finished in just under 8 weeks.


The project also set the team up for future work: no specialised skills are needed, and any new data sources they add will immediately benefit from the existing setup we created.
