Building a modern data platform is a transformative endeavour, particularly for organisations aiming to unlock the value of their data. While IT teams often focus on building robust, scalable infrastructure, the real KPI for a successful data platform is adoption by business users. Business teams, who typically sponsor these projects, expect quick and measurable returns on investment (ROI), so the platform must support both well-defined, familiar use cases and exploratory projects that help uncover new insights.
In the early Proof of Concept (POC) phase, business teams often operate in what can be termed the “known-known” stage. They understand the specific data product they want to create and have clarity on the data sources required for this purpose. Developing data products with this level of clarity is generally straightforward. Because the required data sources are known, data engineers can quickly build pipelines, address data quality issues, and test the product. Once the business team validates the product, it can be easily moved to production, often using agile methods and CI/CD processes that streamline deployment.
The Agile methodology, widely used in software and web development, has demonstrated its value in accelerating development cycles and enhancing product quality through iterative improvements. DataOps teams frequently try to replicate these agile principles, using them to build data products quickly when the requirements and data sets are clearly defined. For these well-understood use cases, agile development allows teams to swiftly create, test, and move data pipelines from development to production environments, giving business users faster access to valuable data insights.
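In practice, the "create, test, and move" loop for a known-known pipeline often reduces to small transformation functions covered by automated tests that run in CI before promotion to production. The sketch below is illustrative only; the function name, fields, and cleaning rules are hypothetical, not taken from any particular platform:

```python
def normalise_orders(rows):
    """Clean raw order records: drop incomplete rows, coerce types,
    and standardise the currency code (hypothetical example schema)."""
    cleaned = []
    for row in rows:
        # Skip records missing the fields downstream reports rely on.
        if not row.get("order_id") or row.get("amount") in (None, ""):
            continue
        cleaned.append({
            "order_id": str(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": str(row.get("currency", "USD")).upper(),
        })
    return cleaned

def test_normalise_orders_drops_incomplete_rows():
    raw = [
        {"order_id": "A1", "amount": "19.99", "currency": "eur"},
        {"order_id": "", "amount": "5.00"},    # missing id: dropped
        {"order_id": "A2", "amount": None},    # missing amount: dropped
    ]
    assert normalise_orders(raw) == [
        {"order_id": "A1", "amount": 19.99, "currency": "EUR"}
    ]
```

Because the transformation is a pure function, a CI job can run such tests on every change, which is what makes the rapid dev-to-production promotion described above safe.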
However, real-world use cases often extend beyond the known-known stage. These projects tend to be more exploratory and complex, falling into an “unknown-unknown” category. Here, business users or data scientists may not know what data products they need at the outset. Instead, they require a platform where they can explore and discover data, experimenting with different data sets to surface new insights or identify patterns. For these exploratory projects, the data platform must provide access to clean, up-to-date, and well-organised data that users can readily interact with to fuel innovation and uncover hidden insights.
Ensuring that the platform supports exploration requires substantial investment in data engineering. The data engineering team must design processes that automate data preparation and leverage machine learning to handle large volumes of data and complex data transformations. Automated data preparation enables the platform to consistently ingest, clean, and organise data, making it accessible and ready for analysis. This level of automation is essential for ensuring that the platform provides a seamless experience, allowing business users to focus on discovery without the distractions of data wrangling or quality issues.
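The ingest-clean-organise sequence can be sketched in miniature as three composable steps. This is a toy illustration with made-up field names; a real platform would run equivalent logic in an orchestrator with proper quality checks rather than in-process functions:

```python
import csv
import io

# Hypothetical raw feed with the messiness real ingestion encounters:
# missing names, inconsistent casing, stray whitespace, bad numbers.
RAW_FEED = """customer,country,revenue
Acme,de,1200
,fr,300
Globex,DE,  950
Initech,us,not_a_number
"""

def ingest(raw_text):
    # Parse the raw feed into one dictionary per record.
    return list(csv.DictReader(io.StringIO(raw_text)))

def clean(records):
    # Drop rows missing a customer name or with unparseable revenue;
    # standardise country codes and strip stray whitespace.
    cleaned = []
    for rec in records:
        name = (rec.get("customer") or "").strip()
        try:
            revenue = float((rec.get("revenue") or "").strip())
        except ValueError:
            continue
        if not name:
            continue
        cleaned.append({
            "customer": name,
            "country": (rec.get("country") or "").strip().upper(),
            "revenue": revenue,
        })
    return cleaned

def organise(records):
    # Group cleaned records by country so analysts can explore by market.
    by_country = {}
    for rec in records:
        by_country.setdefault(rec["country"], []).append(rec)
    return by_country

prepared = organise(clean(ingest(RAW_FEED)))
```

Running this drops the two malformed rows and leaves the surviving records grouped under a single "DE" key, which is exactly the kind of quiet, repeatable tidying that lets business users start from analysis-ready data.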
The adoption of machine learning in data preparation also enhances the platform’s ability to support unknown-unknown projects. Machine learning models can assist in identifying patterns, anomalies, and relationships within the data, helping business users derive meaningful insights faster. Additionally, these models can automate tasks such as data classification, entity matching, and anomaly detection, which would otherwise be labour-intensive and time-consuming.
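To make the anomaly-detection idea concrete, the toy function below flags outliers with a simple z-score rule. This is deliberately a statistical stand-in, not the ML model the paragraph describes; a production platform would train a proper model (for example an isolation forest), but the automation pattern, scanning a column and surfacing suspicious values without human review, is the same:

```python
import statistics

def flag_anomalies(values, threshold=2.5):
    """Flag values whose z-score exceeds the threshold.

    A toy statistical stand-in for an ML-based anomaly detector:
    same interface (data in, flagged outliers out), simpler internals.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical: nothing to flag
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily revenue figures with one suspicious spike.
daily_revenue = [100, 102, 98, 101, 99, 103, 97, 100, 1000]
outliers = flag_anomalies(daily_revenue)  # → [1000]
```

The value of wiring such detectors into the preparation pipeline is that the labour-intensive review the paragraph mentions happens continuously and automatically, so business users see flagged records rather than hunting for them.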
A successful modern data platform must be designed with both structured and exploratory use cases in mind. By combining agile development practices for known data products with automated data preparation and machine learning for exploratory projects, organisations can maximise their platform’s value. This approach not only accelerates ROI but also promotes widespread adoption, transforming the data platform from a simple IT infrastructure into a powerful tool for business innovation.