A new data architecture that aims to challenge preconceived notions about data and enable organizations to scale and move faster was introduced at this month’s Starburst Datanova conference.
“The inconvenient truth is that despite increased investment in AI and data, the results have not been that great,” Zhamak Dehghani, director of emerging technology for Thoughtworks North America, said at the conference. Organizations are failing to bootstrap, failing to scale sources, failing to scale consumers, and failing to materialize data-driven value, she explained.
Dehghani introduced the data mesh, which is “an intentionally designed distributed data architecture, under centralized governance and standardization for interoperability, enabled by a shared and harmonized self-serve data infrastructure.” The objective is to “create a foundation for getting value from analytical data and historical facts at scale,” she said — with scale being applied to:
- constant change of data landscape;
- proliferation of both sources of data and consumers;
- diversity of transformation and processing that use cases require;
- speed of response to change.
The need for the data mesh grew out of the great divide between operational data and analytical data. Operational data, also known as “data on the inside,” runs the business and serves the user, while analytical data, or “data on the outside,” optimizes the business and improves the user experience, she explained.
“The way we have divided our organization and technology around these two separate [data] planes and the way we are integrating them through this ETL pipeline is the source of trouble just to start with,” she said. These data pipelines are very fragile, and keeping them happy and healthy is very challenging.
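As a rough illustration of the kind of pipeline Dehghani is describing, the minimal sketch below hard-codes an operational schema into the extract and transform steps of a centralized ETL job; any upstream change to that schema breaks the pipeline downstream. The table names, columns, and the use of SQLite are hypothetical and chosen only to keep the example self-contained.

```python
import sqlite3


def extract_orders(source: sqlite3.Connection) -> list[tuple]:
    # Tightly coupled to the operational table layout (hypothetical schema):
    # a column rename or type change upstream silently breaks this query.
    return source.execute(
        "SELECT order_id, customer_id, amount, created_at FROM orders"
    ).fetchall()


def transform(rows: list[tuple]) -> list[tuple]:
    # Reshape rows into the warehouse's canonical model (truncate timestamp to a day).
    return [(oid, cid, amount, ts[:10]) for (oid, cid, amount, ts) in rows]


def load(rows: list[tuple], warehouse: sqlite3.Connection) -> None:
    # Centralized load step, again hard-wired to one canonical model.
    warehouse.executemany(
        "INSERT INTO fact_orders (order_id, customer_id, amount_usd, day) "
        "VALUES (?, ?, ?, ?)",
        rows,
    )


if __name__ == "__main__":
    source = sqlite3.connect(":memory:")
    source.execute(
        "CREATE TABLE orders (order_id INT, customer_id INT, amount REAL, created_at TEXT)"
    )
    source.execute("INSERT INTO orders VALUES (1, 42, 19.99, '2021-02-01T10:00:00')")

    warehouse = sqlite3.connect(":memory:")
    warehouse.execute(
        "CREATE TABLE fact_orders (order_id INT, customer_id INT, amount_usd REAL, day TEXT)"
    )

    load(transform(extract_orders(source)), warehouse)
    print(warehouse.execute("SELECT * FROM fact_orders").fetchall())
```

The fragility comes from the coupling: the team that owns the operational `orders` table can change it without knowing anything about the warehouse model that depends on it.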
Data mesh tries to introduce a new integration model that respects the differences between the two data planes, the technology, and how people access the data, Dehghani explained.
But before you can understand the data mesh, you need to understand the evolution of data solutions, according to Dehghani.
Generation 1: Data warehousing, where you extract data from operational systems and put it into a model for data analysts to access. “This has worked pretty well for use cases we had half a century ago but today we really need more,” said Dehghani.
Generation 2: Data lakes, where solutions leveraged machine learning and removed the bottleneck of needing a specialized team to understand the data. “The challenge with data lake that we have seen is that now we’ve swung from this one canonical model to maybe not so much the modeling and we’ve ended up with data swamps — data that we are not clear who really owns them,” Dehghani explained.
To deal with challenges like data swamps, the answer has been the third generation, seen today: multimodal data architecture on the cloud, which takes the best parts of data lakes and data warehouses and puts them on the cloud, she said.
“We have been busy innovating and building technologies, so then why the failure modes we are seeing at scale?” Dehghani asked. “We need to challenge certain assumptions…and see what we can change.”
The assumptions that data mesh challenges are:
- Data management solution architecture is monolithic: At its core, your enterprise architecture is expected to ingest data from a number of sources and provide data to a diverse set of use cases. Monolithic architectures are great to get started with because they are simple and usually have only one backlog, one solution, one vendor, and one team, but they become a pain when you try to scale, according to Dehghani.
- Data must be centralized to be useful: “When you centralize data for it to be useful, then you centralize the people around it, centralize the technology and you lose the ownership and the meaning of the data from the source,” said Dehghani.
- Scale architecture with top-level technical partitioning: Here you either have a domain-oriented architecture, or you break the architecture down around technical tasks and functions. According to Dehghani, this technical decomposition causes more friction because change does not localize to a single technical function; features, value, and outcomes are orthogonal to these technical phases.
- Architecture decomposition orthogonal to change: This brings organizations back to square one, where they are slow to change, slow to respond, and slow to scale.
- Activity-oriented team decomposition: Data engineers, data platform teams, and BI teams have been isolated from the domains and put in charge of building the pipelines and responding to change. This is challenging because, on one side, the people running the business on operational databases have no incentive to provide meaningful, trustworthy, quality data, and on the other, consumers are looking for new data and are impatient.
The data mesh challenges these assumptions that have been accepted for years, looks at how else architecture and ownership can be divided and what the roles of the platform and the domains are, and then builds the technology to support it, according to Dehghani.
The four principles of the data mesh, illustrated with a brief sketch after the list, are:
- Domain-oriented decentralized data ownership and architecture
- Data as a product
- Self-serve data infrastructure as a platform
- Federated computational governance
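One hedged way to picture how these principles might come together in practice — not Dehghani’s specification, and with every name (`DataProduct`, `passes_governance`, the example fields) being hypothetical — is a domain-owned data product that publishes its own contract, plus a simple automated governance check for interoperability:

```python
from dataclasses import dataclass, field


@dataclass
class DataProduct:
    # Each domain team owns and publishes its own data product (principle 1 and 2).
    domain: str                       # owning domain, e.g. "orders"
    name: str                         # product name within the domain
    owner: str                        # accountable team or contact
    output_port: str                  # addressable location of the data (URI)
    schema: dict[str, str]            # column name -> type: the published contract
    tags: set[str] = field(default_factory=set)  # e.g. {"pii"} for governance rules


def passes_governance(product: DataProduct, required_fields: set[str]) -> bool:
    # A toy "federated computational governance" check (principle 4): every product
    # must name an owner and expose the globally agreed interoperability fields.
    return bool(product.owner) and required_fields <= product.schema.keys()


if __name__ == "__main__":
    orders = DataProduct(
        domain="orders",
        name="daily_order_facts",
        owner="orders-team@example.com",
        output_port="s3://data-products/orders/daily_order_facts/",
        schema={"order_id": "int", "customer_id": "int", "amount_usd": "float", "day": "date"},
        tags={"finance"},
    )
    print(passes_governance(orders, required_fields={"order_id", "day"}))
```

A self-serve data platform (principle 3) would then be the shared infrastructure that lets domain teams register, publish, and discover such products without a central data team in the loop.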
“The needs are real and tools are ready. It is up to the engineers and leaders in organizations to realize that the existing paradigm of big data and one true big data platform or data lake, is only going to repeat the failures of the past, just using new cloud based tools,” Dehghani explained. “This paradigm shift requires a new set of governing principles accompanied with a new language.”