
Data mesh challenges common data assumptions

A new data architecture aimed at challenging preconceived notions of data and enabling organizations to scale and move faster was introduced at this month’s Starburst Datanova conference.

“The inconvenient truth is that despite increased investment in AI and data, the results have not been that great,” Zhamak Dehghani, director of emerging technology for Thoughtworks North America, said at the conference. Organizations are failing to bootstrap, failing to scale sources, failing to scale consumers, and failing to materialize data-driven value, she explained. 

Dehghani introduced the data mesh, which is “an intentionally designed distributed data architecture, under centralized governance and standardization for interoperability, enabled by a shared and harmonized self-serve data infrastructure.” The objective is to “create a foundation for getting value from analytical data and historical facts at scale,” she said, with scale being applied to:

  • constant change of data landscape; 
  • proliferation of both sources of data and consumers; 
  • diversity of transformation and processing that use cases require; 
  • speed of response to change.

The need for the data mesh grew out of the great data divide between operational data and analytical data. Operational data, also known as “data on the inside,” runs the business and serves the user while analytical data, or “data on the outside,” optimizes the business and improves user experience, she explained. 

“The way we have divided our organization and technology around these two separate [data] planes and the way we are integrating them through this ETL pipeline is the source of trouble just to start with,” she said. These data pipelines are very fragile and keeping them happy and healthy is very challenging.

Data mesh tries to introduce a new integration model that respects the differences between the two data planes, the technology, and how people access the data, Dehghani explained. 

But before you can understand the data mesh, you need to understand the evolution of data solutions, according to Dehghani.

Generation 1: Data warehousing, where you grab data, extract it and put it in a model for data analysts to access. “This has worked pretty well for use cases we had half a century ago but today we really need more,” said Dehghani. 

Generation 2: Data lakes, where solutions leveraged machine learning and removed the bottleneck of needing a specialized team to understand the data. “The challenge with data lake that we have seen is that now we’ve swung from this one canonical model to maybe not so much the modeling and we’ve ended up with data swamps — data that we are not clear who really owns them,” Dehghani explained.

To deal with challenges like data swamps, the answer has been the third generation, what is seen today: a multimodal data architecture on the cloud that takes the best parts of data lakes and the best parts of data warehousing and puts them on the cloud, she said. 

“We have been busy innovating and building technologies, so then why the failure modes we are seeing at scale?” Dehghani asked. “We need to challenge certain assumptions…and see what we can change.”

The data assumptions data mesh challenges are: 

  1. Data management solution architecture is monolithic: At its core, your enterprise architecture is expected to ingest data from a number of sources and provide data to a set of diverse use cases. While monolithic architectures are great to get started with because they are simple and usually only have one backlog, one solution, one vendor, one team, they become a pain when you try to scale, according to Dehghani.
  2. Data must be centralized to be useful: “When you centralize data for it to be useful, then you centralize the people around it, centralize the technology and you lose the ownership and the meaning of the data from the source,” said Dehghani.
  3. Scale architecture with top-level technical partitioning: Here you either have a domain-oriented architecture, or you break it down around technical tasks and functions. According to Dehghani, this technical decomposition causes more friction because the change does not localize to a technical function. The change, features, value, outcomes are orthogonal to these technical phases. 
  4. Architecture decomposition orthogonal to change: This brings organizations back to square one, where they are slow to change, slow to respond, and slow to scale.
  5. Activity-oriented team decomposition: Data engineers, data platform teams, and BI teams have been isolated from the domains, and in charge of building the pipeline and responding to change. This is challenging because on the left-hand side people running the business on the database have no incentive to provide meaningful, trustworthy, or quality data, and on the right-hand side customers are looking for new data and they are impatient.

The data mesh challenges these assumptions that have been accepted for years, and looks to see how else the architecture and ownership can be divided, and what the role of the platform and domain are, and then builds the technology to support it, according to Dehghani.  

The four principles of the data mesh are:

  1. Domain-oriented decentralized data ownership and architecture
  2. Data as a product
  3. Self-serve data infrastructure as a platform
  4. Federated computational governance
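The second principle, data as a product, means each domain publishes its data with the same care as a customer-facing product: a named owner, a discoverable schema, and quality guarantees. As a rough illustration, a domain team's data-product contract might look something like the following sketch (all names and fields here are hypothetical, not part of any data mesh tooling or of Dehghani's specification):

```python
from dataclasses import dataclass


@dataclass
class DataProduct:
    """A hypothetical contract a domain team publishes alongside its data."""
    name: str                       # discoverable identifier, e.g. "checkout.orders"
    owner: str                      # the domain team accountable for quality
    schema: dict                    # field name -> expected type, so consumers can self-serve
    freshness_sla_hours: int = 24   # how stale the data is allowed to become

    def validate(self, record: dict) -> bool:
        """Check that a record matches the published schema."""
        return all(
            key in record and isinstance(record[key], expected)
            for key, expected in self.schema.items()
        )


# The domain team that runs checkout owns and publishes its own product:
orders = DataProduct(
    name="checkout.orders",
    owner="checkout-team",
    schema={"order_id": str, "amount_cents": int},
)

print(orders.validate({"order_id": "A-1", "amount_cents": 499}))  # True
print(orders.validate({"order_id": "A-2"}))                       # False: missing field
```

The point of the sketch is the ownership model, not the code: the schema and its guarantees travel with the domain that produces the data, rather than being reconstructed downstream by a central pipeline team.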

“The needs are real and tools are ready. It is up to the engineers and leaders in organizations to realize that the existing paradigm of big data and one true big data platform or data lake, is only going to repeat the failures of the past, just using new cloud based tools,” Dehghani explained. “This paradigm shift requires a new set of governing principles accompanied with a new language.” 

The post Data mesh challenges common data assumptions appeared first on SD Times.


