How the open data lakehouse makes data mesh real—and radically expands data use for business

When Zhamak Dehghani introduced data mesh in 2019, she was acknowledging both the unmet expectations of business leaders and the major frustrations of technologists in the data warehousing world. Her talk channeled a decades-long groundswell of sentiment in the field, but most importantly, it described a better approach to analytical data management. Data mesh embraces data’s naturally distributed state, breaking down the monolithic thinking that has persisted in the data world even as cloud and microservices have transformed application development.

The data warehousing dream has become a nightmare

The dream that Teradata spun up more than 40 years ago with its purpose-built data warehouse has turned into a nightmare: data became subject to centralized, often proprietary management and vendor lock-in. Pipelines and technical implementations took center stage over business concerns. Siloed data engineering teams bore the brunt of moving and copying data, transforming it, and delivering useful datasets to every corner of the enterprise. Those engineers are often swamped with impossible backlogs of data requests, while business units wait in vain for data that is stale by the time it arrives. Even though data management tools have improved rapidly in the last five to ten years, many of these same problems have simply been imported to the cloud.

And the crux of the matter? Businesses have used only a small fraction of their vast, centralized stores of data to produce new products and offer customers value, because existing systems don’t let them operate on all of their data.

Now the data mesh concept advocates a decentralized architecture in which data is owned, and treated as a product, by the domain teams that know it most intimately: those creating, consuming, and resharing it. That ownership spurs more widespread use of data. With a data mesh, complexity is abstracted away into a self-serve, easy-to-use infrastructure layer, supported by a platform that offers both freedom and federated governance.

But how is this concept of a business-first, interoperable distributed system for data actually made real? 

The open data lakehouse answers data mesh’s call

An important achievement of the open data lakehouse is that it can be used as the technical foundation for data mesh. Data mesh aims to enable domains (often manifesting as business units in an enterprise) to use best-of-breed technologies to support their use cases. So the lakehouse, which allows domains to use all of their preferred tools directly on data as it lives in object storage, is a natural fit. For example, domains can use an engine like Spark to transform data, then a purpose-built tool to run interactive dashboards on that same data once it’s ready for consumption. The lakehouse’s inherent no-copy nature easily answers objections that have been leveled against some implementations of data mesh, which unfortunately resulted in a proliferation of data pipelines and copies.
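
To make this concrete, here is a minimal sketch of the transform step, assuming a hypothetical Iceberg REST catalog, bucket path, and table name (none of which come from the article) and that the Iceberg Spark runtime is on the classpath:

```python
# Minimal sketch: a domain team uses Spark to turn raw object-storage data
# into a consumable Iceberg table. The catalog name, URI, paths, and table
# names are illustrative assumptions, not from the article.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("sales-domain-transform")
    # Hypothetical Iceberg catalog ("lake") served by a REST catalog service
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "https://catalog.example.com")
    .getOrCreate()
)

# Read raw domain data directly from object storage...
raw = spark.read.parquet("s3://sales-domain/raw/orders/")

# ...transform it into a consumable data product...
daily = raw.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# ...and publish it once as an Iceberg table, with no copy into a warehouse.
daily.writeTo("lake.sales.daily_revenue").createOrReplace()
```

Once the table is written, any Iceberg-aware engine, including the dashboarding tool mentioned above, can serve queries from the same data in place.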

This flexibility persists as the organization evolves. Because data in an open lakehouse is stored in open formats on object storage, when a new engine emerges, domains can easily evaluate and adopt it directly on their lakehouse data. Open table formats like Apache Iceberg make it possible to use any engine while avoiding vendor lock-in.
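
As a companion to the sketch above, and under the same assumptions (the hypothetical catalog URI and the sales.daily_revenue table), a lightweight client such as pyiceberg can read that table with no Spark cluster at all, illustrating how a different or newer engine can be pointed directly at lakehouse data:

```python
# Minimal sketch: reading the same Iceberg table from a different engine.
# The catalog URI and table name are the illustrative assumptions used above.
from pyiceberg.catalog import load_catalog

catalog = load_catalog("lake", **{"type": "rest", "uri": "https://catalog.example.com"})
table = catalog.load_table("sales.daily_revenue")

# Scan the table directly from object storage into a pandas DataFrame.
df = table.scan().to_pandas()
print(df.head())
```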

Aside from providing openness and flexibility, lakehouses eliminate the need for data teams to build and maintain convoluted pipelines into data warehouses, as they provide data warehouse functionality and performance directly on object storage.

When implementing the technical platform for a data mesh, companies should look beyond the fundamental lakehouse attributes described above for a platform that enables self-service for data consumers. This is a business-first approach. Different platforms enable it at different levels of the architecture. For example, companies can provide a self-service UI for domain users to explore, curate, and share datasets in their semantic layer, and they can create dedicated compute resources for each domain so that no domain’s workloads are bottlenecked by another’s.

And while not every data lakehouse can connect to external sources across clouds and on-premises, the best implementations do, enabling data consumers to analyze and combine datasets regardless of location. For data mesh, it is also advantageous for business units to be able to manage these data products like code, which streamlines testing, improves workflows, and helps meet stringent availability, quality, and freshness SLAs.

Freeing IT from bottlenecks, empowering governance

When business units have a self-service experience at their fingertips to create, manage, document, and share data products, and to discover and consume other domains’ data products, IT can step back and focus on delivering a reliable, performant self-service platform to support the company’s analytics workloads. That data mesh-enabling platform makes implementation details like pipelines secondary to business needs. With the lakehouse, IT zeros in on establishing a common taxonomy, naming conventions, and SLAs for data products; applying fine-grained global access policies; and deploying the best compute engines for each domain directly on object storage, without worrying about rogue data copying.

Implementing data mesh may not be necessary for every company. But if an enterprise has a large number of business units that would benefit from sharing and combining one another’s data, and those units are bottlenecked by engineering whenever they try to share data or build their own datasets for lack of self-service capabilities, the data mesh approach is probably a good fit.

Engaging with data, analyzing it, and crafting data products should delight users and, above all, serve business goals; it should also empower cross-functional teams and open up a company’s volumes of data, often gathering dust in object stores, for vigorous use.

Dehghani described the paradigm shift as moving away from ingesting, extracting, and loading data, pushing it to and fro via centralized pipelines and monolithic data lakes, and toward a distributed architecture that serves data, makes it discoverable and consumable, publishes output through data ports, and supports a true ecosystem of data products. That is what the open data lakehouse makes concrete, putting the concept into practice.
