
The surprising truth about re-platforming databases to public cloud

Enterprises are moving full steam to the public cloud, undeterred by what is happening in the economy. If anything, the volatility brought on by COVID-19 has underscored the benefits of the cloud, and the prospect of flexibility and scalability has further accelerated the movement. Enterprises no longer view the public cloud as merely Infrastructure as a Service (IaaS). Instead, they are looking for a highly integrated enterprise platform.

Data is the heart of the matter. In particular, data warehousing has emerged as the backbone of cloud data strategies. Every CIO must now solve the challenge of re-platforming workloads from on-prem systems to cloud-native ones.

Curiously, instead of new vendors sweeping the floor with their legacy counterparts, a very different dynamic is emerging. Snowflake, for example, which positioned itself as the #1 destination for cloud migrations, recently had to admit that it isn't that easy after all.

On the other hand, an incumbent boldly proclaimed that vendor lock-in will keep it in business for a very long time. And its surging stock price suggests that analysts may see it the same way.

What's going on here? Why is there not more turnover in a database market of such high value? The answer is as simple as it is disheartening: moving between databases is a cruelly difficult business. Fueled by overly optimistic advertising, many simply underestimate the challenge.

Database migrations have an abysmal track record

The industry has long grappled with the problem of migrating between databases. Not surprisingly, enterprises suffer when vendor lock-in holds them back from tapping new technology and innovation. Talk to any enterprise IT leader and you will learn that a considerable amount of time and money is continually spent trying to keep up with new technology and to move from old systems to new ones.

More concretely, any proposal to move the workloads of a mid-range data warehouse from one vendor's system to another's is an eye-watering experience. Typical estimates run upward of 3 years, with a price tag of at least $20m. And that's just the opening gambit. Once the migration is underway, things very often spiral out of control: $20m becomes $50m, and 3 years become 5. Finally, a new CIO puts an end to it altogether just to stop the bleeding.

For each successful migration, there are about 6-10 failed ones. Even the successful ones are not always convincing. Often, a successful migration is little more than a partial offloading, where the legacy system continues to run the complex workloads that were just too difficult to move. The result is an ever-increasing fragmentation of the enterprise IT landscape and, with it, growing technical debt.

Application rewrite is the true problem

Fueled by the grand statements of database migration tool vendors, customers often fall into the trap of thinking that transferring content from the old system to the new one is the problem. It is a critical part, no question. But it accounts for only a small fraction of the cost.

The lion's share of the pain, and of the cost, comes from rewriting applications. Applications need to be adjusted to work with the new database. Vendor-specific SQL, tools, and utilities have found their way into every crevice of the enterprise. Even inside third-party systems, custom SQL was once a critical element in accelerating the business. What was a competitive advantage then has turned into a liability now.
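
To make the rewrite burden concrete, here is a hypothetical illustration in Python of the kind of statement that hides inside application code: a query written in a vendor-specific dialect (Oracle-style outer-join and date syntax in this made-up example) alongside the ANSI-standard form it would have to become on a new platform. The table and column names are invented.

```python
# Hypothetical example: the same query before and after a dialect rewrite.
# Table and column names are invented; the point is the syntactic drift an
# application accumulates around one specific database.

legacy_query = """
    SELECT o.order_id, c.customer_name
    FROM   orders o, customers c
    WHERE  o.customer_id = c.customer_id (+)   -- vendor-specific outer join
    AND    o.order_date >= SYSDATE - 30        -- vendor-specific date arithmetic
"""

ansi_query = """
    SELECT o.order_id, c.customer_name
    FROM   orders o
    LEFT JOIN customers c ON o.customer_id = c.customer_id
    WHERE  o.order_date >= CURRENT_DATE - INTERVAL '30' DAY
"""

if __name__ == "__main__":
    # In a real migration, every such statement has to be found, rewritten,
    # and re-tested against the new system.
    print(legacy_query)
    print(ansi_query)
```

Multiply this by the thousands of statements embedded in ETL jobs, BI reports, and third-party tools, and the scale of the rewrite becomes apparent.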

None of this should come as a surprise. So why the high failure rate? Shouldn't we, as an industry, know better by now? First, until recently there simply wasn't an alternative, so we soldiered on. Second, the problem is extremely treacherous: the first 80% of a migration is often a walk in the park and tricks teams into believing they are about done. It's the last 20% that kills migrations.

The difficulty of that last 20% comes from the tight interplay between applications and database content. Any compromise during content transfer (missing data types, lack of support for stored procedures, etc.) exponentially increases the difficulty of the application rewrite. Staging a migration to convert content first and applications independently afterward is a recipe for failure.

Bespoke solutions are non-solutions

Today, the industry resorts to bespoke solutions. That is fancy talk for cutting corners. Yet we have been doing it for so long that it has become folklore. Ask any IT leader and they will associate database migrations with failed projects that overran their budget and were far behind schedule by the time they were killed off.

However, as database technology becomes more commoditized, there is less and less room for these bespoke solutions. The enterprise that doesn't have to resort to a bespoke solution can run faster and outperform its competitors. The pace of tech adoption then truly becomes a competitive advantage.

Time for a new paradigm

Much effort is being devoted to speeding up migrations: automatic code conversion, in particular, is an area of intense development. But instead of speeding up something known to be ultimately insufficient, a new paradigm is in order. As Henry Ford said: "If I had asked people what they wanted, they would have said 'faster horses'."

Similarly, the database industry needs to break out of the cycle of old approaches that haven't delivered. The problem isn't new: other areas of IT have faced exactly the same challenge, and practically every one of them has been redefined in the past 20 years by virtualization. From servers to storage and networking, virtualization has eviscerated migration challenges across the board.

With the industry-wide need to migrate database systems to the cloud, virtualization (the disintermediation of applications and database systems) is the logical next step. While still a young discipline, the first products are already on the market, with more under active development. The future in which applications can move seamlessly between databases has just begun.
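
As a rough sketch of what such disintermediation could look like, the snippet below shows a hypothetical translation layer that sits between an unmodified application and the new database, rewriting a few legacy-dialect constructs on the fly. All class names, function names, and rewrite rules are invented for illustration; this is not any vendor's actual API.

```python
# Minimal, illustrative sketch of a database virtualization layer.
# Names and rewrite rules are invented; real products work very differently.

import re


class DialectTranslator:
    """Rewrites a handful of legacy-dialect constructs into ANSI SQL."""

    RULES = [
        (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "CURRENT_TIMESTAMP"),
        (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
    ]

    def translate(self, sql: str) -> str:
        for pattern, replacement in self.RULES:
            sql = pattern.sub(replacement, sql)
        return sql


class VirtualizedConnection:
    """Wraps a DB-API connection to the new database so the application can
    keep sending the vendor-specific SQL it was originally written against."""

    def __init__(self, target_connection, translator: DialectTranslator):
        self._target = target_connection
        self._translator = translator

    def execute(self, sql: str, params=()):
        # The application is unchanged; only the SQL in flight is adapted.
        cursor = self._target.cursor()
        cursor.execute(self._translator.translate(sql), params)
        return cursor
```

In practice, such a layer operates at the wire-protocol and query level rather than with simple text substitution, and it also has to emulate missing features such as stored procedures; the sketch only conveys where the layer sits and why the application itself does not need to change.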




from SD Times https://ift.tt/3qJiU23
