The monorepo approach to code management

Codebases are as diverse, unique and interesting as the people who work on them. But almost all of them have this in common: they grow over time (the codebases, not the people). Teams expand, requirements grow, and time, of course, marches on; and so we end up with more developers writing more code to do more things. And while we’ve all experienced the joy of deleting large chunks of code, that rarely offsets the overall expansion of our codebases.

If you’re responsible for your organization’s codebase architecture, then at some point you have to make deliberate choices about how to manage this growth in a scalable way. There are two common architectural alternatives to choose from.

One is the “multi-repo” architecture, in which we split the codebase into increasing numbers of small repos, along subteam or project boundaries. The other is the “monorepo,” in which we maintain one large, growing repository containing code for many projects and libraries, with multiple teams collaborating across it.

The multi-repo approach can initially be tempting, because it seems so easy to implement. We just create more repos as we need them! We don’t, at first, appear to need any special tooling, and we can give individual teams more autonomy in how they manage their code.

Unfortunately, in practice the multi-repo architecture often leads to a brittle, inconsistent and change-resistant codebase. This in turn can encourage siloing in the engineering organization itself. In contrast, and perhaps counterintuitively, the monorepo approach is frequently a better, more flexible, more collaborative, long-term scaling solution.

Why is this the case? Consider that the hard problem in codebase architecture involves managing changes in the presence of dependencies, and vice versa. And in a multi-repo architecture, repos consume code from other repos via published, versioned artifacts, which makes change propagation much harder.

Specifically, what happens when we, the owners of repo A, need some changes in a consumed repo B? First we must find the gatekeepers of repo B and convince them to accept and publish the change under a new version. Then, in an ideal world, someone would find all the other consumers of repo B, upgrade them to this new version, and republish them. And now we must find the consumers of those initial consumers, upgrade and republish *them* against the new version, and so on, recursively and ad nauseam. 

But who is the “someone” who will do all this work? And how will they locate all these consumers? After all, dependency metadata lives on the consumer, not the consumed, and there is no easy way to backtrack dependencies. When a problem’s ownership is not immediate and its solution not obvious, it tends to get ignored, and so none of this effort actually happens in practice. 
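
To make that asymmetry concrete, here is a minimal sketch of what “finding all the consumers” actually requires in a multi-repo world: crawling every repo’s manifest to build a reverse-dependency index. The /checkouts layout, the flat requirements.txt format, and the repo-b name are all hypothetical; the point is that such an index is only as complete as the set of repos you already know about and can clone.

```python
import pathlib
from collections import defaultdict

def build_reverse_deps(checkout_root: pathlib.Path) -> dict[str, set[str]]:
    """Map each library to the set of repos that declare it as a dependency.

    Assumes (hypothetically) that every repo has been cloned under
    checkout_root and lists its dependencies in a flat requirements.txt,
    one 'name==version' per line.
    """
    consumers: dict[str, set[str]] = defaultdict(set)
    for manifest in checkout_root.glob("*/requirements.txt"):
        repo = manifest.parent.name
        for line in manifest.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            name = line.split("==")[0]
            consumers[name].add(repo)
    return consumers

# Who consumes repo B? Only answerable if we can enumerate *every* repo.
print(build_reverse_deps(pathlib.Path("/checkouts")).get("repo-b", set()))
```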

And that may be fine, at least for a short while, because the other repos are (hopefully!) pinned to the earlier version of the dependency. But this comfort is short-lived, because sooner or later several of these consumers will be integrated into a deployable artifact, and at that point someone will have to pick a single version of the dependency for that artifact. So we end up with a transitive version conflict caused by one team in the past and planted in the codebase like a time bomb, to blow up just as some other team needs to integrate code into production.
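
The eventual blow-up is easy to sketch. Suppose, purely hypothetically, that two services pin different versions of the same internal library and are later bundled into one deployable artifact; a conflict check like the one below (with invented service and version names) only surfaces the problem at integration time, long after the original change was made.

```python
from collections import defaultdict

def find_version_conflicts(
    lockfiles: dict[str, dict[str, str]],
) -> dict[str, dict[str, str]]:
    """Given each consumer's pinned dependency versions, report libraries
    that would need two different versions in a single deployable artifact."""
    pins: dict[str, dict[str, str]] = defaultdict(dict)
    for consumer, deps in lockfiles.items():
        for lib, version in deps.items():
            pins[lib][consumer] = version
    return {lib: by_consumer for lib, by_consumer in pins.items()
            if len(set(by_consumer.values())) > 1}

# Hypothetical: service X upgraded to repo-b 2.0, service Y never did.
print(find_version_conflicts({
    "service-x": {"repo-b": "2.0.0"},
    "service-y": {"repo-b": "1.4.0"},
}))
# -> {'repo-b': {'service-x': '2.0.0', 'service-y': '1.4.0'}}
```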

If this problem seems familiar, it’s because it’s an in-house version of the infamous “dependency hell” problem that commonly plagues codebases’ external dependencies. In the multi-repo architecture, first-party dependencies are treated, technically, like third-party ones, even though they happen to be written and owned by the same organization. So with a multi-repo architecture we’re basically choosing to take on a massively expanded version of dependency hell.

Contrast all this with a monorepo: all consumers live in the same source tree, so finding them can be as simple as using grep. And since there is no publishing step, and all code shares a single version (represented by the current commit), updating consumers transitively and in lockstep is procedurally straightforward. If we have good test coverage then we have a clear way of knowing when we’ve gotten it right.
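
As a rough illustration of how much simpler discovery becomes, here is a small sketch of the grep-style search described above, assuming a Python source tree and a hypothetical module name mylib; every hit is a consumer that can be updated in the same commit.

```python
import pathlib
import re

def find_consumers(repo_root: pathlib.Path, module: str) -> list[pathlib.Path]:
    """Find every file in one source tree that imports the given module —
    the monorepo equivalent of `grep -rl "import mylib" .`"""
    pattern = re.compile(
        rf"^\s*(from|import)\s+{re.escape(module)}\b", re.MULTILINE)
    return [path for path in repo_root.rglob("*.py")
            if pattern.search(path.read_text(errors="ignore"))]

# One tree, one version: no publishing step, no version skew to chase.
for consumer in find_consumers(pathlib.Path("."), "mylib"):
    print(consumer)
```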

Now, of course, “straightforward” is not the same as “easy”: upgrading the repo in lockstep might itself be no small effort. But that’s just the nature of code changes: no codebase architecture can remove the irreducible part of an engineering problem. A monorepo at least forces us to deal with the necessary difficulty now, without creating unnecessary difficulty later.

The multi-repo architecture’s tendency to externalize dependency hell onto others in the future is a manifestation of a wider problem related to Conway’s Law: “Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure”. A converse of sorts is also true: your organization’s communication structure tends to emulate the architecture around which that communication occurs. In this case, a fragmented codebase architecture can drive balkanization of the engineering organization itself. The codebase design ends up incentivizing gatekeeping and responsibility-shedding over jointly achieving shared goals, because those shared goals are not represented architecturally. A monorepo both supports and gently enforces organizational unity: everyone collaborates on a single codebase, and the lines of communication this imposes are exactly those that our organization needs in order to succeed in building a unified product.

A monorepo is not a panacea. It does require suitable tooling and processes to preserve performance and engineering effectiveness at scale. But with the right architecture and the right tooling you can keep your unified codebase, and your unified organization, humming along at scale.
