
DevOps feedback loop explained: Noisy feedback

 Our previous story was devoted to delayed feedback. Today let’s look at what noisy feedback means for the speed of digital product delivery.

As you may recall from Part One, Alice joined the company to work on a digital product, with the specific goal of accelerating delivery. The engineering organization was relatively small, about 50 engineers, with three cross-functional teams of six engineers each, plus shared services for data, infrastructure, and user acceptance testing (UAT). Analysis showed that the largest share of time in the product delivery process was spent in testing after code development was complete.

Alice learned that the team had an automated regression suite that ran every night (taking four hours) and consistently showed about a 25% failure rate across 1,000 tests. Some engineers had tried to fix these failures, but with release deadlines and feature development taking priority, no one had done anything substantial about it. To keep the ball rolling and continue feature development, it became customary to skip over the results and move forward. It was easy to turn a blind eye to the noise of failed tests, especially when you knew that a failure was a test defect rather than a product defect. Ideally, the automated regression suite would have been finding product defects, as it was supposed to. Instead, failed tests mostly signaled issues with the environment in which the tests were executed. Typical issues included network latency leading to service timeouts, wrong versions of the components the product integrates with, network access problems, wrong libraries on the server running the application, corrupted data in the database, and so on.
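A first step toward taming such noise is to separate environment-related failures from genuine product defects automatically. Below is a minimal sketch of how that triage might look, assuming the nightly run produces a JUnit-style XML report; the signature patterns and the script itself are illustrative, not part of the team’s actual tooling.

```python
import re
import sys
import xml.etree.ElementTree as ET

# Hypothetical signatures of environment problems, not product defects.
ENV_SIGNATURES = [
    re.compile(p, re.IGNORECASE)
    for p in [
        r"timed? ?out",                        # network latency / service timeouts
        r"connection refused",                 # network access issues
        r"version mismatch",                   # wrong component versions
        r"no such module|library not found",   # missing libraries on the server
        r"integrity constraint|corrupt",       # bad data in the database
    ]
]

def triage(report_path: str) -> None:
    """Split nightly JUnit failures into 'environment' vs 'product' buckets."""
    env, product = [], []
    for case in ET.parse(report_path).getroot().iter("testcase"):
        failure = case.find("failure")
        if failure is None:
            continue
        message = (failure.get("message") or "") + (failure.text or "")
        bucket = env if any(s.search(message) for s in ENV_SIGNATURES) else product
        bucket.append(case.get("name"))
    print(f"environment-related failures: {len(env)}")
    print(f"likely product defects:      {len(product)}")

if __name__ == "__main__":
    triage(sys.argv[1])  # e.g. python triage.py nightly-report.xml
```

Even a rough classifier like this makes the signal visible: the handful of likely product defects can be investigated immediately instead of drowning among hundreds of environment failures.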

To investigate a failed test and distinguish an actual defect from environment misconfiguration or malfunction, the engineering team needed to dedicate a significant amount of time, given the accumulated volume of failures. And as you might suspect, most of the environmental issues were under the control of the infrastructure team and the data team. These teams were focused on the production environment and consumed by firefighting, keeping only a small capacity to support product delivery. As you can imagine, it was hard to find a common language among these three groups, since each was independently responsible for its own piece of value delivery but didn’t recognize the importance of working together on every value increment.

Such a situation had several adverse consequences: 

  • Trust in automated tests deteriorated: the engineering team stopped looking at the results
  • Quality degraded, since real defects needing attention were hidden under the noise
  • The shared teams stayed stuck in firefighting mode, largely because no one addressed environment consistency early in the process
  • Collaboration between teams suffered because of capacity constraints

Alice proposed to fix this fragile and inaccurate quality feedback from the nightly regression. She suggested gradually reducing the number of failed tests and blocking further feature development whenever the current threshold was not met. Given the starting point of 25% (250 failed tests), it seemed reasonable to set a target of 20% and then, in 3% increments, work down to an allowance of 2-3% failed tests. For a specific period, the product teams would therefore allocate a percentage of their capacity to addressing this “quality debt”: refactoring tests, fixing infrastructure, and resolving data issues that affected test results. She also proposed that, for the transition period, one DevOps engineer and one data engineer be dedicated to each team for at least a sprint, so the teams could challenge the status quo with the appropriate domain expertise. As an outcome, she expected to reduce the number of production incidents that distracted all of the groups.
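Enforcing such a ratchet can be as simple as a small gate script in the nightly pipeline that fails the build when the failure rate exceeds the current threshold. The sketch below assumes a per-release threshold schedule stepping from 20% down to 3%; the schedule, names, and command-line interface are all hypothetical.

```python
import sys

# Hypothetical ratchet schedule: allowed failure rate per release,
# stepping down from the initial 20% target in ~3% increments.
RATCHET = [0.20, 0.17, 0.14, 0.11, 0.08, 0.05, 0.03]

def gate(release_index: int, failed: int, total: int) -> int:
    """Return an exit code: 0 to proceed, 1 to block further development."""
    allowed = RATCHET[min(release_index, len(RATCHET) - 1)]
    rate = failed / total
    print(f"failure rate {rate:.1%}, allowed {allowed:.0%}")
    return 0 if rate <= allowed else 1

if __name__ == "__main__":
    release_index, failed, total = (int(a) for a in sys.argv[1:4])
    sys.exit(gate(release_index, failed, total))
```

For example, `python gate.py 0 230 1000` exits non-zero (23% failed against a 20% allowance) and blocks the pipeline, while `python gate.py 0 190 1000` lets it pass.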

To justify such a change from a financial point of view, she first needed to calculate how much production deployments and post-deployment incidents cost to address, along with the average cost of a defect in production (revenue loss and/or the labor cost of fixing the issue). Since her proposal required only a temporary investment while release-related production issues were a recurring cost, the benefit was easy to confirm quickly.

Let us take a look at the numbers: 

  • Revenue loss from defects varied from $100 per minute to $1,000 per minute, depending on the reputational consequences. Last year’s loss was estimated at half the cost of one full-time engineer (FTE).
  • Post-release stabilization typically kept one engineering team, plus the infrastructure and database teams, occupied for a couple of days. The last reporting period took three days, with six engineers from the product team and two engineers each from infrastructure and database: ten engineers for three days. Over the past few releases this added up to about 120 full-time engineering days.

And the required investment:

  • Three teams allocated 10% of their capacity to address these issues, which works out to about two engineers per release. Given the initial failure rate of 25%, they might need five to six releases to stabilize the regression suite, or about 12 full-time engineering days in total. The arithmetic is sketched below.
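To make the comparison concrete, here is a back-of-the-envelope version of the arithmetic using only the figures quoted above. One assumption is ours: a release cycle of roughly ten working days, so that 10% of capacity equals about one day per engineer per release, which reproduces the 12-day total.

```python
# Ongoing cost: post-release stabilization, per the figures in the story.
engineers_stabilizing = 6 + 2 + 2        # product + infrastructure + database
days_per_release = 3
releases_observed = 4
stabilization_days = engineers_stabilizing * days_per_release * releases_observed
print(f"stabilization cost: {stabilization_days} engineering days")    # 120

# One-off investment: 10% of three six-engineer teams for ~6 releases.
engineers_per_release = round(3 * 6 * 0.10)            # ~2 engineers
release_cycle_days = 10                                # assumed cycle length
days_per_engineer = 0.10 * release_cycle_days          # 1 day per release
releases_needed = 6
investment_days = engineers_per_release * days_per_engineer * releases_needed
print(f"required investment: {investment_days:.0f} engineering days")  # 12
```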

As you can see, the cost of defects leaking through because of the fragile environment substantially exceeded the required investment: about 120 engineering days versus 12. Therefore, after a discussion with the product manager, Alice got approval to start fixing the noisy feedback and improving its accuracy and value for the engineering team.

Alice’s story didn’t end here: she also investigated other issues, known as cascaded feedback and weak feedback. We will unpack those terms in the following stories.

To summarize this story, we would emphasize the importance of the feedback-loop frame when optimizing digital product delivery. In addition to a short time to feedback, feedback accuracy plays a vital role in ensuring the speed of delivery.

