Guest View: The de-evolution of software testing

Software testing is nearing the end of its Cretaceous period.  Personally, I invite the proverbial asteroid to advance its destructive approach so the practice of software testing can continue down its much-needed evolutionary journey. Don’t get me wrong, software testing has not been totally stagnant; it did evolve during its Cretaceous period.  The most significant shift was at the top of the testing food chain, as developers evolved to accept more responsibility for software quality. This distribution of the onus of quality is a critical stepping stone for the industry’s next evolutionary leap.  

The evolution of software testing has been – in comparison to other technologies – slow. If you agree that software testing as a practice has been sluggish, then we need to take a step back and ask: “Why are we in this situation?” This article will explore the two main reasons why I believe software testing has not evolved as fast as it should; in a follow-up article, I will offer my hopes for software testing’s natural selection.

Two main reasons software testing has not evolved
I believe that there are two main reasons why software testing has not evolved: organizations are handcuffed by the global system integrators (GSIs) and testing has had a messed-up organizational structure.  

Between the two, which is the chicken and which is the egg? If software quality had a stronger reporting hierarchy, could the GSIs have exerted so much control? Did the GSIs abuse their position and successfully mute the internal opposition? I have my guesses, but I would love to hear your opinion.

Handcuffed by the GSIs
Let’s start this discussion with the GSIs because the topic is significantly more incendiary. The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor to reduce OpEx. In this model, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task. The shift to the GSIs had three main detrimental impacts on software testing: the model promoted manual task execution, the adoption of automation was sidelined, and there was a business process “brain drain,” or loss of knowledge.

Given the comparatively lower cost of labor (an average of 2.5 to 1), the GSI model primarily structured and executed tasks manually. The GSIs painted a picture of an endless supply of technical resources clamoring to work 24/7, compared to complacent domestic resources. It conjured images of the secretarial pool (without iPhones) hammering away at test plans at 60% of the current spend. With an abundance of human capital, there is really no impetus to promote automation. As for the domestic operation, costs were contained for the time being as software testing was demoted from being a strategic task.

It’s obvious, but needs to be highlighted, that the GSI model that favored manual execution of tasks also sidelined automation efforts. Why? In the GSI model, automation potentially eliminates headcount and reduces testing cycle times. Less headcount plus reduced cycle times equates to fewer billable hours and reduced revenue in a time-and-materials model. Therefore, the benefits of automation certainly would not serve the financial goals of the GSI. Furthermore, if you suggested automation to your service provider, the GSI suggested that they build it for you. All GSIs today sit on millions of lines of dead code that represent the efforts to build one-off automation projects. That dead code also represents millions of dollars in billable hours.

Perhaps the greatest impact on the evolution of software testing was the business and process brain drain. With lower OpEx as the bait, the global software testing services market swelled to $32 billion annually (that is “B” as in billion). This tectonic shift drained resources with deep business process knowledge from the domestic organization. The net effect of this brain drain was less impactful outcomes from the activity of testing. What’s my evidence?

  • Severely swollen test suites 
  • No concept of risk or priority in test suites (see the sketch after this list)
  • Metrics driven by count of tests
  • False positive rates >80%
  • Abandoned test suites because the code is too far out of sync with the tests
  • There’s more but this is getting too depressing…
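
To make the point about risk and count-based metrics concrete, here is a minimal, hypothetical sketch (the test names, risk tiers, and weights are mine, purely for illustration) of what tagging tests with business risk and reporting a risk-weighted pass rate, rather than a raw count, might look like:

    # Hypothetical sketch: tag each test with the business risk it covers and
    # report a risk-weighted pass rate instead of a raw count of tests.
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        risk: str      # "high", "medium", or "low": business impact if this area breaks
        passed: bool

    RISK_WEIGHT = {"high": 5, "medium": 2, "low": 1}  # illustrative weights

    def risk_weighted_pass_rate(suite):
        """Share of risk-weighted coverage that is currently passing."""
        total = sum(RISK_WEIGHT[t.risk] for t in suite)
        passing = sum(RISK_WEIGHT[t.risk] for t in suite if t.passed)
        return passing / total if total else 0.0

    suite = [
        TestCase("checkout_applies_tax", "high", True),
        TestCase("profile_avatar_upload", "low", False),
        TestCase("invoice_rounding", "medium", True),
    ]

    # A count-based dashboard reports "2 of 3 passed" no matter which test failed;
    # the risk-weighted view shows what the failures actually put at stake.
    print(f"{sum(t.passed for t in suite)} of {len(suite)} tests passed")
    print(f"{risk_weighted_pass_rate(suite):.0%} of risk-weighted coverage passing")

A failing low-risk test barely moves the risk-weighted number, while a failing high-risk checkout test would sink it; a pure test count cannot tell the two apart.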

Let me be very open about my opinion on this matter. Organizations traded process control for lower costs. In the post-Y2K world this seemed like a pretty good idea, since software primarily served an operational purpose. Today, software is the primary interface to the business, and any facet of its delivery should be considered a core competency.

Testing has had a messed-up organizational structure
Testing has historically reported into the development team, and this was a BIG mistake. Testing should have always reported to operations. I cannot think of a single reason why testing should not report to operations. In fact, if testing did report to operations, then I believe the practice of testing software would be in a significantly different evolutionary state. Let’s play this concept out a bit. What if the practice of software testing had landed with operations instead of development? I think we would have seen three primary outcomes: more rapid adoption of development testing practices, advanced end-to-end test automation, and a focus on business risk.

If the software testing team had historically reported to operations, there would have been (even) more tension between Dev and Ops. This tension would have promoted the need for more rigorous testing in development, by developers. The modern form of software testing (and the tension between developers and testers) evolved from the lack of diligent testing by developers. Practices such as static analysis, structural analysis, early performance testing, and unit testing matured slowly over the past decade. The evolution of these practices often created tension as organizations layered in quality and security governance programs.
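
As a small, hypothetical illustration of that developer-owned testing (the function and tests below are invented for this article, not drawn from any real codebase), the developer who writes a piece of pricing logic also ships the unit tests that pin down its behavior:

    # Hypothetical sketch of developer-owned unit testing, runnable with pytest.
    import pytest

    def apply_discount(price, percent):
        """Return the price after a percentage discount, rounded to cents."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_basic():
        assert apply_discount(100.0, 20) == 80.0

    def test_apply_discount_rejects_bad_input():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)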

If the software testing team reported to operations, software testing would have been one of the frontline tasks in ITIL processes rather than a lesser validation task. Speed would have come to light earlier as a business objective, thereby promoting the adoption of advanced automation techniques. I realize that the statement above is loaded with some solid conjecture, but it contains some of the core drivers of DevOps – so please feel free to comment. With speed to production as a more prominent objective, there would have been better access to production data, better access to environment data, and a more cohesive approach to the application life cycle, not just the software development life cycle. Automation would have become an imperative, not an alternative to outsourcing.

With software testing reporting to operations, I believe the KPIs and metrics driving the activity would have been different. Metrics like count of tests and percentage of tests executed would never have leaked onto dashboards. I believe we would have evolved metrics more closely aligned to business risk, along with models that allow the organization to more reliably assess the risk of releasing software at any point in the development cycle.
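
As one sketch of what such a model might look like (the areas, likelihoods, and impact weights below are invented assumptions, not an established method), a release-risk score could weight how likely each business area is to fail by how much a failure there would cost:

    # Hypothetical sketch: combine the likelihood that each business area fails
    # (fed by recent test results, defect history, code churn, etc.) with the
    # business impact of a failure in that area into one release-risk score.
    AREAS = {
        # area: (failure_likelihood 0-1, business_impact 1-10)
        "payments":      (0.10, 10),
        "search":        (0.30, 6),
        "notifications": (0.50, 2),
    }

    def release_risk(areas):
        """Expected business impact of releasing right now (lower is safer)."""
        return sum(likelihood * impact for likelihood, impact in areas.values())

    print(f"Release risk score: {release_risk(AREAS):.1f}")  # track the trend per build

Tracked build over build, a number like this answers “how risky is it to ship today?” in a way that a count of executed tests never can.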

Now I’m depressed, yet energized
We are in a unique time in the evolution of software testing. We are facing new challenges associated with working from home. We face unprecedented pressure from digital transformation initiatives. Speed is the new mantra for software testing, yet the penalty for software failure is at an all-time high as news of outages and end-user frustration goes viral on social media. Now is the time to rethink the whole process. I will share some of those ideas in my next article on software testing’s natural selection.
