In the last few years, the CI/CD pipeline has undergone an evolution. As more development processes are shifted left, and additional tasks get pushed into the pipeline, the limits of how much it can handle have been tested.
To keep pace with the continuous integration that modern application development demands, the pipeline has had to expand to account for tasks like low-code development, security, and testing, even as teams continue to prioritize accelerating releases.
How it was vs. how it is
“Early CI/CD was really about how you build and package an application, and then the CD portion came in and it became how you get this application out to a place,” said Cody De Arkland, director of developer relations at continuous delivery platform provider LaunchDarkly. “But now in the modern world you have all of these declarative platforms like Kubernetes and other cloud native things where we’re not just dropping a set of files onto a server anymore, we’re going through and building this self-contained application stack.”
He explained that although the addition of declarative platforms and the repeatable processes offered by the cloud have, on the whole, made CI/CD simpler, teams have also had to manage added complexity, because developers must now make sure that the application or feature they have built includes everything it needs to run.
To account for that added complexity, De Arkland said, CI/CD tools have matured considerably, particularly in the past four years.
“A lot of these concepts have become much more first class… As the space has evolved and UX has become more important and people have become more comfortable with these concepts… a lot of the sharp edges are being rounded out and CI/CD tooling has gotten to a place where so much of this is so much easier to implement,” he said.
According to Andrew Davis, senior director of methodology at the DevOps platform company Copado, another one of the ways that CI/CD practices have evolved is in the way that developers are spending their time.
He explained that one of the key demands of modern development is for teams to respond to the need for bug fixes or incremental updates incredibly quickly so that end users experience minimal negative effects.
“There’s an expectation to use the developer’s time in the most efficient way possible, so continuous integration puts a lot of energy into making sure that developers are all staying in sync with each other,” Davis said.
He went on to say that with the increased prevalence of CI/CD, developers increasingly need to hone specialized skills and techniques to address the full range of modern application development needs.
These include building infrastructure in the cloud and managing it through the CI/CD pipeline, as well as managing the development process for low-code applications and SaaS platforms.
Cloud native CI/CD
Despite the need to master new skills, De Arkland said that the move to cloud native has made it much simpler for organizations to adopt newer CI/CD processes, thanks to the repeatable nature of the cloud.
He said that in the cloud, templated configurations are usually the default, and when a configuration is applied through a template it becomes an artifact that lives next to the application code, making it much easier to replicate.
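As a rough illustration of that idea (the template fields, service name, and file paths below are hypothetical, not any particular platform's format), a deployment template can be rendered into a concrete configuration and versioned right alongside the application code:

```python
from pathlib import Path
from string import Template

# Hypothetical declarative template kept in the repository next to the app code.
DEPLOY_TEMPLATE = Template("""\
service: $service
replicas: $replicas
image: $registry/$service:$version
""")

def render_config(service: str, version: str, replicas: int = 3,
                  registry: str = "registry.example.com") -> str:
    """Stamp out a concrete deployment config from the shared template."""
    return DEPLOY_TEMPLATE.substitute(
        service=service, version=version, replicas=replicas, registry=registry
    )

if __name__ == "__main__":
    # The rendered file lives beside the code, so it is versioned and repeatable
    # across every environment the pipeline deploys to.
    Path("deploy.generated.yaml").write_text(render_config("checkout", "1.4.2"))
    print(Path("deploy.generated.yaml").read_text())
```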
“It’s less about cloud itself making it easier – and more that when you do it in cloud, you get to lean on the same ‘declarative’ approaches that many other platforms align with… CTOs and CIOs are a great example, they understand the ground floor concepts of the container, but they don’t understand the deeper underpinnings,” he said. “When you have predictability, that makes enterprises a little bit less scared to adopt these things.”
He explained that while cloud native CI/CD processes still require the implementation of certain crucial checks, the removal of the unknown variables equips organizations with a new sense of confidence in their processes and, therefore, the product they are delivering to end users.
However, despite the numerous benefits, cloud native CI/CD also comes with heightened risks, according to David DeSanto, chief product officer at GitLab. Organizations may move into the cloud without realizing that its public nature could expose their intellectual property or their artifacts. He cited an example from a few years ago, when a security company was inadvertently releasing early versions of its products because it didn't realize the package was public on the internet.
Stretching the pipeline
Furthermore, CI/CD processes have had to mature in order to accommodate the needs of shifting left, which has put some strain on the pipeline.
DeSanto explained that as more advanced capabilities have been added to the pipeline, not only has the pipeline itself had to evolve, but so have those capabilities.
“If you take a traditional application security scanner and you put it in a CI/CD pipeline, it could make the pipeline take hours, if not days or a week to complete,” DeSanto said. “And obviously, if your goal is to reduce time to market, you can’t have your pipeline taking longer than you have to push out whatever change you’re looking to do.”
He expanded on this, saying that security and testing companies looking to be accepted into the CI/CD space have had to reevaluate their tooling so that these features can be introduced into the pipeline without severely impacting its efficiency.
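One common way tooling adapts, sketched minimally below with a placeholder scanner rather than any vendor's real API, is to scan only the files a change touches during the pipeline run and defer the full-codebase scan to a scheduled job:

```python
import subprocess

def changed_files(base: str = "origin/main") -> list[str]:
    """List files touched by the current branch relative to its base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.strip()]

def scan(path: str) -> list[str]:
    """Placeholder for a real scanner invocation; returns a list of findings."""
    return []  # assumption: a vendor tool would be called here

if __name__ == "__main__":
    findings = [finding for path in changed_files() for finding in scan(path)]
    # Fail the fast, in-pipeline check on incremental findings only;
    # the exhaustive scan runs on a nightly schedule instead of blocking every build.
    raise SystemExit(1 if findings else 0)
```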
Copado’s Davis went on to say that although testing has always been a part of the pipeline in one way or another, now developers are being tasked with examining their tests and determining where in the process certain tests should be run in order to maintain quality and efficiency.
“The expectation is that you have a full battery of tests, so that means that you have to begin to triage your tests in terms of which can run quickly and up front versus which are the more comprehensive tests [to run later],” said Davis.
To make this choice, Davis explained, developers must assess several aspects of each test. The first is the risk associated with it: areas that directly impact revenue or would cause the most damage to end users should be prioritized.
Next, he said, the order of tests should be determined by their relevance to the area of the application being changed.
“And the way that would work is if the developer is making a change in a particular aspect of the code base, you can identify which tests are relevant to that and which ones are fast to run,” Davis said. “Then you run…the tests that are most likely to detect an error in the development and the ones that run quickly, immediately to get very fast feedback and then changes can be made immediately.”
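A minimal sketch of that triage logic might look like the following, with the test metadata, risk scale, and time budget invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    duration_s: float   # typical runtime in seconds
    covers: set[str]    # source paths the test exercises
    risk: int           # 1 (low impact) .. 3 (revenue-critical)

def triage(tests: list[TestCase], changed_paths: set[str],
           fast_budget_s: float = 60.0) -> tuple[list[TestCase], list[TestCase]]:
    """Split tests into a fast, change-relevant first stage and a deferred stage."""
    relevant = [t for t in tests if t.covers & changed_paths]
    # Highest-risk, quickest tests first so failures surface immediately.
    relevant.sort(key=lambda t: (-t.risk, t.duration_s))
    first_stage, spent = [], 0.0
    for t in relevant:
        if spent + t.duration_s <= fast_budget_s:
            first_stage.append(t)
            spent += t.duration_s
    deferred = [t for t in tests if t not in first_stage]
    return first_stage, deferred
```

The point of the sketch is the ordering, not the numbers: fast, relevant, high-risk checks give immediate feedback, while the comprehensive suite still runs before anything ships.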
He also said he believes that shifting security processes left, and the security controls that have been embedded into the pipeline as a result, are both wholly positive changes.
LaunchDarkly’s De Arkland also touched on this, saying that in the past, security had been viewed as something adjacent to the pipeline rather than something that is inherent to it.
He explained that as the concept of DevSecOps has become a more first-class conversation, the CI/CD space has become cognizant of these concepts as well.
De Arkland said that two major talking points around integrating security into the pipeline have been which stage of the pipeline should interface with security tooling, and how organizations can update communication rules to account for the way a container or platform is operating.
“Whereas CI/CD used to be just about building software and dropping it on a place, it is really now becoming all of these adjacent tasks that have also lived alongside of it,” he said.
Platform engineering is helpful, but not the death of DevOps
Cody De Arkland, director of developer relations at LaunchDarkly, also spoke about platform engineering, and how its emergence has changed CI/CD processes.
He explained that, particularly in terms of the different interaction points between systems, platform engineering teams can help when it comes to applications that span several different areas inside of an organization.
“As we have applications spanning things like security and run time and build time and doing software releasing as opposed to just CI/CD builds, you need to be able to respond to that across all of these domains,” he said. “I think platform engineers are really the ones who are going to help stitch that all together… and really understand the context of managing all those things across.”
David DeSanto, chief product officer at GitLab, added that platform engineering plays an enormous role in an organization’s approach to a multi-cloud or multi-platform strategy because it allows for the creation of a unified platform that is agnostic to the cloud platform.
He explained that this gives organizations flexibility, transparency, and the ability to meet regulatory compliance requirements more easily.
“There is a lot of movement in Fintech and financial regulations that they cannot be single cloud, and without a good platform engineering strategy that could mean that you’re building two completely separate CI/CD pipelines,” DeSanto said.
Andrew Davis, senior director of methodology at Copado, did, however, stress that the claim that DevOps has died and platform engineering is its successor is a bit of an overstatement.
He said that platform engineering can make it simpler for organizations to adopt CI/CD processes and spin up pipelines that include whatever quality and compliance controls are necessary, but its purpose is not to replace DevOps as a whole.
“I would tend to think of CI/CD as one of the critical capabilities offered by development platforms and platform engineering,” Davis said. “So the platform engineering team makes sure that if a new team is spinning up, they can easily create their own CI/CD pipeline, and they can automate the process of plugging into a company’s security controls.”
He said that treating these different development tools as products the company invests in can reduce the burden placed on individual developers to figure these things out for themselves.
Speeding up delivery
Davis also said that while well-implemented security controls in the CI/CD pipeline can initially slow processes down as team members get the hang of them, they ultimately allow developers to get feedback on code more quickly, accelerating the remediation of issues.
Even with this, though, the addition of all of these extra tasks may lead to organizations struggling to accelerate the delivery of their products due to unforeseen bottlenecks arising in the pipeline.
Davis said that the tension between the desire to deliver more quickly and the need to be thorough with all of the necessary security checks and tests has become increasingly prevalent as the pipeline has matured.
“It is effectively impossible to prevent all risks, and so you need to understand that each of those compliance controls are there to reduce risk, but they come at a cost,” he explained. “You have to balance that goal of risk reduction with the cost of speed, and as a result, the cost to innovation.”
The most secure option is oftentimes not the one that can deliver the most speed, and so striking that balance where both sides can be satisfied is key to a successful CI/CD pipeline.
DeSanto then explained that organizations need to approach CI/CD in a way that prioritizes balancing overall risk against reward. That means companies need to be able to determine whether it is too risky to run a certain test or scan on the feature branch or the developer's branch, and if it is, to run those checks only as the changes are merged in.
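That kind of branch-aware gating can be sketched roughly as follows; the branch names and stage lists here are assumptions for illustration, not a prescribed layout:

```python
def stages_for(branch: str, is_merge_to_main: bool) -> list[str]:
    """Pick pipeline stages based on where the change currently lives."""
    fast = ["lint", "unit_tests", "incremental_security_scan"]
    heavy = ["full_security_scan", "load_tests", "compliance_checks"]
    if is_merge_to_main or branch == "main":
        # Heavier, slower checks run only once changes are being merged in.
        return fast + heavy
    return fast

print(stages_for("feature/checkout-redesign", is_merge_to_main=False))
# ['lint', 'unit_tests', 'incremental_security_scan']
```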
He continued, saying that finding the right tools makes a world of difference when it comes to pipeline evolution. “You may have a security scanner or a load testing tool or a unit testing tool that maybe is not meant for the way you’re now operating, and it could be as simple as swapping out that tool,” DeSanto said.
De Arkland also believes that as artificial intelligence technology continues to advance, more organizations may start turning to AI tools to find this balance and make it sustainable. He said that while this is not fully here today, he can see a future where someone tells a system the desired steps to execute and the AI delivers an asset that represents that pipeline.
“A good example of this is building APIs using OpenAI’s AI engine. You don’t write the API calls, you just give it the intentions,” De Arkland explained. “Then, it gives you back a spec that you would implement in your application… so I think we’re close to a time when pipelines are treated the same way.”
This isn’t to say that AI would replace human developers in this process; rather, it could work in conjunction with them toward optimal delivery time.
DeSanto also said that with generative AI becoming more commonplace, some organizations have already found a place for it in their pipelines. He noted that AI is already being used to automate the process of getting a pipeline configuration created, identifying where configuration mistakes may lie, and analyzing logs to seek out certain patterns.
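As a loose sketch of the log-analysis use case only, and assuming the openai Python package, an API key configured in the environment, and an illustrative model name, a failed job's log could be handed to a hosted model for a quick read:

```python
from openai import OpenAI  # assumes the openai package is installed and OPENAI_API_KEY is set

def summarize_failure(log_text: str, model: str = "gpt-4o-mini") -> str:
    """Ask a hosted model to suggest the likely cause of a failed pipeline job."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,  # illustrative model name, not a recommendation
        messages=[
            {"role": "system",
             "content": "You review CI/CD job logs and briefly explain why the job failed."},
            {"role": "user",
             "content": f"Why did this pipeline job fail?\n\n{log_text[-8000:]}"},
        ],
    )
    return response.choices[0].message.content

# Example usage: feed the tail of a failed job's log to the model.
# print(summarize_failure(open("job.log").read()))
```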
He also stated that AI has great potential to change the DevSecOps space, as it can be applied to observability tools and make it so organizations can sniff out an issue much earlier in their processes.