Almost any company writing software today understands and glorifies the concept of Minimum Viable Product. Creating something that is just good enough for customers to successfully use is enshrined as the most parsimonious path to profits. MVP has over time taken on additional freight as a general term connoting faster time-to-market for features or other sub-elements of products. The move from monoliths to microservices and the Cambrian Explosion of APIs have also radically increased the number of so-called “products” in the world today.
The dark side of MVP is that fastest to market too often means security is a second- or even third-order consideration. That is logical. Developers are not currently promoted or rewarded for shipping more secure products. When a hit product or feature gets to market quickly but a critical vulnerability results in a data breach and millions in losses 18 months later, companies rarely look back at the original MVP development process. Not surprisingly, CISOs now have a cutting alternative meaning for the acronym. MVP frequently means “Most Vulnerable Product”: the one that didn’t get the same level of scrutiny and poses an outsized risk and headache to SecOps, DevSecOps, and AppSec teams.
Secure As You Ramp
I come from the hardware business. At Intel, before we could ship new chips at scale, we had to go through an extensive ramp process. In software, the ramp process for MVPs focuses primarily on testing code under load and for performance rather than on detailed security reviews. That needs to change, and it needs to change in a way that makes developers less reluctant to spend time checking code security. At present, security dramatically slows down shipping new products and features, in MVPs and otherwise. Who can blame developers if, on an MVP timeline, they put security last?
Adding Accountability to MVPs
Likewise, with hardware, when a product ships with a significant flaw, the team that shipped it is on the hook for some time to come. Yet in software, there are few mechanisms for tracking code security metrics over time. That may start to change with the arrival of easier code signing through systems like Sigstore. Similarly, the federal mandate of a Software Bill of Materials (SBOM) for every application is putting in place code traceability and accountability that lends itself to tracking metrics over time.
Ideally, senior engineers, VPs, directors, or team leaders should be able to look back at the MVPs they have shipped and tally up an easy scorecard of security flaws over time. To be fair, some flaws are new discoveries that the developers could never have known about. That said, the OWASP Top 10 has remained largely the same for nearly a decade. The ways to sanitize code and prevent exploits or attacks based on the Top 10 depend less on having the latest version of the code than on ensuring sound software design around least privilege and other related principles.
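The point about Top 10 flaws being design problems rather than version problems is easy to see with injection, the longest-running entry on the list. A minimal sketch in Python (the table name, schema, and `find_user` helper are illustrative, not from any particular codebase): parameterizing the query keeps user input out of the SQL text entirely, regardless of which library version is installed.

```python
import sqlite3

# Hypothetical lookup; the `users` table and its schema are illustrative.
def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input is bound as data, never spliced into
    # the SQL text, which closes the classic injection path.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# A would-be injection payload is treated as a literal (nonexistent) name.
print(find_user(conn, "alice"))
print(find_user(conn, "alice' OR '1'='1"))
```

The string-concatenation alternative would have matched the payload; the bound parameter makes it return nothing. That design choice holds whether the code was written a decade ago or yesterday.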
Change the Tools Before You Change the Rules
Just dropping MVP bombs on developers would be unfair and unproductive. No engineer is happy about shipping insecure software. Rather, you need to fix the root cause of MVP insecurity by making it easier for them to identify security flaws and prioritize fixes (either through code modifications or data path sanitizations). This ultimately means changing the underlying tooling and process. You need to shift security left, both in responsibility and in where it is applied in the development process. To change the process you need to change the tools in the following ways.
- Make software code scans faster. Many legacy tools can take hours or days to scan applications and identify likely security risks or outdated libraries. Devs on an MVP timeline can’t wait that long. If you can cut the time for a scan down to minutes, even for hard-to-scan compiled languages, then the opportunity cost for developers goes down and usage goes up.
- Add precision and prioritization to code fix lists. Most code scanning solutions today effectively ask developers to boil the ocean, throwing a massive stack of fixes and library updates at them. What ensues is a negotiation between developers and AppSec teams over which of the requested fixes are the most important and which represent real risks.
- Teach developers fundamental code security. This falls in the culture category, but it’s critical. A significant chunk of making applications less “exploitable” comes down to simple best practices: putting rate limits on APIs, preventing functions from accessing external IPs or networks, and implementing input validation and sanitization for fields. AppSec teams that take the time to work with developers to ensure they understand, internalize, and checklist basic code hygiene will do better at improving MVP security without impacting code velocity.
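Two of the practices above, input validation and rate limiting, are small enough to sketch directly. The following is a minimal, framework-free illustration (the allow-list pattern, the `TokenBucket` class, and its parameters are assumptions for the example, not a production design):

```python
import re
import time

def validate_username(value: str) -> str:
    """Allow-list validation: reject anything outside a strict pattern."""
    if not re.fullmatch(r"[A-Za-z0-9_-]{3,32}", value):
        raise ValueError("invalid username")
    return value

class TokenBucket:
    """Toy per-client rate limiter: `rate` tokens refill per second,
    bursts are capped at `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third rejected
print(validate_username("dev_01"))
```

Neither snippet is a substitute for a hardened library, but each shows how little code the basic hygiene actually requires once a team decides to checklist it.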
Conclusion: Achieving MVP Security Mastery
Think of software development as a production process. You want to maximize yield by producing more and faster. However, you must minimize flaws, or your products will incur costs and liabilities down the road. Once an AppSec team and developers begin to think this way, the steps to improve MVP security become logical. With the right production safety tools in place, MVPs can then be benchmarked against a proper set of longitudinal security metrics focused on the percentage of exploitable security flaws allowed to slip through. Metrics provide accountability and transparency. The ultimate measure of success is when MVPs and the teams that make them are judged not just on speed to market and benchmark product performance but also on downstream failures. That means MVP has shifted left and everyone is better off.
The post MVP does not have to mean “Most Vulnerable Product” appeared first on SD Times.