Posts

Showing posts from December, 2023

6 Common Patch Management Mistakes to Avoid

Patch management is like painting or gardening: At first glance, it may seem like routine, straightforward work. But in practice, it can prove much more challenging than it looks. Just as a lack of prep work can spell disaster for a paint job, or forgetting to water and weed regularly can turn your garden into an eyesore, software patching mistakes can severely hamper your ability to carry out what should be the simple task of keeping apps up to date. Keep reading for a look at the most common patch management oversights I’ve encountered in my career as an IT director, along with tips on how organizations can avoid them. Not having a patching strategy Probably the most common software patching mistake is lacking a coherent patching strategy. Lack of strategy doesn’t mean that patching doesn’t happen at all. It means that patching occurs in an ad hoc fashion, without clear guidelines in place about when, how, and how often an organization will apply patches. To avoid this mistake…

Year in Review: Security

As we bid farewell to another year, it is crucial to reflect on the threats of cyberattacks and ransomware and think about how to mitigate them moving forward. However, this year feels a bit different – marked by the unknown of what challenges AI will bring to the security landscape in the new year. This comes on top of persistent supply-chain security vulnerabilities, insider threats, and more that have only grown this year. The Cybersecurity and Infrastructure Security Agency (CISA) recently unveiled a roadmap with five key efforts aimed at the responsible and secure deployment of AI. First, the agency commits to responsibly employing AI to fortify cyber defense, adhering to applicable laws and policies. Second, CISA aims to assess and ensure the default security of AI systems, fostering safe adoption across various government agencies and private-sector entities. The third effort involves collaborating with companies to safeguard critical infrastructure from potential malicious…

How to protect the fundamentals of open source

The concept and promise of open source has been around for decades. It is thriving in the industry today. While commercialization strategies around open source continue to evolve and disrupt, purveyors of open-source software cannot lose sight of what makes it such a powerful force for innovation. Looking back over the past 20 years of software development, it’s hard to overstate the impact of the open source community. Just about every category of software has been transformed by open source. Businesses in an ever-growing list of industries have embraced open source. Major technology firms have recognized its value and are contributing to projects with financial support, resources, expertise and more. The wide acceptance of open source, however, makes it more vulnerable to regulatory and commercial influences that could undermine the very principles that have made it so valuable. To be sure, growth inevitably brings change – open source should evolve as it gains more influence in the…

What AI Can and Can’t Do For Your Observability Practice

Artificial intelligence (AI) and large language models (LLMs) have dominated the tech scene over the past year. As a byproduct, vendors in nearly every tech sector are adding AI capabilities and scrambling to promote how their products and services use it. This trend has also made its way to the observability and monitoring space. However, the AI solutions coming to market often feel like putting a square peg in a round hole. While AI can significantly impact certain areas of observability, it is not a fit for others. In this article, I’ll share my views on how AI can and cannot support an observability practice – at least right now. The Long Tail of Errors The very nature of observability makes ‘prediction’ in the traditional sense unfeasible. In life, certain ‘act of God’ types of events can impact business and are impossible to predict – weather-related events, geopolitical conflicts, pandemics, and more. These events are so rare and capricious that it’s implausible to train an…

Software predictions for 2024

Generative AI. Observability. Developer platforms. Data management. These are the issues that our industry will face with heightened urgency in 2024. SD Times reached out to experts from across the spectrum of software development to find out what they think the new year will bring. Here are some of their thoughts. Devavrat Shah, co-CEO and founder of Ikigai Labs and an MIT AI professor: LGMs become the next household gen AI tech in the enterprise. Today, nearly every organization is experimenting with LLMs in some way. Next year, another major AI technology will emerge alongside LLMs: Large Graphical Models (LGMs). An LGM is a probabilistic model that uses a graph to represent the conditional dependence structure between a set of random variables, aiming to capture the entire joint distribution over all variables of interest. LGMs are particularly suitable for modeling tabular data, such as the data found in spreadsheets or tables. LGMs are useful…
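The factorization idea behind a graphical model can be sketched in a few lines. This toy example (the column names and probabilities are made up purely for illustration, and a real LGM would be far richer) shows how a two-node graph, segment → churn, captures the full joint distribution of a two-column table as P(segment) · P(churn | segment):

```python
# Toy discrete graphical model over two tabular columns.
# The graph segment -> churn encodes the factorization
# P(segment, churn) = P(segment) * P(churn | segment).
# All names and numbers here are illustrative, not from any real dataset.
p_segment = {"consumer": 0.7, "enterprise": 0.3}
p_churn_given_segment = {
    "consumer": {"yes": 0.2, "no": 0.8},
    "enterprise": {"yes": 0.05, "no": 0.95},
}

def joint(segment, churn):
    """Read a joint probability off the graph's factorization."""
    return p_segment[segment] * p_churn_given_segment[segment][churn]

# Because the factors are normalized, summing over every cell of the
# (segment, churn) table recovers the entire joint distribution.
total = sum(joint(s, c) for s in p_segment for c in ("yes", "no"))
print(round(total, 10))
print(round(joint("consumer", "yes"), 6))
```

The same pattern scales to many variables: the graph tells you which conditional tables you need, and the joint distribution is the product of those factors.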

Year in Review: Developer productivity

One of the big themes of 2023 was the enterprise struggle to make developers more productive. The strategies for making that happen included the creation of developer platforms, changes in culture to allow developers to experience joy in their work, and understanding how to measure whether a developer or their team is being productive. Further, the introduction of developer observability into code, the use of value streams to eliminate bottlenecks and gain efficiencies, and the development of AI code assistants all aim to achieve that same goal. Lots of approaches, but has there been much success? The idea of “shift left,” where testing, security and governance moved into the developer purview, actually created more burdens for developers, which slowed productivity. Any number of DevX tools came to market in 2023, but research showed that organizations were buckling under the weight of tool sprawl. And developer platform engineering was seen by many as tying developers’ hands…

GitHub Copilot has a confidence problem

The progress that has been made in generative AI (GenAI) technology is nothing short of astounding, and yet it is not without its flaws. But these flaws aren’t surprising when you consider that these artificial neural networks are modeled after something equally impressive and flawed: human intelligence. As such, GenAI falls victim to many of the same problems that keep litigators in new golf shoes – namely, poor output and the potential for copyright infringement. Both can leave a coder feeling less confident about using the code generated by GenAI. The case for context GitHub Copilot can help bolster confidence in code, both in terms of its quality and in mitigating the risk of litigation, by citing its sources. If a GenAI tool can display the original sources of the code it is using to generate its output, similar to how an online plagiarism checker links back to original source content, a developer would be in a better position to judge whether that code is from a trusted…
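The plagiarism-checker analogy can be made concrete with a small sketch. This is not how GitHub Copilot’s code referencing works internally; it is a toy illustration of the general technique, matching generated code back to known sources via token n-gram fingerprints, with hypothetical repository names:

```python
# Illustrative sketch of plagiarism-checker-style source lookup:
# fingerprint known snippets by token n-grams, then report which
# sources a generated snippet overlaps. Repo names are hypothetical.
from collections import defaultdict

def ngrams(code, n=4):
    """Fingerprint a snippet as the set of its token n-grams."""
    tokens = code.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def build_index(sources):
    """Map each n-gram to the source files it appears in."""
    index = defaultdict(set)
    for name, code in sources.items():
        for gram in ngrams(code):
            index[gram].add(name)
    return index

def likely_sources(generated, index):
    """Rank source files by how many n-grams they share with the output."""
    hits = defaultdict(int)
    for gram in ngrams(generated):
        for name in index.get(gram, ()):
            hits[name] += 1
    return sorted(hits, key=hits.get, reverse=True)

sources = {
    "repo-a/utils.py": "def add(a, b): return a + b",
    "repo-b/math.py": "def mul(a, b): return a * b",
}
index = build_index(sources)
print(likely_sources("def add(a, b): return a + b", index))
```

A production system would tokenize properly, normalize identifiers, and use winnowed fingerprints rather than raw n-grams, but the link-back-to-source idea is the same.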

Year in review: Generative AI takes the tech world by storm

To say 2023 was a year taken up mostly by AI would be a massive understatement. Following the release of ChatGPT at the very end of 2022, generative AI blew up in 2023. According to Reuters, ChatGPT had grown to 100 million active users by January 2023, just two months after it launched. ChatGPT is a product of OpenAI, the AI company that also produces the GPT models that ChatGPT and many other AI technologies run on. OpenAI had quite the successful — and eventful — year. In January, Microsoft announced a multi-year, multi-billion dollar investment in the company. Other announcements throughout the year included a paid version of ChatGPT, an API for ChatGPT, and the release of GPT-4. OpenAI also teamed up with Anthropic, Google, and Microsoft to form the Frontier Model Forum in July. The organization is dedicated to safely and responsibly developing frontier AI models, which the Forum defines as “large-scale machine-learning models that exceed…

Bugcrowd announces rating taxonomy for LLMs

Bugcrowd has announced updates to its Vulnerability Rating Taxonomy (VRT), which categorizes and prioritizes crowdsourced vulnerabilities. The new update addresses vulnerabilities in Large Language Models (LLMs) for the first time. The VRT is an open-source initiative that aims to standardize how suspected vulnerabilities reported by hackers are classified. “This new release of VRT not only opens up a new form of offensive security research and red teaming to program participants, but it helps companies increase their scope to include these additional attack vectors,” said Ads Dawson, senior security engineer for LLM platform provider Cohere and a key contributor to the release. “I am looking forward to seeing how this VRT release will influence researchers and companies looking to fortify their defenses against these newly introduced attack concepts.” Bugcrowd launched the VRT in 2016 as an in-house tool; it has since become an open-source project for…

OpenAI creates a framework for understanding and dealing with the risks of advanced AI models

OpenAI shared that it has created the Preparedness Framework to help track, evaluate, forecast, and protect against the risks associated with the advanced AI models that will exist in the future, known as frontier models. The Preparedness Framework is currently in beta, and it covers the actions OpenAI will take to safely develop and deploy frontier models. RELATED CONTENT: Anthropic, Google, Microsoft, and OpenAI form group dedicated to safe development of frontier AI models OpenAI announces Superalignment grant fund to support research into evaluating superintelligent systems First, it will run evaluations and develop scorecards for models, which the company will continuously update. During evaluation, it will push frontier models to their limits during training. The results of the evaluation will help both assess risks and measure the effectiveness of mitigation strategies. “Our goal is to probe the specific edges of what’s unsafe to effectively mitigate the revealed risks…

Vercel’s generative AI tool for creating UI components is now generally available

Vercel, creator of the Next.js framework, has announced that its v0 offering is now generally available. First announced in October 2023, v0 is a generative AI product that lets you easily create web interface components from prompts. It takes your prompt and uses open-source tools like React, Tailwind CSS, and Shadcn UI to produce a component that you can then integrate into your codebase. It generates multiple options, and you can select the one that works best as a starting point and then edit it. The website currently offers example components based on various sample prompts, including a contact form, a cookie consent banner, and a hero section in dark mode. “v0 transforms website design and development, simplifying the process for everyone,” said Jared Palmer, vice president of AI at Vercel. “This technology accelerates web development, empowering developers and non-technical users alike to craft advanced UIs effortlessly…

Reveille 10 software update includes better ECM operations with PostgreSQL support

Reveille 10 is designed to optimize the customer experience for business-critical applications reliant on Enterprise Content Management (ECM) platforms, according to the company in a post. This release addresses the management challenges organizations face as they expand intelligent automation programs, offering increased visibility into ECM operations and expediting recovery from issues in automated processes. Among the noteworthy features introduced in Reveille 10 is native support for the PostgreSQL open-source database system, commonly used in cloud hypervisor environments. This reduces ongoing support costs and gives organizations greater flexibility in choosing their database options. A significant enhancement in Reveille 10 is its extension of application management capabilities to cover Hyland OnBase RESTful-based applications. This ensures comprehensive support for modern OnBase web applications, contributing to a seamless user experience. Additionally, Reveille…

OpenAI announces Superalignment grant fund to support research into evaluating superintelligent systems

OpenAI is announcing a new grant program to help companies that are working on making superintelligent systems safe, as the company believes superintelligence could be achieved within the next decade. According to the company, these advanced systems will “be capable of complex and creative behaviors that humans cannot fully understand.” The current technique for ensuring AI systems are safe — a process called alignment — uses reinforcement learning from human feedback (RLHF). This relies on human supervision, so it may not be as effective when dealing with the advanced use cases that superintelligent AI will facilitate. For instance, if an AI generates millions of lines of complex code, it wouldn’t be possible for humans to evaluate all of it. “This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them? This is one of the most important unsolved technical problems in the world. But we think it is solvable with a concerted effort. There…

Google announces next step in plan to phase out cookies

Google is testing Tracking Protection, a new Chrome feature that limits cross-site tracking. The feature is part of Google’s Privacy Sandbox initiative to responsibly phase out third-party cookies, which involves creating new tools for sites that serve essential functions and giving developers time to adapt. The rollout of Tracking Protection begins with a small subset of Chrome users, allowing developers to assess their preparedness for a web environment without third-party cookies. The feature will be tested with 1% of Chrome users globally, representing a significant step in the Privacy Sandbox initiative. The ultimate goal is to phase out third-party cookies for all users by the second half of 2024, pending the resolution of any competition concerns from the UK’s Competition and Markets Authority. “As we work to make the web more private, we’ll provide businesses with tools to succeed online so that high-quality content remains freely accessible — whether…

SD Times Open-Source Project of the Week: ScreenLink

ScreenLink is an open-source alternative to the screen capture tool Loom, enabling users to create and share screen recording demos for customers, support teams, and more in minutes. Users can easily create tutorials and presentations through screen recording, and they can record from their camera and microphone alongside their screen for more engaging and personable demonstrations. The project is open source and also has a paid option for users needing more advanced capabilities. The free tier enables users to record and share up to 10 videos, catering to those who use the platform occasionally. For more frequent use and professional requirements, the Pro and Growth plans provide expanded usage options. These plans are tailored to accommodate the diverse needs of users, offering flexibility and scalability in line with their usage patterns. According to the maintainers, the platform has a user-friendly interface, which ensures that individuals of all technical backgrounds…

Report: Slow mobile app releases cost over $100,000 in lost revenue per year for 75% of companies

It’s no surprise that slow development processes are costing companies greatly, but by how much? According to a new report from the mobile testing company Kobiton, 75% of respondents said slow mobile app releases cost their company at least $100,000 each year, and 13% said they cost between $1 million and $10 million every year. Additionally, 75% said that mobile apps represent at least a quarter of their companies’ revenue, which highlights the fact that slow releases may threaten the viability of their business, not just their bottom line. When asked how frequently they release mobile app updates, 38% said weekly, 27% said monthly, 20% said daily, 14% said quarterly, and 1% said less than once per quarter. In terms of what is causing delays, limited financial resources are the culprit for 50% of organizations. Forty-seven percent also cited inefficient development and QA processes, and 40% cited a lack of skilled development and QA labor. To combat some of these challenges…

Using GPS location to obtain or target physical locations

It’s easy to convert a physical address, like 12 Main Street, into its latitude and longitude coordinates, but for many businesses there are situations where you’d want to do the opposite: get the closest physical address for a given pair of coordinates. “No one says let’s pull up the property value for latitude 42, longitude 80. They say let’s pull up information for 12 Main Street or 4 Oak Street,” Tim Sidor, data quality analyst at data quality company Melissa, explained in Episode 6 of the SD Times Live! Microwebinar series on data verification. According to Sidor, the process of converting from latitude and longitude to a verified address is called reverse geocoding. This is done by performing a geospatial proximity search against a database of known locations. Given the requested search distance, the engine incrementally increases the search radius until a location is found or the maximum distance is reached, then returns the closest address. “In a practical sense, the engine grabs…
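The incremental-radius search Sidor describes can be sketched in a few lines. The addresses, coordinates, and function names below are illustrative assumptions, and a production engine would query a spatial index (e.g. PostGIS or an R-tree) rather than scan a list, but the widening-radius logic is the same:

```python
import math

# Hypothetical in-memory "database" of known, verified addresses.
KNOWN_LOCATIONS = [
    ("12 Main Street", 42.001, -80.002),
    ("4 Oak Street", 42.010, -80.020),
    ("7 Elm Avenue", 42.200, -80.300),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def reverse_geocode(lat, lon, max_km=50.0, step_km=1.0):
    """Widen the search radius step by step until a known address is found,
    then return the closest match; give up at the maximum distance."""
    radius = step_km
    while radius <= max_km:
        hits = []
        for addr, alat, alon in KNOWN_LOCATIONS:
            d = haversine_km(lat, lon, alat, alon)
            if d <= radius:
                hits.append((d, addr))
        if hits:
            return min(hits)[1]  # closest address within the current radius
        radius += step_km
    return None

print(reverse_geocode(42.0, -80.0))
```

The early exit matters: most queries land near a known address, so the engine rarely pays the cost of searching out to the maximum distance.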

Copado is bringing generative AI to DevOps with beta for CopadoGPT

Copado, a provider of a Salesforce DevOps platform, has expanded the beta program for its AI platform, CopadoGPT, to all customers after successful internal validation. CopadoGPT serves as a cross-platform intelligence layer that provides advice and recommendations on Salesforce DevOps best practices, accelerates test generation, enriches user stories, and generates release notes. The company created the platform to enhance time to value, boost customer satisfaction, and accelerate digital transformation initiatives. “Copado is an AI-first company with a focus on maximizing the customer experience. The expansion of CopadoGPT’s beta access marks a significant milestone in our journey to embed AI deeply into the fabric of Salesforce DevOps. This AI DevOps advisor is not just a tool; it’s a partner that enhances our customers’ capabilities to innovate rapidly and with confidence,” said Federico Larsen, chief technology officer at Copado. “With dozens of use cases, CopadoGPT is…

Google’s Gemini Pro now available to developers via Google AI Studio and Vertex AI

After announcing its new multimodal AI model Gemini last week, Google is making several announcements today to enable developers to build with it. When it was first announced, Google said that Gemini would come in three versions, each tailored to a different size or complexity requirement. In order from largest to smallest, Gemini is available in Ultra, Pro, and Nano versions. Gemini Nano has already seen use on Android in the Pixel 8 Pro, and Google Bard is already using a specialized version of Gemini Pro. RELATED CONTENT: Google’s Duet AI for Developers is now generally available Today, Google is announcing that developers can use Gemini Pro through the Gemini API. Initial features that developers can leverage include function calling, embeddings, semantic retrieval, custom knowledge grounding, and chat functionality, the company explained. There are two main ways to work with Gemini Pro: Google AI Studio and Vertex AI on Google Cloud. Google AI Studio is a web-based…