
Posts

Showing posts from August, 2024

August 2024: People on the Move

A number of companies announced major changes to their executive leadership last month. Here are some of the moves across the industry. Bugcrowd hires Braden Russell as new CPO: As CPO, Russell will be in charge of all R&D groups at the company, including product, engineering, infrastructure, and data science. He was most recently chief technology officer for CrowdStrike’s Falcon platform and SIEM solution. He has also held leadership roles at Foundstone, McAfee, Intel Security, and Cylance. “As we continue to rapidly accelerate, drive innovation on our platform, and plan additional acquisitions, it became clear that we needed a leader with the experience of scaling large SaaS platforms to hundreds of millions in revenue,” said Dave Gerry, CEO of Bugcrowd. “Braden is a cybersecurity leader with 20 years of both strategic and technical experience. I’m eager to work closely with him as we continue our rapid momentum alongside customers, hackers, and par…

Daytona – SD Times Open Source Project of the Week

Daytona is an open source tool for setting up development environments in one command. “Setting up a dev environment can feel like starting a car in the 1900s—engaging the handbrake, adjusting the fuel valve, mixture control, spark advance, choke, and throttle, turning the ignition, and often running into issues. With Daytona, it’s like starting a car in 2024: any driver can just push a button and go. Enabling developers to focus on what truly matters—writing code and building innovative solutions,” Ivan Burazin, CEO and co-founder of Daytona, wrote in a blog post. Development environments in Daytona are called Workspaces, and they are reproducible, meaning that configurations and settings can be defined once and then carried over. Currently, Workspaces are based on the Dev Container standard, but the project’s documentation says there is potential to base them on other standards down the line, such as Dockerfiles, Docker Compose, Nix, and Devfile. Daytona can run on any…

Survey reveals the usage and perception of AI in the overall workforce, beyond just the realm of software development

While many surveys of software developers have found that a majority of them are using AI to generate code, the story is different in the workforce overall. Zoho Corporation’s recently released Data Privacy and Generative AI Usage report reveals that only 24% of employees are using generative AI on a daily basis, and 37% said they had never tried it at all. Zoho’s survey, conducted in collaboration with CRM Essentials, was based on responses from 1,000 US employees across a range of industries, company sizes, and roles. Compare these findings to GitHub’s claim that 92% of US developers are using AI coding tools, CodeSignal’s finding that 81% of developers are using them, or Stack Overflow’s study, which found 76% of respondents are using or planning to use AI coding assistants. According to this survey, of those using generative AI, 18% use it only at work, 32.5% use it mostly at work, and 22.9% use it for both work and personal reasons. Of the respondents using AI at work, h…

eBPF Foundation Announces $250,000 in Grant Awards for Five eBPF Academic Research Projects

The eBPF Foundation, which works to advance the state of the art for eBPF by directing upstream development, promoting the technology and its benefits, and improving the security and robustness of eBPF as a whole, has awarded five universities a $50,000 unrestricted grant each to perform research that benefits the eBPF community. Twenty-five proposals were submitted by 20 universities for technical projects to develop new features and improvements for eBPF. The eBPF Foundation originally planned one grant but was able to increase the awards to five due to the significance of the work being proposed. The five winners were selected after a detailed review of all proposals by the eBPF Steering Committee, which consists of lead maintainers in the eBPF ecosystem. eBPF is a technology with origins in the Linux kernel that can run sandboxed programs in a privileged context such as the operating system kernel. It has been used in production for over half a decade at scales from s…
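For a sense of what that looks like in practice, here is a classic minimal example using the BCC Python bindings (illustrative only, unrelated to the grant projects): a tiny sandboxed C program is compiled and attached to a kernel event from Python.

```python
# Minimal eBPF example via the BCC Python bindings (illustrative, not from
# the announcement). Requires Linux, root privileges, and BCC installed.
from bcc import BPF

# A tiny sandboxed program that runs inside the kernel whenever it fires.
program = r"""
int hello(void *ctx) {
    bpf_trace_printk("process created\n");
    return 0;
}
"""

b = BPF(text=program)
# Attach the program to the kernel entry point for the clone() syscall.
b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="hello")
b.trace_print()  # stream trace output until interrupted
```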

Sentence embedding technology in the age of artificial intelligence

Imagine searching for a crucial piece of information in a traditional search engine, only to be overwhelmed with thousands of irrelevant results. This limitation is especially problematic in critical industries like nuclear power, where precision and reliability are paramount. Enter sentence embeddings—a powerful yet often overlooked technology that is set to transform how we access and utilize information. Targeted sentence embedding technology represents a significant leap forward in search platform capabilities. Instead of relying on simple keyword matching, sentence embeddings convert sentences into vector representations, enabling a deeper, more contextual understanding of queries. This means search results are not just relevant but precise, capturing the true intent behind a query. Historically, search technology has evolved from simple keyword matching to more sophisticated semantic search. This evolution has been driven by the need to improve accuracy and relevance, especi…
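The article doesn’t name a specific library, but as a rough sketch of the idea, here is how sentence-embedding search might look using the open source sentence-transformers library; the model name and sample documents are illustrative assumptions.

```python
# Minimal semantic-search sketch using sentence-transformers (an illustrative
# choice; the article does not name a specific tool or model).
from sentence_transformers import SentenceTransformer, util

# Small general-purpose embedding model; a domain-tuned model would likely be
# used in a field like nuclear power.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Procedure for inspecting coolant pump seals during a scheduled outage.",
    "Cafeteria menu for the week of August 12.",
    "Guidelines for reporting a reactor coolant system pressure anomaly.",
]
query = "How do I report abnormal coolant pressure?"

# Encode documents and query into fixed-length vectors.
doc_vecs = model.encode(documents, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity; the pressure-anomaly guideline ranks
# first even though it shares few exact keywords with the query.
scores = util.cos_sim(query_vec, doc_vecs)[0]
for doc, score in sorted(zip(documents, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {doc}")
```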

Podcast: Misconceptions around Agile in an AI world

In this week’s episode of our podcast, What the Dev?, we spoke with David Ross, Agile evangelist for Miro, about some of the misconceptions people have about Agile today, and how Agile has evolved since its early days. Here is an edited and abridged version of that conversation: Where do you see the change from people doing Agile and thinking they understood it, to now? What do they have to take into consideration for this new modern era? I have been in software development for almost 20 years, and it’s been an interesting evolution for me to watch what Agile meant maybe 15-20 years ago versus how it’s perceived today. I just remember back in the early days of some of the very first Agile transformations that I was part of, it was very much all about following a process and having fealty to specific frameworks, be it Scrum or Kanban or whatever the case might be. And the closer you were to perfection by following those frameworks, the closer you were to God, as it were,…

Broadcom adds on-premises version of its enterprise agility platform Rally

Broadcom today expanded its enterprise agility platform, Rally, with a new on-premises version called Rally Anywhere. Previously, Rally was only available as a SaaS offering, but this new on-premises version is designed specifically to enable companies that operate globally to plan, prioritize, manage, track, and measure the value they are delivering to customers while still maintaining security and compliance. Rally Anywhere provides data sovereignty, meaning that data stays within the physical borders of where it originated, which allows companies to comply with international data protection regulations and alleviate data residency concerns. It also provides the flexibility and scalability necessary for teams split across multiple time zones and geographic locations to work together collaboratively. According to Broadcom, with this announcement, the company’s entire ValueOps Value Stream Management Solution is now available as either a SaaS or on-premise…

Pinecone previews new bulk import feature for its serverless offering

Pinecone, a vector database for scaling AI, is introducing a new bulk import feature to make it easier to ingest large amounts of data into its serverless infrastructure. According to the company, this new feature, now in public preview, is useful in scenarios where a team wants to import over 100 million records (though it currently has a 200 million record limit), onboard a known or new tenant, or migrate production workloads from another provider into Pinecone. The company claims that bulk import results in six times lower ingestion costs than comparable upsert-based processes. It costs $1.00/GB; for instance, ingesting 10 million records of 768 dimensions costs about $30 with bulk import. Because it is an asynchronous, long-running process, customers don’t have to performance-tune or monitor the status of their imports; Pinecone takes care of it in the background. During the import process, data…
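As a back-of-envelope check of that pricing example, assuming 4-byte float32 values per dimension and ignoring per-record IDs and metadata (assumptions not stated in the announcement):

```python
# Rough check of the bulk-import pricing example cited in the article.
# Assumes float32 (4 bytes) per dimension and ignores IDs/metadata, which
# would add somewhat to the real ingested size.
records = 10_000_000
dimensions = 768
bytes_per_value = 4  # float32

total_bytes = records * dimensions * bytes_per_value
total_gb = total_bytes / 1e9   # ~30.7 GB
cost = total_gb * 1.00         # $1.00 per GB ingested

print(f"{total_gb:.1f} GB -> ${cost:.2f}")  # roughly the $30 figure cited
```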

Slack’s Workflow Builder gets several updates for making it easier to add automations

Salesforce today announced several updates to Slack’s no-code automation tool, Workflow Builder, that make it easier for users to build automations in Slack. “These new features make the Slack platform even more powerful for every customer, giving both developers and end users the tools they need to easily automate any business process across their work apps, directly in the place they’re already working,” said Rob Seaman, chief product officer for Slack. Workflows in Slack can now be created from events that occur in third-party apps, such as PagerDuty, Asana, and Bitbucket. Previously, Workflow Builder only supported starting workflows from events that occurred directly in Slack or Salesforce. The new functionality would, for example, enable a user to create a Slack workflow from a PagerDuty ticket. This workflow could automate creating an incident channel in Slack, adding team members, setting up a canvas, and sharing relevant information from the PagerDuty ticket. “Team members get the context t…

Transition application code to images with Cloud Native Buildpacks

Much of the conversation in the software industry is around developer experience. From new ways to measure productivity to reducing important but drudge work, organizations are looking to make life more joyful for developers. One area that’s gaining more attention is the use of buildpacks to create apps for cloud-native environments. Though not a new concept – buildpacks have been around for about 15 years – they can ease the burden on developers by simply taking source code and turning it into fully functional apps. A quick history, according to Ram Iyengar, chief evangelist at Cloud Foundry: Heroku introduced the concept of creating immutable objects from source code, regardless of programming language or platform, in 2010. The open source Cloud Foundry project was working to do much the same thing. Pivotal was an early backer and developer of the Cloud Foundry project as a commercial tool, and both projects released a v2 in 2015. But when Pivotal was acquire…

Teable – SD Times Open-Source Project of the Week

Teable is a no-code database that provides a spreadsheet-like interface for entering data and creating applications from that data. It includes all of the features you’d expect from a spreadsheet application, but with the performance of a database. Some common use cases include building a CRM, an application for office inventory management, or an employee directory — all of which Teable provides templates for, to help users get started quickly. “Databases and spreadsheets are like brothers, one is serious and high-performing, offering excellent performance and providing powerful data support for applications, but requiring developers to drive through code. The other is flexible and easy to use, a fundamental infrastructure for data processing in the modern office system, almost a basic skill for everyone. We created an organic combination of the two, not only converting ordinary editing operations into a what-you-see-is-what-you-get experience but also organically inheriting the mechanism…

OSI releases latest draft of Open Source AI Definition

For the past two years, the Open Source Initiative (OSI) has been working on developing a definition of Open Source AI that the industry can use to determine which models are actually considered open, and it has a new draft of the definition to share as it nears the final release in October. According to the organization back in May, many companies are claiming their models are open source when they may not actually be. Having a definition will make it easier for developers to make those determinations themselves. Draft version 0.0.9 further clarifies the components for Open Source models and Open Source weights, and states that all components of a system need to meet the open source standard in order for that system to be considered open source. The OSI has also decided that training data won’t play a role in classification. “After long deliberation and co-design sessions we have concluded that defining training data as a benefit, not a requirement, is the best…

Prioritizing your developer experience roadmap

If there’s one thing a platform engineering team doesn’t lack, it’s ideas. When your customers are your colleagues and friends, you have an ever-expanding wishlist to improve developer experience — you only have to ask! But as with any product team, you have limited resources and the need to balance both business and engineering objectives. So many stakeholders inform your developer experience roadmap that it can be difficult to prioritize. Yes, you need a roadmap. The biggest thing that distinguishes platform engineering from the top-down platforms of tech’s days of yore? Nobody has to use it. When you’re building any developer experience tooling — whether it’s an internal developer platform or portal or just a directory or better documentation — you have to build something that your engineers actually want to use. Your platform strategy — sometimes called a developer experience or DevEx strategy — should make developer lives so much easier that they need a really good reaso…

GitHub: Perceptions of AI vary widely in different countries

GitHub’s annual developer survey, which it launched for the first time last year, shows that a majority of developers (97%) have used an AI coding tool at some point, but perceptions of the technology vary from country to country. For the survey, GitHub polled 2,000 developers from the U.S., Brazil, Germany, and India, with an equal number of participants from each country. The company found that usage of AI coding tools at work was consistent among the countries, but companies’ attitudes and policies toward AI vary widely. For instance, in the U.S., 88% of respondents said their company supports AI use at some level, while in Germany only 59% said the same. Developers from the U.S. and India in particular believe AI increases code quality, with 90% and 81% of respondents, respectively, saying it either significantly or somewhat increases quality. In Brazil, only 62% believe AI improves quality, and in Germany, only 60% said so. Brazilian and German devs also tended to be more…

Addressing AI bias in AI-driven software testing

Artificial Intelligence (AI) has become a powerful tool in software testing by automating complex tasks, improving efficiency, and uncovering defects that might be missed by traditional methods. However, despite its potential, AI is not without its challenges. One of the most significant concerns is AI bias, which can lead to false results and undermine the accuracy and reliability of software testing. AI bias occurs when an AI system produces skewed or prejudiced results due to erroneous assumptions or imbalances in the machine learning process. This bias can arise from various sources, including the quality of the data used for training, the design of the algorithms, or the way the AI system is integrated into the testing environment. When left unchecked, AI bias can lead to unfair and inaccurate testing outcomes, posing a significant concern in software development. For instance, if an AI-driven testing tool is trained on a dataset that lacks diversity in test scenarios o…
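As a hypothetical illustration of one simple guardrail against that kind of imbalance, a team might check how evenly its labeled test scenarios cover feature areas before training on them; the feature names, counts, and threshold below are made up for the sketch.

```python
# Hypothetical sketch: flag under-represented feature areas in the scenario
# data used to train an AI-driven testing tool, one common source of bias.
from collections import Counter

# Counts of labeled test scenarios by feature area (illustrative numbers).
scenario_counts = Counter(
    {"checkout": 60, "login": 30, "accessibility": 5, "localization": 5}
)
total = sum(scenario_counts.values())

# Warn when any feature area falls below a chosen share of the data; the 10%
# threshold is arbitrary and would be tuned per project.
for feature, count in scenario_counts.items():
    share = count / total
    if share < 0.10:
        print(f"WARNING: '{feature}' is only {share:.0%} of training scenarios")
```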

Podcast: AI testing AI? A look at CriticGPT

OpenAI recently announced CriticGPT, a new AI model that provides critiques of ChatGPT responses in order to help the humans training GPT models better evaluate outputs during reinforcement learning from human feedback (RLHF). According to OpenAI, CriticGPT isn’t perfect, but it does help trainers catch more problems than they do on their own. But is adding more AI into the quality step such a good idea? In the latest episode of our podcast, we spoke with Rob Whiteley, CEO of Coder, about this idea. Here is an edited and abridged version of that conversation: A lot of people are working with ChatGPT, and we’ve heard all about hallucinations and all kinds of problems, you know, violating copyrights by plagiarizing things and all this kind of stuff. So OpenAI, in its wisdom, decided that it would have an untrustworthy AI be checked by another AI that we’re now supposed to trust is going to be better than their first AI. So is that a bridge too far for you? I think on the surf…

OpenAI launches fine-tuning for GPT-4o

Developers will now be able to fine-tune GPT-4o to get more customized responses that are suited to their unique needs. With fine-tuning, GPT-4o can be improved using custom datasets, resulting in better performance at a lower cost, according to OpenAI. For example, developers can use fine-tuning to customize the structure and tone of a GPT-4o response, or have it follow domain-specific instructions. According to OpenAI, developers can start seeing results with as few as a dozen examples in a training data set. “From coding to creative writing, fine-tuning can have a large impact on model performance across a variety of domains. This is just the start—we’ll continue to invest in expanding our model customization options for developers,” OpenAI wrote in a blog post. The company also explained that it has put in place safety guardrails for fine-tuned models to prevent misuse. It is continuously running safety evaluations and monitoring usage to ensure that these models are meeting…
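As a rough sketch of what kicking off such a job looks like with the OpenAI Python SDK, the file name and model snapshot identifier below are assumptions to verify against OpenAI’s fine-tuning documentation.

```python
# Minimal sketch of starting a GPT-4o fine-tuning job with the OpenAI Python
# SDK. File name and model snapshot are assumptions, not from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is JSONL, one chat-formatted example per line, e.g.
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}
training_file = client.files.create(
    file=open("train_examples.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed snapshot name; verify before use
)
print(job.id, job.status)
```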

Microsoft shares first public preview of SharePoint Framework 1.20

Microsoft has released the first public preview of the upcoming SharePoint Framework 1.20 release, which is expected to reach general availability in September. SharePoint Framework enables developers to build applications or experiences in other Microsoft 365 apps that utilize data from SharePoint. The 1.20 preview includes two main additions, both related to building for the employee engagement platform Microsoft Viva. First, it introduces new data visualization options — bar, pie, and donut chart types — for the Viva Connections card view. This could allow for dashboards that include a donut chart showing departments and their budgets, or a bar chart showing the company’s sales distribution across different products, Sri Harsha Chilukuri, a software engineer at Microsoft, explained in a video presentation. Next, it adds support for HTML Quick Views for Viva Connections cards, which, as the name implies, provide a quick view of information, such as a listing of r…