
Posts

Validating names in databases with the help of Melissa’s global name verification service

Companies that collect data need to ensure it is valid in order to actually make good use of it. Making sure they have the correct names in their database can help establish a good customer relationship by supporting a customer's sense of identity. Think back to times when you've signed up for a service and then received an automated email that says "dear x" instead of your name, or perhaps uses your last name instead of your first. It's easy to fill out a form incorrectly and thus have your information listed incorrectly in a company's database. When situations like this happen and a company reaches out using that incorrect information, it can hurt the brand's reputation. Therefore, validating database names can be beneficial. Validating names, however, isn't the easiest process. Unlike email validation, where there's a specific format an address has to follow, or address verification, where there is a set number of valid addresses, the possibilities for differ
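To see why simple pattern matching falls short for names, consider a naive validator sketched below (a hypothetical illustration, not Melissa's actual service). A letters-only rule rejects many perfectly real names:

```python
import re

# Naive validator: ASCII letters only. A common but flawed first attempt.
NAIVE_NAME = re.compile(r"^[A-Za-z]+$")

def naive_is_valid(name: str) -> bool:
    return bool(NAIVE_NAME.match(name))

# Real names the naive rule wrongly rejects:
for name in ["O'Brien", "José", "Mary-Jane", "van der Berg"]:
    print(name, naive_is_valid(name))  # all False
```

Apostrophes, diacritics, hyphens, and multi-word surnames all break the rule, which is why dedicated verification services exist.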

What NIST’s newly approved post-quantum algorithms mean for the future of cryptography

Quantum computing has the potential to one day break today's encryption algorithms, requiring a proactive approach to finding new and innovative ways to protect data that can withstand those upcoming capabilities. Last week, NIST announced the approval of three post-quantum cryptographic algorithms that cannot be broken by a quantum computer. Here's what leaders in the industry have been saying about this announcement and what it means for the future: Tim Hollebeek, industry and standards technical strategist at DigiCert: "Today's quantum computers are small and experimental, but they are rapidly becoming more capable, and it is only a matter of time before cryptographically relevant quantum computers (CRQCs) arrive. These are quantum computers that are powerful enough to break the asymmetric cryptography used to protect communications and devices on the internet, and they could arrive in as little as 5-10 years. The good news is that the problem can be solved by switching t

Deepfakes: An existential threat to security emerges

For quite some time, discussion around the dangers of deepfakes was mostly rooted in the hypothetical — focusing on the question of how these tools could be used to cause harm, rather than on real-world instances of misuse. However, it wasn't long before some of those fears became realities. In January, a number of New Hampshire residents received a campaign call featuring a deepfaked voice simulation of President Biden urging voters to skip voting in the state's Democratic primaries. In a year in which nearly 40% of the world's nations are holding elections, this AI-enabled technology is increasingly being seized upon as a means of manipulating the masses and tipping the scales of public opinion in service of particular political parties and candidates.

The Most Immediate Threats

With that said, perhaps the most oft-overlooked threat posed by deepfake technologies operates almost entirely outside the political realm — cybercrime. What's worse, it may well be the most mature appl

Penpot – SD Times Open Source Project of the Week

Penpot is a tool designed to bridge the gap between designers and developers and provide more collaboration throughout the process. According to the project maintainers, handing off projects from designers to developers can be difficult because they are often using different tools, and the vision of the project may get lost along the way. In Penpot, the interfaces that designers create are expressed as code so that developers can pick up right where the designers left off, eliminating some of the struggles that are common to the handoff process. It features a web-based multiplayer mode that allows for real-time collaboration, with multiple people working within the same design at once. There is also a whiteboard feature that allows designers and developers to come together to brainstorm, test ideas, and organize workshops. Designers can also save their design elements and components so that they can easily be reused, allowing for better consistency bet

CockroachDB retires self-hosted Core offering, makes Enterprise version free for companies under $10M in annual revenue

CockroachDB has announced a change to its licensing model that will allow startups and smaller companies to use it for free, while still making money off the larger companies that can afford to pay for it. Previously, CockroachDB was available as a free Core version designed more for individuals and startups, and a paid Enterprise version designed for larger companies that offered advanced capabilities for scaling. Starting November 18, 2024, CockroachDB Core will be retired and the company will only offer CockroachDB Enterprise, which will be free to individual developers, students, and companies that make less than $10 million in annual revenue. Companies that don't qualify for free usage can now use a 30-day free trial. These changes only apply to the self-hosted version; the cloud version will remain the same, the company said. According to Cockroach Labs, this change gives everyone — regardless of whether they are paying customers — a

AI Regulations are coming: Here’s how to build and implement the best strategy

In April 2024, the National Institute of Standards and Technology released a draft publication aimed at providing guidance around secure software development practices for generative AI systems. In light of these requirements, software development teams should begin implementing a robust testing strategy to ensure they adhere to the new guidelines. Testing is a cornerstone of AI-driven development, as it validates the integrity, reliability, and soundness of AI-based tools. It also safeguards against security risks and ensures high quality and optimal performance. Testing is particularly important within AI because the system under test is far less transparent than a coded or constructed algorithm. AI has new failure modes and failure types, such as tone of voice, implicit biases, inaccurate or misleading responses, regulatory failures, and more. Even after completing development, dev teams may not be able to confidently assess the reliability of the system under different condition
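As a sketch of what automated guardrail testing might look like (a hypothetical stand-in model and an example policy list, not taken from the NIST draft), a team could assert simple invariants on generated output:

```python
# Hypothetical stand-in for the generative system under test.
def generate_reply(prompt: str) -> str:
    return f"Thank you for your question about {prompt}. Here is some guidance."

# Example policy list; a real suite would draw these from compliance requirements.
BANNED_PHRASES = ["guaranteed returns", "medical diagnosis"]

def check_reply(prompt: str) -> str:
    """Run basic guardrail assertions of the kind a genAI test suite automates."""
    reply = generate_reply(prompt)
    assert reply.strip(), "empty response"
    assert len(reply) < 2000, "response too long"
    assert not any(p in reply.lower() for p in BANNED_PHRASES), "policy violation"
    return reply

print(check_reply("refund policies"))
```

Checks like these catch only the simplest failure modes; tone, bias, and accuracy testing typically require human review or model-graded evaluations on top.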

GitHub’s Copilot Autofix generates remediation fixes for code vulnerabilities

GitHub is rolling out a new feature to help developers not only find vulnerabilities, but also fix them quickly. Copilot Autofix in GitHub Advanced Security (GHAS) analyzes vulnerabilities, explains their importance, and offers suggestions on how to remediate them. "For developers who aren't necessarily security experts, Copilot Autofix is like having the expertise of your security team at your fingertips while you review code," Mike Hanley, chief security officer and SVP of engineering at GitHub, wrote in a blog post. When GHAS finds a vulnerability, there is now a button that developers can click to have Copilot Autofix generate a fix. Then, developers can either dismiss the suggestion or have it create a new pull request with a code change that remediates the issue. It can generate fixes for dozens of classes of vulnerabilities, including SQL injection and cross-site scripting. Copilot Autofix was first introduced as a public beta in March, and according to the company, bet
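For context on the kind of remediation involved (an illustrative example, not Copilot Autofix's actual output), a typical SQL injection fix replaces string-built queries with parameterized ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated straight into the SQL string.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Remediated: the driver binds the value as a parameter.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# The unsafe version can be tricked into returning every row:
print(find_user_unsafe("' OR '1'='1"))  # [('alice',)]
print(find_user_safe("' OR '1'='1"))    # []
```

The parameterized version treats the attacker's string as data rather than SQL, which is the standard fix static analyzers suggest for this vulnerability class.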

Anthropic introduces prompt caching to reduce latency and costs

Anthropic has introduced a new feature to some of its Claude models that will allow developers to cut down on prompt costs and latency. Prompt caching allows users to cache frequently used context so that it can be reused in future API calls. According to the company, by equipping the model with background knowledge and example outputs from the past, costs can be reduced by up to 90% and latency by up to 85% for long prompts. There are several use cases where prompt caching would be useful, including keeping a summarized version of a codebase for coding assistants to use, providing long-form documents in prompts, and providing detailed instruction sets with several examples of desired outputs. Users could also use it to essentially converse with long-form content like books, papers, documentation, and podcast transcripts. According to Anthropic's testing, chatting with a book with 100,000 tokens cached takes 2.4 seconds, whereas doing the same without information cached
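A rough sketch of how a cached request body might be structured, based on Anthropic's documented `cache_control` marker (field names and the model ID should be verified against the current API reference before use):

```python
# Sketch of a Messages API request body that marks a large system block
# for caching. The cache_control field follows Anthropic's documentation;
# verify details against the current API reference before relying on this.
def build_cached_request(big_context: str, question: str) -> dict:
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": big_context,
                # Marks this block for reuse across subsequent calls.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": question}],
    }

req = build_cached_request("<codebase summary goes here>", "Where is auth handled?")
print(req["system"][0]["cache_control"])  # {'type': 'ephemeral'}
```

The large, stable context goes in the cached block while the per-turn question stays in the messages list, so follow-up calls pay only for the new tokens.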

Infragistics Ultimate 24.1 adds React code generation to App Builder

Infragistics has announced the latest release of its UX/UI platform, Infragistics Ultimate. The 24.1 release includes new features such as React code generation in App Builder, advanced Grids for major web frameworks, and better data visualizations. "Whether you're a solo developer or part of a large digital product team, Infragistics Ultimate offers the tools you need to create beautiful, high-performance apps more efficiently," said Jason Beres, SVP of developer tools at Infragistics. "Our new iteration brings a host of new features that significantly improve, streamline and modernize app building." The low-code platform App Builder can now generate code for React, which simplifies the design-to-code process for developers building with that framework. App Builder also now has the concepts of Variables, State Management, and Selection events, enabling Master-Detail design patterns to be created within the platform, which wasn't previously possible. Maste

Google begins rolling out voice capabilities in Gemini with Gemini Live

Google is trying to make its AI assistant Gemini more useful by adding a conversation mode called Gemini Live, similar to how conversations in ChatGPT work. Gemini Live has a voice mode, so that users can speak their questions out loud rather than typing. This voice mode works even when the app is in the background or the phone is locked, which allows conversations to happen even when the user isn't directly interacting with the Gemini app. According to Google, users can also interrupt Gemini as it is reading its response out loud to ask follow-up questions. "For years, we've relied on digital assistants to set timers, play music or control our smart homes. This technology has made it easier to get things done and saved valuable minutes each day. Now with generative AI, we can provide a whole new type of help for complex tasks that can save you hours. With Gemini, we're reimagining what it means for a personal assistant to be truly helpful. Gemini is evolving to provide AI-powered mobi

Q&A: Developing software-defined vehicles

Cars today are complex pieces of software. You've got the infotainment system connected to your phone. You've got the lane keep assist that lets you know when you're starting to sway from your lane. You may even have a backup alert system that warns you that there's a person walking near your car. On top of all the other components a car needs to function, software is now in the mix, creating a complex ecosystem that cannot fail at any point. In the most recent episode of our podcast What the Dev, we were joined by Cameron van Orman, chief strategy & marketing officer and GM of Automotive Solutions at Planview, to talk about how automakers are managing their software development life cycles. Here is an edited and abridged version of that conversation:

Let's talk a little bit about the complexity in making these cars happen, the software. What goes into making these autonomous vehicles?

As you said, David, it's very complex. You're taking an industr

NIST approves first three cryptographic algorithms capable of withstanding quantum computers

The National Institute of Standards and Technology (NIST) has announced its first three post-quantum cryptographic algorithms, which will be able to safeguard data even in the event that quantum computing advances to the point that today's cryptography can be broken. "The official publication of these algorithms marks a crucial milestone to advancing the protection of the world's encrypted data from cyberattacks that could be attempted through the unique power of quantum computers, which are rapidly progressing to cryptographic relevancy. This is the point at which quantum computers will harness enough computational power to break the encryption standards underlying most of the world's data and infrastructure today," IBM, which developed two of these new standards, wrote in a statement. According to NIST, experts predict that we may reach that point of cryptographic relevancy within the next decade. This set of three new algorithms uses different math problems that even a quantum co

Open Model Initiative now hosted by Linux Foundation

The Linux Foundation is making it easier for developers to find open-source AI models by hosting the Open Model Initiative (OMI). The OMI was first launched earlier this summer by Invoke, CivitAI, and Comfy Org as a way to help creators find models that don't have restrictive licenses. "From the beginning, we have believed that the right way to build these AI models is with open licenses. Open licenses allow creatives and businesses to build on each other's work, facilitate research, and create new products and services without restrictive licensing constraints," the OMI wrote in its initial announcement. The Linux Foundation believes that by bringing the OMI under its umbrella, it is "taking a significant step towards making AI accessible and beneficial for everyone, building an environment where creativity and progress in AI can thrive without barriers," Jim Zemlin, executive director of the Linux Foundation, said. Some of the current goals of the OMI are to establish a

Pros and cons of 5 AI/ML workflow tools for data scientists today

With businesses uncovering more and more use cases for artificial intelligence and machine learning, data scientists find themselves looking closely at their workflow. There are a myriad of moving pieces in AI and ML development, and they all must be managed with an eye on efficiency and flexible, strong functionality. The challenge now is to evaluate which tools provide which functionalities, and how various tools can be augmented with other solutions to support an end-to-end workflow. So let's see what some of these leading tools can do.

DVC

DVC offers the capability to manage text, image, audio, and video files across the ML modeling workflow. The pros: It's open source, and it has solid data management capabilities. It offers custom dataset enrichment and bias removal. It also logs changes in the data quickly, at natural points during the workflow. While you're using the command line, the process feels quick. And DVC's pipeline capabilities are language-agnostic. The cons: DVC's A
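To illustrate the core idea behind logging data changes at points in the workflow, here is a toy sketch of content-addressed tracking (the primitive that tools like DVC build on, not DVC's actual implementation):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash; a changed hash means the dataset changed."""
    return hashlib.sha256(data).hexdigest()[:12]

# Record a fingerprint at natural points in the workflow. If the hash
# differs from the last recorded one, downstream stages should re-run.
v1 = fingerprint(b"id,label\n1,cat\n2,dog\n")
v2 = fingerprint(b"id,label\n1,cat\n2,dog\n3,bird\n")
print(v1 != v2)  # True: the data changed between versions
```

Hashing file contents rather than timestamps is what makes this style of tracking reliable across machines and storage backends.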

Dioptra – SD Times Open Source Project of the Week

A few weeks ago, the National Institute of Standards and Technology (NIST) released Dioptra, an open source tool for testing the trustworthiness of AI models. Dioptra offers a common platform for assessing models throughout their life cycle, from when they are being developed to when they are acquired by other parties who then want to ensure their trustworthiness. "Our systems increasingly rely on Machine Learning (ML) algorithms and models to perform essential functions," NIST wrote in a post. "As users of these systems, we must implicitly trust that the models are working as designed. Establishing the trustworthiness of an ML model is especially hard, because the inner workings are essentially opaque to an outside observer." NIST defines several characteristics that a trustworthy AI model must have: "valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair – with harmful bias managed." It offers several
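For a flavor of the kind of assessment such a platform can run, here is a toy robustness probe with a stand-in classifier (an illustrative sketch, not Dioptra's actual API):

```python
# Toy classifier standing in for a model under evaluation.
def classify(x: float) -> str:
    return "positive" if x >= 0.5 else "negative"

def robustness_probe(x: float, eps: float = 0.01) -> bool:
    """True if small input perturbations leave the prediction unchanged."""
    base = classify(x)
    return all(classify(x + d) == base for d in (-eps, eps))

print(robustness_probe(0.9))  # True: far from the decision boundary
print(robustness_probe(0.5))  # False: flips under a tiny perturbation
```

Systematically probing a model this way, across perturbations and attack strategies, is one way an outside party can build evidence of trustworthiness without seeing the model's inner workings.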