
Posts

ScaleOut Software Delivers Next-Generation Caching

ScaleOut Software today is introducing Version 6 of its ScaleOut Product Suite, its distributed caching and in-memory data grid software. This release introduces breakthrough capabilities not found in today’s distributed caching software products. At its core is ScaleOut Active Caching, a new technology that boosts performance by running application code directly within the distributed cache. It enables faster, more scalable applications across on-premises and cloud environments, benefiting industries from e-commerce and financial services to transportation, gaming, and beyond. Version 6 lets users host modules of application code and run them within the distributed cache. To enable fast execution, a copy of each module runs on all servers within the cache and simultaneously processes application requests. The product also provides a new management UI for dynamically deploying modules, monitoring performance, and visualizing cached data. ScaleOut Active Caching dep...
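
To make the active-caching idea concrete, here is a minimal, hypothetical sketch in plain Java. None of this is ScaleOut’s actual API; the CacheModule interface, InventoryModule, and CacheNode names are invented purely to illustrate the pattern the announcement describes: the same module is deployed to every cache server and processes requests against the data partition that server owns.

```java
// Hypothetical sketch only, not ScaleOut's API: each cache server hosts the same module
// and applies it to the partition of data it owns, so requests are processed next to the
// cached values instead of being shipped back to the client for processing.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface CacheModule<K, V> {
    V handle(K key, V current, String request);   // invoked on the server that owns the key
}

class InventoryModule implements CacheModule<String, Integer> {
    @Override
    public Integer handle(String sku, Integer stock, String request) {
        // Example: reserve one unit of stock when a "reserve" request arrives.
        return "reserve".equals(request) && stock != null && stock > 0 ? stock - 1 : stock;
    }
}

class CacheNode {
    private final Map<String, Integer> localPartition = new ConcurrentHashMap<>();
    private final CacheModule<String, Integer> module = new InventoryModule();

    // Every node runs this in parallel against its own partition, which is what lets the
    // cache scale out request processing as servers are added.
    void process(String sku, String request) {
        localPartition.compute(sku, (k, v) -> module.handle(k, v, request));
    }
}
```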

Java 25 LTS is now available with features like module import declarations, compact source files

Java 25 was released today as the latest Long Term Support (LTS) version of the language, meaning it will be supported by Oracle for at least eight more years. Oracle releases a new LTS version of Java every two years, and the previous LTS release was Java 21, back in 2023. “If you look back from 21 to 25, and all the things that have come in 22, 23, and 24, then I believe 25 definitely could be one of the first versions where people are looking at it and going ‘I can’t not get to 25, I can’t stay back,’” said Chad Arimura, VP of Java Developer Relations at Oracle. New language features: This release introduces several stable language features, including module import declarations, compact source files and instance main methods, and flexible constructor bodies. Module import declarations allow developers to import all of the packages exported by a module with a single declaration, without the importing code itself needing to be part of a module. This functionality will make it easier for developers to reuse libraries,...
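
For illustration, here is roughly what two of those features look like together in a single source file; the file name and printed text are our own, not taken from the release. Compact source files can be run directly with the java launcher (java HelloJava25.java).

```java
// HelloJava25.java — a minimal sketch of Java 25's compact source files with an instance
// main method, plus a module import declaration.
import module java.base;   // module import declaration: imports every package exported by
                           // java.base (compact source files already do this implicitly,
                           // so the line is shown here purely for illustration)

// No class declaration needed: the file itself is a compact source file.
void main() {              // instance main method; no "public static" or String[] args required
    List<String> features = List.of(
            "module import declarations",
            "compact source files and instance main methods",
            "flexible constructor bodies");
    IO.println("Java 25 LTS adds: " + String.join(", ", features));   // java.lang.IO console helper, finalized in 25
}
```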

Microsoft shares Insiders preview of Visual Studio 2026

Microsoft has launched its Insiders preview program for Visual Studio 2026, providing insights into what developers can expect from the upcoming release. One of the main highlights is that the company plans to integrate AI even further into the IDE, describing it as being “woven into the daily rhythms of coding” as opposed to being “bolted on.” For example, when opening a new codebase, the IDE will suggest the kind of tests that are typically written in the repo and keep docs and comments consistent with the code. “Code reviews start with clear, actionable insights about correctness, performance, and security – on your machine, before you ever open a pull request. Through it all, you stay in control. The IDE takes the busy-work; you keep the judgment. The result is simple: you move faster, and your code gets better,” Microsoft wrote in a blog post. Microsoft also says that performance will be significantly improved across all areas, from opening solutions to navigating code to bu...

Honeycomb launches AI observability suite for developers

Observability provider Honeycomb has launched an AI-native observability suite optimized for developers. The new AI-powered Honeycomb Intelligence accelerates debugging and code delivery by bringing observability into the IDE, improves investigations with an interactive co-pilot, and automatically detects performance anomalies. Honeycomb Intelligence provides a collaborative assistant that can deliver sub-second query responses across billions of events—performance that makes real-time AI assistance possible, according to the company. Honeycomb’s event-based observability model means AI insights get richer as your systems grow more complex, not slower or more expensive. Honeycomb Intelligence introduces three new products that address critical needs in modern engineering workflows: Honeycomb MCP Server accelerates debugging and code delivery by bringing Honeycomb’s powerful observability model directly into AI-powered IDEs such as Cursor and Claude Code. Developers can investi...

ServiceNow unveils Zurich AI platform

ServiceNow today unveiled its Zurich platform release, designed to deliver breakthrough innovations with faster multi‑agentic AI development, enterprise‑wide AI platform security capabilities, and reimagined workflows. New intelligent developer tools enable secure vibe coding with natural language to help turn employees into high‑velocity builders and creators and lower the barrier to app creation. Built‑in security capabilities, including ServiceNow Vault Console and Machine Identity Console, natively secure sensitive data across workflows and govern integrations to help organizations scale agentic AI and innovations with confidence. The introduction of autonomous workflows turns data into action through agentic playbooks, uniquely offering the flexibility to apply AI and human input in workflows where and when it’s needed for greater control and efficiency. Enterprise leaders are racing to move beyond table‑stakes AI implementations to unlock transformative, tangible results...

Progress Software unveils RAG-as-a-Service platform

Following its acquisition of retrieval-augmented generation AI solution innovator Nuclia in June, Progress today is announcing the launch of Progress Agentic RAG, a RAG-as-a-Service platform designed to empower companies to transform unstructured data into actionable intelligence. The retrieval-augmented generation platform can help organizations struggling to get value from Large Language Models by providing results based on business data, both structured and unstructured, according to the company announcement. “Progress Agentic RAG is redefining how businesses interact with their data,” said Yogesh Gupta, CEO of Progress Software, in a statement. “By combining agentic intelligence with retrieval-augmented generation, we’re making AI practical, scalable and trustworthy for every organization. This platform unlocks the power of unstructured data—across formats and languages—through verifiable, no-code AI search. We believe Progress Agentic RAG is the easiest-to-use solution on the ma...
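
The announcement does not include code, but the retrieval-augmented generation loop it describes (retrieve the most relevant business documents, then ground the model’s prompt in them so answers stay verifiable) can be sketched generically. The sketch below is a conceptual illustration in plain Java; the class, method names, and naive keyword scoring are ours, not the Progress Agentic RAG API.

```java
// Conceptual RAG sketch, not the Progress Agentic RAG API: score documents against the
// question, keep the top k, and build a prompt that grounds the model in those documents.
import java.util.*;
import java.util.stream.Collectors;

class RagSketch {
    static List<String> retrieve(String question, List<String> docs, int k) {
        Set<String> terms = new HashSet<>(Arrays.asList(question.toLowerCase().split("\\W+")));
        return docs.stream()
                .sorted(Comparator.comparingLong((String d) ->
                        Arrays.stream(d.toLowerCase().split("\\W+"))
                              .filter(terms::contains).count()).reversed())
                .limit(k)
                .collect(Collectors.toList());
    }

    static String buildPrompt(String question, List<String> context) {
        // The retrieved passages become the only evidence the model is asked to use,
        // which is what makes the answers checkable against business data.
        return "Answer using only the context below.\n\nContext:\n- "
                + String.join("\n- ", context) + "\n\nQuestion: " + question;
    }
}
```

In a production system the keyword scoring above would be replaced by vector or hybrid retrieval over indexed unstructured data, but the shape of the pipeline stays the same.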

Surviving the AI Takeover in QA: How to Join the Top 1%

A recent email from ASTQB warned testers that to survive in an AI-driven world, they’ll need “broad testing knowledge, not just basic skills.” The advice isn’t wrong—but it misses the bigger picture. The real disruption is already here, and it’s moving faster than most realize. AI systems like AI Script Generation (AISG) and GENI are already generating, executing, and maintaining test cases hundreds of times faster than humans. In fact, enterprises are deploying these AI-first platforms today, running thousands of tests in hours with no recorders, no scripting, and no human intervention. This means the very roles ASTQB says you should “protect” by gaining broader certification—manual test case writers, Selenium scripters, recorder users—are already obsolete. The work has shifted to the machines. So the critical question is: what does it actually take to survive and thrive in QA now? The End of the Tester-as-Scripter Era: Let’s be blunt. If y...

The Value-Driven AI Roadmap

While the industry is racing to develop and embed artificial intelligence in its systems, cultural resistance, a skills gap, and the speed at which AI is changing are just a few of the reasons why many AI projects fail. Because of that, most attempts at adopting AI into organizations never make it past the pilot stage, according to Lance Knight, chief value stream architect at ValueOps by Broadcom. Part of the problem is that the technology that is supposed to make you more effective at speed and scale is itself changing all around you at the same time. Machine learning and natural language processing have been understood and in use for quite some time now, but generative AI and the use of agents have taken it all to a different level. And because of that, C-suites are insisting their organizations need to adopt AI now, without even having a good plan in place for where it can be used to provide the most value. Nor, in many cases, do their current employees have the skills to...

This week in AI updates: Mistral’s new Le Chat features, ChatGPT updates, and more (September 5, 2025)

Mistral announces new connectors, Memories: Mistral announced that its generative AI chat Le Chat now offers over 20 new connectors, including tools like Asana, Atlassian, Box, Databricks, GitHub, Outlook, Snowflake, Stripe, and Zapier. Users will also now be able to add their own connectors via MCP. The company also announced a beta for Memories, which allows users to set preferences to get more personalized responses. They can also import their memories from ChatGPT. Both of these features are available for any Le Chat user, including free users. OpenAI adds several minor updates to ChatGPT: The company announced that users can now branch off conversations in ChatGPT to explore a specific direction while preserving the original thread. Additionally, Projects are now available to free users, and the company has added larger file uploads per project, the option to select colors and icons, and project-only memory controls. Google announces new open embeddin...

Beyond the benchmarks: Understanding the coding personalities of different LLMs

Most reports comparing AI models are based on performance benchmarks, but a recent research report from Sonar takes a different approach: grouping models by their coding personalities and looking at the downsides of each when it comes to code quality. The researchers studied five different LLMs using the SonarQube Enterprise static analysis engine on over 4,000 Java assignments. The LLMs reviewed were Claude Sonnet 4, OpenCoder-8B, Llama 3.2 90B, GPT-4o, and Claude Sonnet 3.7. They found that the models had different traits, such as Claude Sonnet 4 being very verbose in its outputs, producing over 3x as many lines of code as OpenCoder-8B for the same problem. Based on these traits, the researchers divided the five models into coding archetypes. Claude Sonnet 4 was the “senior architect,” writing sophisticated, complex code, but introducing high-severity bugs. “Because of the level of technical difficulty attempted, there were more of these issues,” said Donald Fischer,...

Neo4j introduces new graph architecture that allows operational and analytics workloads to be run together

The graph database company Neo4j today announced Infinigraph, a new distributed graph architecture that allows Neo4j’s database to run both operational and analytical workloads in one system. According to the company, silos often keep these workloads separate, leading AI applications to suffer, decision-making to be delayed, and costs to increase as a result of complex integration. Currently, some of the workarounds companies go through to bring these workloads together include having one database and one copy of data, one database and two engines (column-based and row-based), or having two or more synchronized databases. “Infinigraph eliminates the need for these workarounds. It enables organizations to run both analytical and transactional workloads in the same system, at unprecedented scale, while avoiding ETL pipelines, sync delays, and redundant infrastructure,” the company wrote in a blog post. Some examples of use cases that Infinigraph unlocks include the ability to detect ...
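
As a rough illustration of what running both workloads in one system means in practice, the sketch below uses the standard Neo4j Java driver to issue a transactional write and an analytical aggregate against the same database. The connection details, the Account/PAID data model, and the queries are placeholder assumptions for illustration; nothing here is specific to Infinigraph itself.

```java
// Sketch using the standard Neo4j Java driver (URI, credentials, and data model are placeholders).
// The point of a combined architecture is that the operational write and the analytical
// aggregate below hit the same system, with no ETL pipeline or sync delay between them.
import java.util.Map;
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

public class CombinedWorkloads {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                     AuthTokens.basic("neo4j", "password"));
             Session session = driver.session()) {

            // Operational workload: record a payment as it happens.
            session.executeWrite(tx -> tx.run(
                    "MERGE (a:Account {id: $from}) MERGE (b:Account {id: $to}) " +
                    "CREATE (a)-[:PAID {amount: $amount}]->(b)",
                    Map.of("from", "acct-1", "to", "acct-2", "amount", 42)).consume());

            // Analytical workload: aggregate over the whole graph in the same database.
            long total = session.executeRead(tx -> tx.run(
                    "MATCH (:Account)-[p:PAID]->(:Account) RETURN sum(p.amount) AS total")
                    .single().get("total").asLong());
            System.out.println("Total payments so far: " + total);
        }
    }
}
```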

Kong Acquires OpenMeter to Unlock AI and API Monetization for the Agentic Era

Kong Inc., a leading developer of cloud API and AI technologies, today announced the acquisition of OpenMeter, a leading open-source and SaaS platform for usage-based metering and billing. The acquisition will bring usage-based monetization capabilities to Kong Konnect, the unified API platform. This will enable organizations to productize and bill for their APIs, AI, and data streams, seamlessly turning digital assets into new sources of revenue. In the AI era, billing becomes a metering problem. Agents will exchange labor via APIs. Large Language Models (LLMs) – if not chats – are already sold via APIs. This requires a new kind of platform that brings together API infrastructure and monetization all in one: a unified approach to enforcement, security, and monetization. As AI adoption accelerates, digital connections are no longer deterministic or limited to the pace of human activity. Instead, they are continuous, machine-driven, and orders of magnitude larger. AI agents can trigg...

Cloudsmith launches ML Model Registry to provide a single source of truth for AI models and datasets

Cloudsmith, provider of an artifact management platform, announced its ML Model Registry, which can act as a single source of truth for all AI models and datasets a company is using. The registry integrates with the Hugging Face Hub and SDK so that developers can push, pull, and manage models and datasets from Hugging Face and then use Cloudsmith to maintain centralized control, compliance, and visibility. Once data has been pushed from Hugging Face to Cloudsmith, security and compliance data can be utilized by Enterprise Policy Management so that teams can apply consistent policies to automatically quarantine, block, and approve specific models. It can also integrate with training, validation, and deployment pipelines, and provides protection of proprietary models and datasets via fine-grained access controls, entitlement tokens, and audit trails. Models and datasets are also managed in the same repositories as a company’s other artifacts, and can be organized by project, enviro...

Microsoft Graph CLI to be retired

Microsoft has announced that it is retiring the Microsoft Graph CLI, with a deprecation phase starting now and full retirement scheduled for August 28th, 2026. During the deprecation phase, Microsoft will not add any new features and will only address critical vulnerabilities. According to Microsoft, this change is part of the company’s efforts to streamline the developer experience for Microsoft Graph by focusing its attention on PowerShell. The company recommends that users begin switching over to the Microsoft Graph PowerShell SDK, which offers broad API coverage and regular updates, integration with scripting and automation workflows, community and documentation support, and long-term support with Microsoft’s servicing commitments. The company explained that it initially released the CLI to offer a lightweight, cross-platform tool that developers could use to interact with the Microsoft Graph APIs. However, it was experiencing declining usage due to its limited extensibi...

The state of DevOps and AI: Not just hype

Talk to any DevOps vendor today, and they’ll proudly tell you about their AI roadmap. Most vendors have already built something that will tick the checkbox, if that’s among your requirements. But checkboxes don’t solve problems. A feature that’s hard to use or adds extra manual steps to a developer’s processes doesn’t save you anything — and may end up costing you more than you expect. Just like you, vendors today are at the start of their AI journey. In some cases, the proof of concept gets packaged and shipped. The box is checked, the product goes out the door, and now it’s up to you to figure out if it’s worth using. Most DevOps AI Tools Are Still Point Solutions: The truth is that nobody’s using one AI solution to address the entire software development lifecycle (SDLC). The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solu...