Beyond the benchmarks: Understanding the coding personalities of different LLMs

Most reports comparing AI models are based on performance benchmarks, but a recent research report from Sonar takes a different approach: grouping models by their coding personalities and examining the code quality downsides of each.

The researchers studied five different LLMs using the SonarQube Enterprise static analysis engine on over 4,000 Java assignments. The LLMs reviewed were Claude Sonnet 4, OpenCoder-8B, Llama 3.2 90B, GPT-4o, and Claude Sonnet 3.7.

They found that the models had different traits, such as Claude Sonnet 4 being very verbose in its outputs, producing over 3x as many lines of code as OpenCoder-8B for the same problem.

Based on these traits, the researchers divided the five models into coding archetypes. Claude Sonnet 4 was the “senior architect,” writing sophisticated, complex code, but introducing high-severity bugs. “Because of the level of technical difficulty attempted, there were more of these issues,” said Donald Fischer, a VP at Sonar.

OpenCoder-8B was the “rapid prototyper” because it was the fastest and most concise of the models, though it also potentially created technical debt, making it best suited to proofs of concept. It had the highest issue density of all the models, at 32.45 issues per thousand lines of code.

Llama 3.2 90B was the “unfulfilled promise,” as its scale and backing imply it should be a top-tier model, but it only had a pass rate of 61.47%. Additionally, 70.73% of the vulnerabilities it created were of “BLOCKER” severity, the most severe classification, which prevents testing from continuing.

GPT-4o was an “efficient generalist,” a jack-of-all-trades that is a common choice for general-purpose coding assistance. Its code wasn’t as verbose as the senior architect’s or as concise as the rapid prototyper’s, falling somewhere in the middle. It mostly avoided producing severe bugs, but 48.15% of the bugs it did produce were control-flow mistakes.

“This paints a picture of a coder who correctly grasps the main objective but often fumbles the details required to make the code robust. The code is likely to function for the intended scenario but will be plagued by persistent problems that compromise quality and reliability over time,” the report states.
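
The report does not reproduce code samples, but a control-flow mistake of the kind described usually amounts to a branch that handles the intended scenario while skipping an edge case. A minimal, hypothetical Java sketch (the class and inputs are invented for illustration, not taken from the study):

import java.util.List;

public class DiscountCalculator {
    // Handles the intended scenario (a non-empty cart) but never guards the
    // empty-list edge case, so the division quietly produces NaN.
    static double averageItemPrice(List<Double> prices) {
        double total = 0;
        for (double p : prices) {
            total += p;
        }
        return total / prices.size();
    }

    public static void main(String[] args) {
        System.out.println(averageItemPrice(List.of(10.0, 20.0))); // 15.0
        System.out.println(averageItemPrice(List.of()));           // NaN
    }
}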

Finally, Claude 3.7 Sonnet was a “balanced predecessor.” The researchers found that it was a capable developer that produced well-documented code, but still introduced a large number of severe vulnerabilities.

Though the models did have these distinct personalities, they also shared similar strengths and weaknesses. The common strengths were that they quickly produced syntactically correct code, had solid algorithmic and data structure fundamentals, and efficiently translated code to different languages. The common weaknesses were that they all produced a high percentage of high-severity vulnerabilities, introduced severe bugs like resource leaks or API contract violations, and had an inherent bias towards messy code.
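
A resource leak is the most concrete of those weaknesses to picture. The following is a minimal Java sketch, not taken from the study, of the pattern a static analyzer such as SonarQube typically flags, alongside the fix (the file name is a placeholder):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ConfigLoader {
    // Leaky: the reader is never closed, so the file handle escapes on
    // every call and on any exception thrown by readLine().
    static String firstLineLeaky(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        return reader.readLine();
    }

    // Fixed: try-with-resources closes the reader on every path.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("app.properties")); // placeholder file
    }
}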

“Like humans, they become susceptible to subtle issues in the code they generate, and so there’s this correlation between capability and risk introduction, which I think is amazingly human,” said Fischer.

Another interesting finding of the report is that newer models may be more technically capable, but are also more likely to generate risky code. For example, Claude Sonnet 4 has a 6.3% improvement over Claude 3.7 Sonnet on benchmark pass rates, but the issues it generated were 93% more likely to be “BLOCKER” severity.

“If you think the newer model is superior, think about it one more time because newer is not actually superior; it’s injecting more and more issues,” said Prasenjit Sarkar, solutions marketing manager at Sonar.

How reasoning modes impact GPT-5

The researchers followed up their report this week with new data on GPT-5 and how the four available reasoning modes—minimal, low, medium, and high—impact performance, security, and code quality.

They found that increasing reasoning yields diminishing returns on functional performance. Bumping up from minimal to low raises the model’s pass rate from 75% to 80%, but medium and high only reached pass rates of 81.96% and 81.68%, respectively.

In terms of security, high and low reasoning modes eliminate common vulnerability classes like path traversal and injection, but replace them with harder-to-detect flaws, such as inadequate I/O error-handling. The low reasoning mode had the highest percentage of that issue at 51%, followed by high (44%), medium (36%), and minimal (30%).
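
As a rough illustration of that trade-off, not drawn from the report itself, the Java sketch below validates a user-supplied file name so traversal inputs are rejected, but then swallows the IOException, the sort of inadequate I/O error handling the researchers describe (class and file names are invented):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReportReader {
    private static final Path BASE_DIR = Path.of("reports");

    static String read(String userSuppliedName) {
        // Path traversal is handled: the normalized path must stay inside BASE_DIR.
        Path target = BASE_DIR.resolve(userSuppliedName).normalize();
        if (!target.startsWith(BASE_DIR)) {
            throw new IllegalArgumentException("path escapes base directory");
        }
        try {
            return Files.readString(target);
        } catch (IOException e) {
            // Inadequate I/O error handling: the failure is swallowed, so the
            // caller cannot tell a missing file from an empty report.
            return "";
        }
    }

    public static void main(String[] args) {
        // The file does not exist, but the caller only sees an empty string.
        System.out.println(read("weekly.txt").isEmpty());
    }
}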

“We have seen the path-traversal and injection become zero percent,” said Sarkar. “We can see that they are trying to solve one sector, and what is happening is that while they are trying to solve code quality, they are somewhere doing this trade-off. Inadequate I/O error-handling is another problem that has skyrocketed. If you look at 4o, it has gone to 15-20% more in the newer model.”

There was a similar pattern with bugs: control-flow mistakes decreased beyond minimal reasoning, but more advanced bugs, such as concurrency/threading issues, increased as the reasoning level went up.
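
For readers less familiar with that bug class, a data race on shared state is a typical example of the concurrency/threading issues a static analyzer reports. The Java sketch below is illustrative only and is not taken from the study:

public class HitCounter {
    // Shared mutable state with no synchronization: two threads can read the
    // same value and both write back value + 1, losing increments.
    private static int hits = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                hits++; // not atomic: read, add, write
            }
        };
        Thread a = new Thread(worker);
        Thread b = new Thread(worker);
        a.start();
        b.start();
        a.join();
        b.join();
        // Expected 200000; typically prints less because of the lost updates.
        // An AtomicInteger or a synchronized block would make the count correct.
        System.out.println(hits);
    }
}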

“The trade-offs are the key thing here,” said Fischer. “It’s not so simple as to say, which is the best model? The way this has been viewed in the horse race between different models is which ones complete the most number of solutions on the SWE-bench benchmark. As we’ve demonstrated, the models that can do more, that push the boundaries, they also introduce more security vulnerabilities, they introduce more maintainability issues.”

