The progress that has been made in generative AI (GenAI) technology is nothing short of astounding, and yet, not without its flaws. But these flaws aren’t surprising when you consider that these artificial neural networks are modeled after something equally impressive and flawed: human intelligence. As such, GenAI falls victim to many of the same problems that keep litigators in new golf shoes, namely poor output and the potential for copyright infringement. Both can leave a coder feeling less confident about using the code that GenAI generates.
The case for context
GitHub Copilot can help bolster confidence in code, both in terms of its quality and in mitigating the risk of litigation, by citing its sources. If a GenAI tool can display the original sources of the code it uses to generate its output, much as an online plagiarism checker links back to original source content, a developer would be in a better position to judge whether that code comes from a trusted, friendly source rather than a litigious competitor or unreliable organization.
As Copilot learns to produce creative output from the vast source pool of data lakes and large language models (LLMs), and as the gray matter of its neural networks is further refined by upgrades, it and other GenAI platforms will no doubt smooth out the rough edges of their early days and produce increasingly seamless, more original creative output. As they do, they will also further blur the lines between copyright infringement, fact, and pure fiction, much the way humans already do.
Creativity has always been somewhat derivative
Humans learn to create by mimicking their family, teachers, mentors, and peers. Having absorbed all we can, we begin to produce our own creative work, which often starts with derivative attempts to find our own style. With lots of persistence and a bit of talent, we strive to develop a unique style that sets us apart from our peers and earns us some recognition. This process plays out among musicians, writers, painters, and yes, software developers. As the saying goes, “There is nothing new under the sun.” But with a little creativity, we can put our own spin on our work.
In the creative arts, acts of copyright infringement, whether conscious or not, occur quite regularly. There have been countless high-profile music industry copyright infringement cases involving claims of plagiarized pop songs (Marvin Gaye vs. Robin Thicke & Pharrell Williams; Spirit vs. Led Zeppelin; etc.) and works of literary fiction that have sought to define what is protected by copyright law. The most egregious instances may be punished through legal action, especially where there is some compensation to be gained. The fact that these cases take so long to resolve says something about the quality of the derivative work: were there very clear evidence of copyright infringement, litigators would make short work of it. It is within the smoothness of the edges, the lines distinguishing one work from another, that originality exists.
GenAI is not yet at the level of creativity required to produce truly brilliant, original work. And that is precisely where developer skills come into play.
The road to human confidence
Already, GitHub Copilot offers a “Suggestions matching public code” filter that helps users avoid direct copies of code snippets by checking surrounding code (within a 150-character limit). It’s a safeguard against blatant copying that can help reduce liability for plagiarism. But only an experienced developer has the judgment to know when proposed code is ultimately usable. Between its reliance on poor sources and its proclivity for outright hallucinations, GenAI cannot be trusted to write code without human oversight. Its role as an assistant, however, is clearly valuable, particularly when asked the right questions in the right syntax.
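To make the idea concrete, here is a minimal sketch of how a filter of this general kind could work; this is not GitHub’s actual algorithm, and the function name, window size, and snippet corpus are all hypothetical. It slides a fixed-size window over a suggestion and flags any verbatim overlap with known public code:

```python
# Illustrative sketch only (not GitHub's implementation): flag a
# suggestion if any fixed-size window of it appears verbatim in a
# corpus of known public snippets.

def matches_public_code(suggestion: str, public_snippets: list[str],
                        window: int = 150) -> bool:
    """Return True if any `window`-length slice of `suggestion`
    appears verbatim in one of the public snippets."""
    text = suggestion.strip()
    # Short suggestions are compared whole; longer ones are windowed.
    slices = ([text] if len(text) <= window
              else [text[i:i + window] for i in range(len(text) - window + 1)])
    return any(s in snippet for s in slices for snippet in public_snippets)

# A verbatim copy of a known snippet is flagged; fresh code is not.
public = ["def add(a, b):\n    return a + b\n"]
print(matches_public_code("def add(a, b):\n    return a + b", public))  # True
print(matches_public_code("total = price * quantity", public))          # False
```

A production filter would of course need to be far more robust (normalizing whitespace, identifiers, and formatting), which is exactly why human judgment remains part of the loop.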
The potential for GitHub Copilot and other GenAI technologies to make light work of creative tasks, from prose, poetry, and song to executable computer code, is increasing every day. What these technologies lack is the judgment to know when something is unique and of high quality. What these machines lack in confidence, they provide to the developer in insight and quickly generated ideas. Together, humans and AI can make a very impressive team.
The post GitHub Copilot has a confidence problem appeared first on SD Times.