
Op-Ed: What risks stand before us as the future looks to AI-generated software?

“May you live in interesting times” is a phrase that in many cultures carries a double meaning, serving as both a blessing and a curse.

Phillip Ivancic
Thu, 15 Jun 2023

This dualism is exactly how the cyber security industry views artificial intelligence (AI) and the large language model (LLM) tools that have been at the centre of broader technology conversations in recent months.

As a simple example of AI-related dualism, for many years, Synopsys has been at the forefront of using AI coupled with massive datasets to help our customers automatically triage and prioritise their vulnerability remediation activities.

However, last month, at the Black Hat Asia cyber security conference held in Singapore, Synopsys engineers demonstrated ChatGPT (a common LLM tool) committing software code containing an undetected “SQL injection” flaw directly into a demo software build. SQL injection is one of the most serious classes of vulnerability in software development and, if exploited by a hacker, can allow attackers to read or tamper with backend databases and, in some cases, run malicious code on public-facing websites.
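To make the class of flaw concrete, here is a generic Python illustration (not the code from the Black Hat Asia demonstration) of how small the difference between an injectable query and a parameterised one can be:

import sqlite3

# In-memory database with a small users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com')")

def find_user_unsafe(username: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so an input such as "' OR '1'='1" rewrites the query's logic.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(username: str):
    # Safer: a parameterised query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

# The injection payload returns every row from the unsafe version and nothing from the safe one.
print(find_user_unsafe("' OR '1'='1"))
print(find_user_safe("' OR '1'='1"))

The danger is that the vulnerable version works perfectly during normal testing, which is exactly why it can slip past a developer who trusts the tool’s output.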


Beyond the cyber security industry, technology titans like Apple and Samsung announced last month that they are banning ChatGPT on their own networks, publicly citing concerns that the new technology may inadvertently expose their highly valuable intellectual property.

Interesting times indeed!

For enterprise software development teams, the dualism will be most apparent: AI will bring new productivity benefits alongside new risks.

Even in their earliest iterations, chatbots such as ChatGPT have been able to respond to well-crafted prompts with potentially highly relevant substance in a matter of seconds. The value this offers is that such a tool can carry out the programming grunt work much faster than junior developers, and it works 24/7 without requiring a salary, benefits, or lunch breaks.

The upfront productivity delivered by such AI solutions is the result of models having been trained for months, if not years. As long as the massive amount of training data they rely on is accurate, the output will tend to be accurate too. In practice, however, they are unlikely to yield “perfect” outputs.

Think of it along the same lines as the autocomplete or autocorrect functions available on your smartphone or in your email client. At times, these features do indeed seem “smart”, but at other times they can be well off the mark, requiring human intervention and review to ensure they convey the intended and accurate message.

You see, these LLMs are often trained on datasets riddled with biases and inaccuracies. When it comes to writing code, the output could easily lack important or required information or, worse, as in the SQL injection example above, be vulnerable.

To offer a tangible example, a team of Synopsys researchers recently demonstrated that code written by GitHub’s generative AI development tool, Copilot, neglected to identify an open-source licensing conflict. Interestingly, Copilot was created in partnership with OpenAI — the same research lab that created ChatGPT — and is described as a descendant of GPT-3.

You may be asking yourself why this is important. Well, ignoring software license conflicts can directly impact your bottom line. One famous example involves Cisco, which failed to comply with the requirements of the GNU General Public License, under which the Linux-based software in its routers and other open-source programs was distributed. After discovering the license conflict, the Free Software Foundation brought a lawsuit, the result of which forced Cisco to make the affected source code public. While the actual cost was never disclosed publicly, experts suggest it was substantial.

And this brings us back to a key point: LLM-based AI tools are only as good as the datasets they have been trained on. While AI-assisted coding tools can certainly help developers, at this stage of their evolution, they still require additional human oversight and software testing. For this use case, software composition analysis should be carried out on AI-generated code to identify any potential license conflicts requiring attention.
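As a highly simplified sketch of the kind of check that software composition analysis automates, a license gate might look something like the following. The flat “dependency,license” manifest and the license policy here are hypothetical; real SCA tools resolve transitive dependencies and draw on curated license databases.

import csv
import sys

# Licenses assumed to conflict with this hypothetical project's distribution model.
DISALLOWED = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}

def check_manifest(path: str) -> int:
    conflicts = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) != 2:
                continue  # skip blank or malformed lines
            name, license_name = row[0].strip(), row[1].strip()
            if license_name in DISALLOWED:
                conflicts.append((name, license_name))
    for name, license_name in conflicts:
        print(f"license conflict: {name} is distributed under {license_name}")
    return 1 if conflicts else 0  # non-zero exit fails the build

if __name__ == "__main__":
    sys.exit(check_manifest(sys.argv[1]))

Run against a manifest listing, say, “left-pad,MIT” and “somelib,GPL-3.0”, the script exits non-zero and blocks the build, which is the behaviour a real SCA gate provides with far more rigour.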

One additional issue that could well become the next major area of concern in this conversation involves code snippets. If an AI chatbot recommends a code snippet to implement a common function, the odds are good that the snippet will become widely used. Now, let’s say a vulnerability is later discovered in that snippet: it becomes a systemic risk across many code bases and organisations. As such, at this stage of the AI game, code written by chatbots should be treated the same as code written by humans. It should be run through a full suite of automated testing tools that help organisations identify and mitigate the compliance, security, and operational risks stemming from the adoption of these AI-assisted tools.
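As one illustration of how such automated gates can be wired into a pipeline, the sketch below is a deliberately crude pattern match standing in for a proper static analysis tool; the directory layout and heuristic are assumptions made for the example.

import pathlib
import re
import sys

SQL_KEYWORDS = re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b", re.IGNORECASE)

def looks_like_string_built_sql(line: str) -> bool:
    # Heuristic: a SQL keyword on a line that also concatenates or interpolates strings.
    risky = ("' +" in line) or ('" +' in line) or ("f'" in line) or ('f"' in line)
    return risky and bool(SQL_KEYWORDS.search(line))

def scan(root: str) -> int:
    findings = 0
    for path in pathlib.Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if looks_like_string_built_sql(line):
                print(f"{path}:{lineno}: possible string-built SQL")
                findings += 1
    return 1 if findings else 0  # non-zero exit fails the merge

if __name__ == "__main__":
    sys.exit(scan(sys.argv[1] if len(sys.argv) > 1 else "."))

A genuine pipeline would pair static analysis of this kind with unit tests, software composition analysis, and dynamic testing, applied to chatbot-written code exactly as it is to human-written code.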

And while it’s quite likely that new GPT versions will require less supervision over time, the question of trust remains unsettled, as AI opens up a vastly broader threat landscape: a new wild west. For this reason, it’s more critical now than ever before to maintain a proactive, defensive security strategy.

Phillip Ivancic is the Asia-Pacific head of solutions strategy at the Synopsys Software Integrity Group.
