The illusion of trust in AI-generated code

The double-edged sword of coding with generative AI


News | By Yossi Pik | Published 10 October 2024


The adoption of GPT-4 and other generative AI (GenAI) models in the software development community has been swift. They offer astounding benefits, but their allure can distract developers from the reality that this technology is not infallible. When due diligence is neglected, code generated from innocent developer prompts can inadvertently introduce security vulnerabilities. For that reason, it’s crucial to highlight the limitations of GenAI models as coding tools, why they create a false sense of trust, and the dangers that result when AI-generated code is not vetted.

Yossi Pik, co-founder and CTO of Backslash Security.

The double-edged sword of coding with generative AI

Generative AI can dramatically accelerate code development, offering developers unprecedented efficiency and capability. However, it also introduces significant security risks.

To understand how inadvertent security vulnerabilities may find their way into a developer's code, we need to cover typical GenAI use cases in software development. For day-to-day tasks, developers query GenAI models to identify code libraries and receive open-source software (OSS) package recommendations to help solve coding challenges.

For such queries, whether for Java, Python, or JavaScript/TypeScript, a common thread emerges: GenAI results are inconsistent. This inconsistency creates a false sense of security because, sooner or later, one of those varying results that a developer accepts will contain insecure code.
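To illustrate the kind of subtly insecure code in question, here is a minimal, hypothetical example (not taken from the article): an assistant-style snippet that builds a SQL query with string interpolation, alongside the parameterized form a careful review would insist on.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks correct and passes a quick glance, but attacker-controlled input
    # is interpolated directly into the SQL statement (SQL injection risk).
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The same query with a bound parameter, which the database driver
    # escapes safely regardless of what the user supplies.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```

Both versions return the same results for benign input, which is exactly why the flawed one is easy to wave through.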

Further compounding this risk, recently published Stanford University research concluded that prolonged use of GenAI can gradually erode a developer’s inclination to validate generated code thoroughly, without them realizing how often recommendations contain embedded risks. This misplaced trust can lead to the integration of insecure code snippets, ultimately compromising the application's overall security.

How generative AI can introduce code vulnerabilities

Warning signs of potentially insecure code in AI-generated recommendations come in several forms; the most common include:

Outdated OSS Packages: Due diligence on suspicious OSS packages recommended by GPT-4 often reveals that they are outdated, meaning the suggested versions may carry known, already-patched vulnerabilities. The static datasets used to train LLMs are usually the culprit here, and a quick check against a vulnerability database, as sketched below, can catch these cases.
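As a minimal sketch of that check (not part of the article), the snippet below queries the public OSV vulnerability database for a specific package version before it is added to a project. The function name and the package/version used in the example are illustrative placeholders.

```python
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV API for advisories affecting a specific package version."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # OSV returns an empty object when no advisories match.
        return json.loads(response.read()).get("vulns", [])

# Hypothetical example: an assistant suggests pinning an older library version.
advisories = known_vulnerabilities("requests", "2.19.1")
print(f"{len(advisories)} known advisories for requests 2.19.1")
```

Running a check like this against every AI-suggested dependency is a cheap way to compensate for the stale training data behind the recommendation.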
