AI code security

How to ensure that AI-generated code is secure and doesn’t make your organization more vulnerable.

AI-generated code isn’t inherently more or less secure than human-written code, but the speed it enables can introduce unexpected risks.

As the amount of code generated by artificial intelligence (AI) increases, organizations need robust defenses in place to ensure that this code does not lead to breaches down the line. By prioritizing effective security at runtime and defense in production, companies can safeguard their applications no matter who or what generated the code or how much their application layer grows. In today’s AI-first world, Contrast Application Detection and Response (ADR) provides the defense that organizations need to safeguard the application layer effectively.

What is AI-generated code?

AI-generated code refers to computer code produced by AI models, including large language models (LLMs) and other AI systems. These models have become increasingly adept at generating code.

AI coding tools like GitHub Copilot, Gemini Code Assist and Claude provide programming assistance. These tools are trained on vast datasets of existing source code and software applications. Developers find these assistants useful for tasks like quickly scaffolding projects or generating routine code, as they can be significant time savers.

AI-assisted coding is becoming an increasingly common practice because it boosts efficiency and productivity. Its rapid adoption is also accelerating code development and release velocity.

How does AI code generation work in development?

Developers and software engineers interact with AI coding tools by providing instructions or prompts. For example, a developer might tell an AI co-pilot to create a specific type of application with certain features, like an e-commerce application with a shopping cart that performs particular functions.

The AI then generates the code based on these instructions and its training data. It can handle a variety of tasks, including backend functions like setting up databases or logging, drawing on its training to produce code using what it "knows" to be the best or most common methods.
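To make this concrete, the snippet below is a minimal sketch of the kind of routine scaffolding an assistant might produce for a prompt such as "set up logging and a SQLite database for a small shop app." The module, file and table names are invented for illustration, not taken from any particular tool's output.

    import logging
    import sqlite3

    # Hypothetical assistant-generated scaffolding: basic logging configuration
    # plus a minimal SQLite table, the sort of boilerplate a short prompt might yield.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    logger = logging.getLogger("shop")

    def init_db(path: str = "shop.db") -> sqlite3.Connection:
        conn = sqlite3.connect(path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS carts ("
            "id INTEGER PRIMARY KEY, user_id INTEGER, item TEXT)"
        )
        conn.commit()
        logger.info("Database initialized at %s", path)
        return conn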

Developers use these tools for programming assistance because they save time and free them to focus on more interesting tasks. As noted above, this is an increasingly common practice that boosts efficiency and productivity, and the rapid adoption of LLMs is accelerating code development and release velocity.

What are the cybersecurity risks of AI-generated code?

There are three primary risks associated with AI-generated code:

  1. While AI-generated code is not inherently less secure than human-written code, AI coding tools can introduce business logic vulnerabilities, which are not straightforward to detect and require complete visibility into the entire codebase to find and remediate.
  2. The AI models themselves can be vulnerable to attack and manipulation. LLMs can theoretically be "poisoned" in a supply chain attack to introduce vulnerabilities. They are also prone to occasional "hallucinations" that can produce insecure patterns and spread them widely across an organization (see the sketch after this list).
  3. Even if code developed by LLMs were just as secure as human-written code, these tools increase an organization's overall attack surface by making it easier to write and ship more code. The increased speed and volume of delivery can yield the same net number of vulnerabilities, or even amplify them, because security teams struggle to keep pace. This accelerating velocity introduces significant Application Security (AppSec) challenges.
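As a simplified sketch of the insecure pattern problem, consider a hypothetical user-lookup helper that an assistant might suggest. The first version builds its SQL query with string formatting, a classic injection flaw that can quietly spread if the pattern is copied across services; the second uses a parameterized query instead. The function and table names are invented for this example.

    import sqlite3

    # Hypothetical assistant-suggested helper: builds SQL by string formatting,
    # which allows SQL injection (e.g., username = "' OR '1'='1").
    def find_user_insecure(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchone()

    # Safer equivalent: a parameterized query keeps user input out of the SQL text.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, username FROM users WHERE username = ?", (username,)
        ).fetchone()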

Challenges of securing AI-generated code

The use of AI-generated code in development, while offering significant productivity gains, presents several cybersecurity challenges that organizations must address.

As noted previously, the main challenges of securing AI code include the generation of insecure code and vulnerabilities, an increased volume of output that expands both the number of vulnerabilities and the attack surface, and risks related to the LLMs themselves (e.g., if the LLM itself were compromised).

What is AppSec for AI code? What are the key aspects of AppSec for AI code?

Securing AI code requires adapting AppSec practices, particularly to address the unique challenges introduced by AI-generated code, including a dramatic increase in the amount of code shipped. AppSec teams are often heavily outnumbered by developers and their AI code assistants and already deal with thousands of vulnerabilities. The increased volume from AI code adds to this burden. 

Traditional security approaches, particularly pre-production scanning tools, are proving inadequate in this new landscape: they struggle to identify runtime-specific vulnerabilities, lack the context needed for prioritization and cannot keep up with the speed of deployment. The result is alert fatigue and excessive false positives that security operations centers (SOCs) struggle to manage. Traditional perimeter tools like web application firewalls (WAFs) and Endpoint Detection and Response (EDR) also have limitations and blind spots at the application layer, missing attacks that occur there.
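A minimal sketch of why such flaws evade perimeter and signature-based tools: the hypothetical endpoint below returns any order whose ID the client supplies, without checking that the order belongs to the authenticated user. Every request to it is syntactically legitimate, so a WAF has nothing to match on; the broken authorization only shows up as application behavior at runtime. The framework, route and data here are assumptions made for illustration.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Toy in-memory data store for the example.
    ORDERS = {1: {"owner": "alice", "total": 42}, 2: {"owner": "bob", "total": 99}}

    # Broken object-level authorization: any caller can fetch any order simply by
    # guessing its ID. The HTTP traffic looks benign, so signature-based perimeter
    # tools see nothing to block; the flaw is visible only in how the running
    # application behaves.
    @app.route("/orders/<int:order_id>")
    def get_order(order_id: int):
        order = ORDERS.get(order_id)
        if order is None:
            return jsonify({"error": "not found"}), 404
        # Missing check: does this order belong to the authenticated user?
        return jsonify(order)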

The key aspects of AppSec for AI code include:

  • Addressing inherent code quality and vulnerabilities
  • Keeping pace with increased speed and volume
  • Enhancing developer oversight and training
  • Adapting security practices and tooling
  • Adopting runtime AppSec (i.e., Application Detection and Response [ADR])
  • Managing supply chain and model risks
  • Establishing policy and governance, including developing a shared responsibility model

Benefits of using Contrast ADR to reduce AI code risk

ADR is an emerging cybersecurity category uniquely positioned to address the challenges posed by AI-generated code. ADR tools focus on detecting and mitigating threats within the application layer, providing deep insight into the runtime behavior of applications. Key capabilities of ADR in this context include:

  • Providing deep runtime visibility into applications, regardless of whether the code was human or AI-generated.
  • Detecting and blocking application attacks in real time, including zero-day threats originating from AI-assisted development flaws. ADR analyzes and acts on application behavior at runtime, going beyond static signatures to detect live threats.
  • Prioritizing by runtime context, identifying vulnerabilities that are actively exploited in production environments. This reduces alert fatigue and allows teams to focus on real threats rather than theoretical risks.
  • Delivering actionable intelligence to SOC teams and providing guided runbooks for incident response.
  • Integrating with existing security ecosystems/solutions to provide continuous security and bridge gaps between siloed tools.

No matter how your organization develops software, Contrast ADR is ideally suited for protecting web applications in production environments. 

See Contrast ADR for yourself