AI-generated code isn’t inherently more or less secure than human-written code, but the speed it enables can introduce unexpected risks.
As the amount of code generated by artificial intelligence (AI) increases, organizations need robust defenses in place to ensure that this code does not lead to breaches down the line. By prioritizing effective security at runtime and in production, companies can safeguard their applications no matter who (or what) wrote the code or how much the application layer grows. In today’s AI-first world, Contrast Application Detection and Response (ADR) provides the defense that organizations need to protect the application layer effectively.
AI-generated code refers to computer code produced by AI models, including large language models (LLMs) and other AI systems. These models have become increasingly adept at generating code.
AI coding tools like GitHub Copilot, Gemini Code Assist and Claude provide programming assistance. Trained on vast datasets of software code, these assistants are useful for tasks such as quickly scaffolding projects or generating routine code, making them significant time savers.
Using AI for coding is an increasingly common practice because it boosts efficiency and productivity, and its rapid adoption is accelerating code development and release velocity.
Developers and software engineers interact with AI coding tools by providing instructions or prompts. For example, a developer might tell an AI co-pilot to create a specific type of application with certain features, like an e-commerce application with a shopping cart that performs particular functions.
The AI then generates the code based on these instructions and its training data. It can handle a range of tasks, including backend functions like setting up databases or logging, drawing on its training to use what it "knows" to be the best or most common approaches.
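For illustration only, the snippet below sketches the kind of routine scaffolding an assistant might produce from a prompt like "add a shopping-cart endpoint with a database and logging." The endpoint, table and names are hypothetical, not the output of any particular tool.

```python
# Hypothetical example of assistant-style scaffolding: a Flask endpoint
# that writes a cart item to a database and logs the action.
import logging
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("cart")


def get_db():
    """Open a connection to the cart database (assumes cart_items exists)."""
    return sqlite3.connect("cart.db")


@app.route("/cart/items", methods=["POST"])
def add_item():
    payload = request.get_json()
    with get_db() as db:
        db.execute(
            "INSERT INTO cart_items (sku, quantity) VALUES (?, ?)",
            (payload["sku"], payload["quantity"]),
        )
    logger.info("Added item %s to cart", payload["sku"])
    return jsonify({"status": "added"}), 201
```

Output like this is usually workable, but it is only as secure as the patterns the model learned from its training data.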
Developers use these tools for programming assistance, which saves time and lets them focus on more interesting work. This is also why AI coding is spreading so quickly: the rapid adoption of LLMs accelerates code development and release velocity.
There are three primary risks associated with AI-generated code: the generation of insecure code and vulnerabilities; an increased volume of code that expands the number of vulnerabilities and the attack surface; and risks related to the LLMs themselves, such as a compromised model.
The use of AI-generated code in development, while offering significant productivity gains, presents several cybersecurity challenges that organizations must address.
As noted previously, the main challenges of securing AI code include the generation of insecure code and vulnerabilities, increased output that expands the number of vulnerabilities and the attack surface, and risks related to the LLMs themselves (e.g., if the LLM itself were compromised).
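The first risk is easiest to see with a small, hypothetical example: the same lookup written the way much training data writes it (string-built SQL) versus the parameterized form. Neither function is taken from any real assistant's output.

```python
# Hypothetical illustration of the first risk: code that "works" but is
# injectable, because string-built SQL is common in training data.
import sqlite3


def find_user_insecure(db: sqlite3.Connection, username: str):
    # Vulnerable: the username is concatenated straight into the query,
    # so input like "' OR '1'='1" changes the query's meaning.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return db.execute(query).fetchall()


def find_user_safe(db: sqlite3.Connection, username: str):
    # Safer: a parameterized query keeps user input out of the SQL text.
    query = "SELECT id, email FROM users WHERE username = ?"
    return db.execute(query, (username,)).fetchall()
```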
Securing AI code requires adapting AppSec practices to the unique challenges AI-generated code introduces, including a dramatic increase in the amount of code shipped. AppSec teams are often heavily outnumbered by developers and their AI code assistants, and they already deal with thousands of vulnerabilities; the increased volume of AI code adds to this burden.
Traditional security approaches, particularly pre-production scanning tools, are proving inadequate in this new landscape: they struggle to identify runtime-specific vulnerabilities, lack the context needed for prioritization and cannot keep up with the speed of deployment. The result is alert fatigue and excessive false positives that security operations centers (SOCs) struggle to manage. Traditional perimeter tools like web application firewalls (WAFs) and Endpoint Detection and Response (EDR) also have limitations and blind spots at the application layer, missing attacks that occur there.
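A small, hypothetical Python example of why runtime context matters: a pre-production scanner can flag the dangerous sink below, but whether it is actually exploitable, and how urgently it should be fixed, depends on where the data comes from at runtime.

```python
# Hypothetical illustration: the risk of this sink depends entirely on
# where `document` originates at runtime, which a scan of this file alone
# cannot determine.
import yaml


def parse_config(document: str) -> dict:
    # yaml.safe_load refuses to construct arbitrary Python objects, so this
    # is reasonable even if `document` were attacker-controlled ...
    return yaml.safe_load(document)


def parse_legacy(document: str) -> dict:
    # ... whereas yaml.unsafe_load can instantiate arbitrary objects.
    # If `document` only ever comes from an operator-controlled file, the
    # risk is low; if it is ever fed from an HTTP request, this becomes a
    # remote-code-execution path. Only runtime observation shows which.
    return yaml.unsafe_load(document)
```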
The key aspects of AppSec for AI code include:
ADR is an emerging cybersecurity category uniquely positioned to address the challenges posed by AI-generated code. ADR tools focus on detecting and mitigating threats within the application layer, providing deep insight into the runtime behavior of applications. Key capabilities of ADR in this context include:
No matter how your organization develops software, Contrast ADR is ideally suited for protecting web applications in production environments.
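To make the runtime-detection idea concrete, here is a toy sketch of instrumenting an application-layer sink so the fully assembled query can be inspected at the moment it executes. It is a conceptual illustration only (the class, regex and policy are invented for this example) and does not represent Contrast ADR's actual mechanism or APIs.

```python
# Toy sketch of application-layer runtime detection: wrap a sensitive sink
# (here, SQL execution) and inspect each query as it runs.
import logging
import re
import sqlite3

logger = logging.getLogger("adr_sketch")

# Crude signatures for demonstration only; real detection would rely on
# richer runtime context than regexes.
SUSPICIOUS = re.compile(r"('\s*OR\s*'1'\s*=\s*'1)|(;\s*DROP\s+TABLE)", re.IGNORECASE)


class MonitoredConnection:
    """Wraps a database connection and inspects each query before it executes."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def execute(self, sql: str, params=()):
        if SUSPICIOUS.search(sql):
            # An ADR-style tool would raise a detection event with full
            # application context; here we simply log and block.
            logger.warning("Blocked suspicious query at runtime: %r", sql)
            raise PermissionError("query blocked by runtime policy")
        return self._conn.execute(sql, params)
```

A real ADR tool works with far richer signals than a regex, but the vantage point is the important part: inside the running application, at the sink, where the attack actually lands.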