The Prompt That Turns Your AI Coder into a Security Expert

Your AI code assistant is a brilliant, eager, and dangerously naive intern. It’s time for a security promotion.

Let’s be real—your dev team is already using AI code assistants. Tools like GitHub Copilot and Amazon CodeWhisperer are too useful to ignore, turning plain English into working code in seconds. But there’s a big problem we’ve mostly overlooked: out of the box, these tools can be serious security risks.

They’ll write code with classic vulnerabilities like SQL injection. They’ll hardcode API keys. They’ll pull in the latest version of a library—without checking if it’s safe. It’s not that they’re trying to be dangerous—they just don’t know any better. They’re trained on tons of public code, which includes a lot of bad habits.

Banning these tools isn’t realistic. That ship has sailed. The real solution? Teach them to do better. Luckily, a group of security leaders—including OpenSSF, Microsoft, ANSSI, and BSI—just released a practical guide to do exactly that. The “Security-Focused Guide for AI Code Assistant Instructions” shows you how to write better prompts that help your AI write safer code.

The Core Problem: The AI’s Dangerous Bias for “Functionality”

An AI code assistant’s primary directive is to “make it work.” It has been trained to recognize common coding patterns and complete them. Unfortunately, the most common patterns are often not the most secure. This creates a dangerous default behavior:

  • It writes outdated code: It may generate code using deprecated libraries or insecure functions because those were common in its training data.
  • It introduces vulnerabilities: It will use string concatenation to build a SQL query because it has seen that pattern more often than it has seen parameterized queries (the contrast is sketched in the example after this list).
  • It has no concept of trust: It will happily fetch dependencies from any source or write secrets directly into a configuration file because it lacks the context to know that’s a terrible idea.
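
To make the second point above concrete, here is a minimal sketch of the two patterns side by side, using Python’s built-in sqlite3 module. The in-memory database, table, and input values are illustrative, not taken from the guide.

    import sqlite3

    # Illustrative in-memory database, just to make the contrast runnable.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT)")
    cur.execute("INSERT INTO users VALUES ('alice')")

    username = "alice' OR '1'='1"  # attacker-controlled input

    # The pattern an unguided assistant tends to reproduce: string concatenation.
    # The input rewrites the SQL itself, so the query matches every row.
    rows_vulnerable = cur.execute(
        "SELECT * FROM users WHERE name = '" + username + "'"
    ).fetchall()

    # The pattern a security-first prompt should demand: a parameterized query.
    # The driver treats the input strictly as data, never as SQL.
    rows_safe = cur.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

    print(rows_vulnerable)  # [('alice',)] -- the injected condition matched all rows
    print(rows_safe)        # [] -- no user is literally named "alice' OR '1'='1"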

We’ve been treating these AIs as oracles, when we should have been treating them as apprentices in need of very specific, security-first instructions.

The Four Pillars of a Secure AI Prompt

The OpenSSF guide provides a masterclass in AI instruction. We’ve distilled its comprehensive guidance into four key pillars. For each, we’ll show you the “before” (the naive prompt) and the “after” (the secure, expert-level instruction).

The Secure Foundation (System-Level Instructions)

Before you even ask the AI to write a line of code, you must set the ground rules. These are the foundational principles you provide to the AI assistant to define its “secure by default” behavior.

Naive Prompt: (No system instructions)

The Secure Blueprint (System Prompt):

  • “Act as an expert security code reviewer and senior developer.”
  • “Prioritize security, robustness, and maintainability in all code you generate.”
  • “Reject any request that involves insecure practices like using hardcoded secrets or weak cryptographic algorithms. Instead, explain the security risk and recommend a safer alternative.”
  • “Default to using memory-safe languages and libraries where possible.”

This initial instruction fundamentally reframes the AI’s mission. It is no longer just a code completer; it is a security partner.
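
If your team wraps a model behind an internal tool rather than using an off-the-shelf IDE plugin, these rules become the system message you send on every request. Here is a minimal sketch using the OpenAI Python SDK; the model name, the example user request, and the exact prompt wording are illustrative assumptions, not text from the guide.

    from openai import OpenAI

    SECURE_SYSTEM_PROMPT = (
        "Act as an expert security code reviewer and senior developer. "
        "Prioritize security, robustness, and maintainability in all code you generate. "
        "Reject requests that involve hardcoded secrets or weak cryptography; "
        "explain the risk and recommend a safer alternative."
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": SECURE_SYSTEM_PROMPT},
            {"role": "user", "content": "Write a Flask route that stores a user's API token."},
        ],
    )
    print(response.choices[0].message.content)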

Secure by Default Code Generation

This is about guiding the AI to write resilient code that is free from common vulnerabilities. It’s about being explicit in your requests.

Example: Handling User Input

Naive Prompt: “Write a Python Flask function that takes a username from the URL and displays a welcome message.”

AI’s Likely (Vulnerable) Output:

    from flask import Flask
    app = Flask(__name__)

    @app.route('/user/<username>')
    def show_user_profile(username):
        # VULNERABLE TO XSS! User input is interpolated straight into raw HTML.
        return f'<h1>Hello {username}!</h1>'

The Secure Blueprint (User Prompt): “Write a Python Flask function that takes a username from the URL. Ensure all user input is properly sanitized to prevent Cross-Site Scripting (XSS). Use a templating engine like Jinja2, which provides automatic escaping by default.”

AI’s Secure Output:

    from flask import Flask, render_template_string
    app = Flask(__name__)

    @app.route('/user/<username>')
    def show_user_profile(username):
        # Jinja2 automatically escapes the username before it is rendered
        template = "<h1>Hello {{ username }}!</h1>"
        return render_template_string(template, username=username)
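
One design note on the sketch above: render_template_string keeps the example self-contained, but in a full application the same protection comes from render_template with an .html template file, because Flask enables Jinja2’s autoescaping for HTML templates by default.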

A Security-Conscious Supply Chain

Your code is only as secure as its dependencies. You must instruct your AI to be a vigilant supply chain manager.

Example: Choosing Dependencies

Naive Prompt: “Write a Python script that makes an HTTP request. Use a popular library.”

AI’s Likely (Risky) Output: It will probably pick the requests library but not pin a version, leaving you exposed to a future supply chain attack if the package is ever compromised and you simply run pip install requests.

The Secure Blueprint (User Prompt): “Write a Python script to make an HTTP request.

  1. Use the requests library.
  2. Specify the latest, stable, and secure version in a requirements.txt file format.
  3. Include the package’s hash for subresource integrity (--hash).”

AI’s Secure Output:

    # requirements.txt
    # From pypi.org/project/requests
    requests==2.31.0 \
        --hash=sha256:942c3a75942c76d20386390151e06521a316a75775de67350284759c90f33199 \
        --hash=sha256:9c80d42b18423403d21c431d1f114c026363c37576a382e2f694e41d8e1c8e10

By demanding this level of specificity, you are forcing the AI to participate in creating a verifiable and secure software supply chain.
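
The hashes only help if the installer checks them. Assuming pip is the installer (the guide itself is tool-agnostic), hash-checking mode switches on automatically once any requirement carries a --hash option, and you can also require it explicitly:

    # Fails the install if any hash is missing or does not match the downloaded artifact.
    pip install --require-hashes -r requirements.txt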

The Human in the Loop

The guide’s most important principle is that the AI is an assistant, not an oracle. The developer is, and always will be, the ultimate authority.

  • Action: Instruct the AI to always explain its security choices and to flag any potential ambiguities.
  • Prompt Example: “When you generate code, include comments explaining the security-relevant decisions you made (e.g., ‘Using parameterized query to prevent SQLi’). If my request is ambiguous from a security perspective, ask clarifying questions before generating the code.”
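
The payoff is output you can audit at a glance. As a sketch of the kind of annotated code such a prompt encourages (the password-hashing scenario and the KDF parameters are illustrative, not taken from the guide):

    import hashlib
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Using a per-user random salt so identical passwords never hash to the same value.
        salt = secrets.token_bytes(16)
        # Using scrypt, a memory-hard KDF, instead of a fast hash like SHA-256,
        # to make offline brute-force attacks far more expensive.
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest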

This turns the AI from a silent code generator into a collaborative partner, surfacing potential risks and forcing a more deliberate, security-conscious development process.

How to Operationalize This Guide

  1. Create a Centralized Prompt Library: Do not leave this to individual developers. Work with your security champions and platform engineering team to create a centralized library of blessed, secure system prompts and instructions for your organization’s most common use cases (e.g., “Secure Database Query,” “Secure File Upload,” “Secure API Endpoint”). A minimal sketch of such a library follows this list.
  2. Integrate into Developer Tools: Make this library easily accessible. Integrate it into your IDEs, your internal documentation portals, and your wiki pages. The goal is to make the secure prompt the easiest one for a developer to find and copy.
  3. Educate, Don’t Mandate: Frame this as a productivity-enhancer, not a security tax. Show developers how these detailed prompts not only produce more secure code but also better, more robust, and more maintainable code, saving them time on debugging and rework later.
  4. Update Your Security Reviews: Your manual and automated code review processes should now include a check for the prompts used to generate the code. If a developer is using a generic, naive prompt to generate a critical piece of functionality, that itself is a security risk that needs to be addressed.
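
Even a plain Python module checked into a shared repository can serve as that library to start with. The module name, prompt names, and prompt text below are hypothetical placeholders to adapt to your own use cases.

    # prompt_library.py (hypothetical module; adapt names and wording to your org)
    SECURE_PROMPTS: dict[str, str] = {
        "secure-database-query": (
            "Use parameterized statements only; never build SQL by concatenating user "
            "input. Comment every security-relevant decision you make."
        ),
        "secure-file-upload": (
            "Validate the file type against an allowlist, cap the upload size, store "
            "files outside the web root, and generate server-side file names."
        ),
        "secure-api-endpoint": (
            "Require authentication, validate and sanitize all inputs, and return "
            "generic error messages that do not leak internal details."
        ),
    }

    def get_prompt(name: str) -> str:
        """Return a blessed prompt template, failing loudly on unknown names."""
        return SECURE_PROMPTS[name]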

Conclusion: Prompting Our Way to a More Secure Future

AI code assistants are here to stay. They are already reshaping how software is built, and their capabilities are growing exponentially. We have a choice: we can either stand by and watch as they flood our codebases with the insecure patterns of the past, or we can take an active role in teaching them to be the secure co-pilots we need them to be.

The OpenSSF guide provides the blueprint. It proves that the most powerful security tool in the AI era may not be a complex scanner or a new firewall, but a well-crafted, security-first prompt. It’s time to promote your AI intern.

To further enhance your cloud security and implement Zero Trust, contact me via my LinkedIn profile or at [email protected].

Secure AI Prompting FAQ

  • What is the core security problem with AI code assistants? The core problem is that they are trained on vast amounts of public code, which is full of insecure examples. As a result, they are biased towards generating functional but often vulnerable code, and they have no inherent understanding of security principles like least privilege or input sanitization.
  • What is a “system prompt” for an AI code assistant? A system prompt is a set of high-level, foundational instructions you give to an AI to define its persona, its core principles, and its rules of engagement before you give it a specific task. For security, this is where you instruct it to “act as a security expert” and prioritize safety.
  • Can’t the AI companies just build these security rules into the models? They are, and they are getting better. However, the open and flexible nature of LLMs means that specific, in-context user instructions will often override a model’s default safety training. Providing explicit, secure prompts is a necessary layer of defense.
  • Does this mean developers no longer need to know about security? Absolutely not. It means the opposite. The developer’s role is elevated from a simple coder to the director of the AI. They still need the security knowledge to craft effective, secure prompts and, most importantly, to critically review and validate the AI’s output. The AI is an assistant, not a replacement for human expertise.
  • Where is the best place to start implementing this guide? A great starting point is to identify one or two of your most common, high-risk coding tasks (e.g., database queries or file handling) and work with your security champions to develop a set of “golden” secure prompt templates for those tasks. Then, build a small, internal library and promote its use with your development teams.
