Programming with LLMs
Programming with LLMs means using AI assistance such as ChatGPT or GitHub Copilot while coding. These tools act as a kind of pair programmer: they can suggest code, explain existing code, and help detect errors.
In the context of security, this skill is important because specialists often need to develop custom security tools, scripts for vulnerability testing, or secure code implementations. LLMs can help with generating secure code patterns, identifying potential vulnerabilities, and suggesting security best practices, while giving you more time to focus on critical security analysis and design.
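As an illustration of the "secure code patterns" an LLM can suggest: when asked to review database code, a model will typically flag string concatenation and propose parameterized queries instead. A minimal Python sketch (the table and column names are made up for the example):

```python
import sqlite3

# Illustrative in-memory database (schema and data are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable pattern: user input is concatenated into the SQL string.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Secure pattern an LLM would suggest: a parameterized query,
    # so the driver treats the input as data, never as SQL.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload shows the difference:
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks all rows
print(find_user_safe(payload))    # returns no rows
```

Even with a suggestion like this, the Key Points below still apply: you remain responsible for verifying that the generated pattern is actually applied everywhere user input reaches a query.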
Starting Points
- ChatGPT - a good starting point is asking a language model to help implement security controls
- Use agent mode in VS Code - a guide to using LLMs in a popular IDE
- OWASP AI Security and Privacy Guide - guidance on responsible AI use in security
Key Points
- You use appropriate tools and deploy them efficiently for security implementation tasks.
- You validate AI-suggested code for security flaws before integrating it into the codebase.
- You can clearly explain what the generated security code does and why this implementation meets security requirements.
- There is evidence that you have critically reviewed, tested, and performed security validation on the AI output.
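One way to produce that evidence is to write tests that probe the security property of AI-suggested code, not just its happy path. A hedged sketch: suppose an LLM generated the token-comparison helper below, and your review replaced a plain `==` with a constant-time comparison (the helper and its name are hypothetical):

```python
import hmac

def tokens_match(provided: str, expected: str) -> bool:
    # Hypothetical AI-suggested helper, fixed during review: a plain `==`
    # can leak timing information, so we use a constant-time comparison.
    return hmac.compare_digest(provided.encode(), expected.encode())

# Security validation: check matches, near-misses, and empty input.
assert tokens_match("s3cret-token", "s3cret-token")
assert not tokens_match("s3cret-tokem", "s3cret-token")
assert not tokens_match("", "s3cret-token")
print("security checks passed")
```

Keeping tests like these in the repository alongside the generated code documents that the output was reviewed and validated rather than accepted blindly.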