13 Sep
GitHub Copilot and other AI coding tools have transformed how we write code and promise a leap in developer productivity. But they also introduce new security risks. If your codebase has existing security issues, AI-generated code can replicate and amplify those vulnerabilities. Research from Stanford University has shown that developers using AI coding tools write significantly less secure code, which in turn increases the likelihood that they ship insecure applications. In this article, I’ll share the perspective of a security-minded software developer and examine how code generated by large language models (LLMs) can lead to security flaws.…
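
To make the “replicate and amplify” point concrete, here is a minimal, hypothetical sketch (the table and function names are invented for illustration, not taken from any real codebase): if a repository already builds SQL queries by concatenating user input into strings, a completion model prompted with that surrounding code will tend to suggest more of the same, propagating the injection risk. The second function shows the parameterized pattern you would want suggested instead.

```python
import sqlite3


def find_user_insecure(conn: sqlite3.Connection, username: str):
    # The kind of pattern an AI assistant may reproduce if it already appears
    # in the codebase: user input concatenated directly into the SQL string,
    # which is vulnerable to SQL injection.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()


def find_user_safer(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately from the
    # SQL text, so the input cannot change the structure of the query.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```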