Artificial intelligence can be a double-edged sword for programmers, according to a study by researchers at Stanford University. The study found that programmers who use AI systems to develop code are more likely to create applications vulnerable to security breaches.
The researchers found that programmers using OpenAI's Codex system were more likely to produce incorrect and insecure solutions than a control group. Worse, those same programmers rated their solutions as more secure than the control group rated theirs.
This isn’t the first study to reach this conclusion. A similar study conducted at New York University found that about 40% of applications created using an AI assistant called GitHub Copilot contained vulnerabilities.
Why are AI-assisted programmers more likely to create vulnerable code?
One possibility is that they rely too heavily on the AI system and review their own code less thoroughly as a result. Another is that the AI system itself is not fully reliable and does not always generate the most secure code.
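To make the second possibility concrete, here is a hypothetical sketch (not an example from the study) of the kind of flaw a code assistant can introduce: building an SQL query by string interpolation, which permits SQL injection, next to the parameterized version a careful reviewer would insist on.

```python
import sqlite3

# Insecure pattern an assistant might emit: interpolating user input
# directly into SQL allows injection.
def find_user_insecure(conn, username):
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# Safer equivalent: a parameterized query; the driver treats the
# input strictly as data, never as SQL.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"  # classic injection payload
print(len(find_user_insecure(conn, payload)))  # 2: every row leaks
print(len(find_user_safe(conn, payload)))      # 0: payload matches no name
```

Both functions look equally plausible at a glance, which is exactly why over-trusting generated code is risky: the vulnerable version works fine on benign input and only fails under attack.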
To address these issues, Stanford researchers suggest creating mechanisms for analyzing AI-generated code and strengthening the default security settings in cryptographic libraries. They also recommend caution when using AI code generation systems, especially for those with less experience.
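As an illustration of what "stronger defaults" can mean in practice, here is a hedged sketch (my example, not the researchers') contrasting a weak password-hashing habit with the salted, slow key derivation that Python's standard library already provides.

```python
import hashlib
import hmac
import os

# Weak pattern sometimes suggested by assistants: a fast, unsalted hash,
# trivial to crack with precomputed tables.
def hash_password_weak(password):
    return hashlib.md5(password.encode()).hexdigest()

# Safer default: per-password salt plus PBKDF2, a deliberately slow
# key-derivation function from the standard library.
def hash_password(password, iterations=600_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

A library whose default is the second pattern protects the inexperienced programmer the researchers worry about, even when the programmer (or the AI assistant) never thinks about salting or work factors.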
It’s important to note that AI can still be a valuable tool for programmers, but it’s crucial to use it responsibly rather than rely on it blindly. DeepMind’s AlphaCode recently outperformed roughly half of the human participants in a coding competition, so AI will likely become even more integral to the software development process as it advances. However, AI is still just a tool, and ultimately the responsibility for the security and reliability of the code falls on the programmers themselves.