> But now, advanced models like ChatGPT o3 or Gemini 2.5 Pro are very good at finding vulnerabilities in code, even though they don’t advertise themselves as “security tools.” Without even trying, they’ve become better at security than many tools made by security companies.
Yeah, but its also CREATING far more vulnerable code at a far faster pace than we did before, and my experience with AI-assisted coding is that they largely fail at finding their own bugs and mistakes.
Also, in terms of testing, my experience with AI's testing code is that they write a lot of easy superficial tests and when they fail, they tend to patch over it (ie change the tests until they pass, rather than actually fixing the bugs causing the failures -- or worse: mark the tests as "skip").
So I don't buy it. I think that AI is causing (and will continue to for at least the next few years) more vulnerable code, not less, and it does it at a faster pace than we were doing before.
> Less Vulnerable Code
> But now, advanced models like ChatGPT o3 or Gemini 2.5 Pro are very good at finding vulnerabilities in code, even though they don’t advertise themselves as “security tools.” Without even trying, they’ve become better at security than many tools made by security companies.
Yeah, but its also CREATING far more vulnerable code at a far faster pace than we did before, and my experience with AI-assisted coding is that they largely fail at finding their own bugs and mistakes.
Also, in terms of testing, my experience with AI's testing code is that they write a lot of easy superficial tests and when they fail, they tend to patch over it (ie change the tests until they pass, rather than actually fixing the bugs causing the failures -- or worse: mark the tests as "skip").
So I don't buy it. I think that AI is causing (and will continue to for at least the next few years) more vulnerable code, not less, and it does it at a faster pace than we were doing before.