Now that I might have made you a little more capable with AI, I need to point out one common negative pattern I'm seeing that I'm sure isn't unique to my company.
Just because you can do something with AI doesn't mean you should. What I mean by that: I see people who are completely ignorant of basic security best practices. Do not assume AI will build you a secure website, or that it will put access controls in place that keep the outside world away from your company's internal data.
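To make that concrete, here's a minimal sketch of the kind of endpoint I see vibe-coded tools ship, next to a version that at least checks credentials first. Everything here is hypothetical (the route names, the data, the token scheme); it's an illustration of the mistake, not a real implementation.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Stand-in data and token, illustrative only. A real system should use
# your company's actual auth (SSO, OIDC, etc.), never a hardcoded token.
EMPLOYEE_RECORDS = {"42": {"name": "Alice", "salary": 185000}}
VALID_TOKENS = {"replace-with-real-auth"}

@app.route("/internal/employees/<emp_id>")
def insecure_lookup(emp_id):
    # What vibe-coded tools often ship: it "works", but anyone who can
    # reach this URL can read internal records. No auth check at all.
    return jsonify(EMPLOYEE_RECORDS.get(emp_id, {}))

@app.route("/internal/v2/employees/<emp_id>")
def guarded_lookup(emp_id):
    # Minimal improvement: require a bearer token before returning data.
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    if token not in VALID_TOKENS:
        abort(401)
    record = EMPLOYEE_RECORDS.get(emp_id)
    if record is None:
        abort(404)
    return jsonify(record)
```

Both versions run, both return data, and only one of them is something you'd want on a network. That's the trap: the insecure one looks finished.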
Don't assume AI knows which third-party plugins are safe, or that it should pull dependencies from your internal repos instead of downloading random packages off the web.
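If your company runs an internal package index, you generally have to point the tooling at it yourself; the model won't do it for you. As a sketch, assuming a hypothetical internal mirror at pypi.internal.example.com, a pip.conf like this keeps Python installs off the public internet:

```ini
# pip.conf -- index-url is a real pip option; the mirror URL below is
# a hypothetical placeholder for your company's internal index.
[global]
index-url = https://pypi.internal.example.com/simple
```

The same idea applies to other ecosystems (npm's .npmrc registry setting, Maven mirrors, and so on): where dependencies come from is a policy decision, not something to leave to whatever the model defaults to.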
I can almost guarantee that, at any large company in the near future, you will hear about someone who used AI to vibe-code a solution with zero security and ended up causing a massive data breach.
AI can already take any random person with an idea and, with enough persistence, build them an app, a website, or some random tool. Every vibe-coded solution I've seen has had massive security gaps.
Here's the real problem: the security tools we have today to scan, identify, and flag insecure configurations are being completely outpaced by AI's ability to turn any random person into a developer. We're generating insecure code faster than our security tooling can catch it, and that gap is only getting wider.
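For a sense of what that tooling looks like in practice: Bandit is one widely used static analyzer for Python, and running it takes two commands. The catch is that it only flags the patterns it has rules for, which is exactly the gap I'm describing.

```
# Bandit is a real, commonly used Python security linter.
pip install bandit
bandit -r ./my_project   # recursively scan and flag known-insecure patterns
```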
If you don't understand the basics of what makes something secure in your domain, AI isn't going to magically fix that for you. It will happily generate code that works but is completely insecure. You still need to know enough to ask the right questions and validate what it's giving you.
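As one concrete example of "works but completely insecure": here's a hedged sketch, using Python's built-in sqlite3, of the string-concatenated query pattern AI tools frequently produce, next to the parameterized version you need to know enough to ask for.

```python
import sqlite3

# Tiny in-memory database as stand-in data for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_insecure(name):
    # Typical AI output: works for normal input, but a name like
    # "x' OR '1'='1" rewrites the query and matches every row.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping, so attacker
    # input can never change the structure of the query itself.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("x' OR '1'='1"))  # returns all rows: injection
print(find_user_safe("x' OR '1'='1"))      # returns []: safe
```

Knowing to ask "are these queries parameterized?" is exactly the kind of validation the model won't do for you.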