Writing software has never been faster. The question is: at what cost?

Today, a developer can generate hundreds of lines of working code in minutes. Tools like GitHub Copilot, Cursor, and Claude have made Vibe Coding (AI-assisted development through natural language) a routine practice for teams worldwide.

This shift is genuinely transformative. Prototypes that once took days now take hours. Repetitive tasks disappear. The barrier to entry for building software has dropped dramatically.

But there’s a problem most organizations haven’t solved yet: AI-generated code is not secure by default.

The Code Works. The Problem Does Too.

This is the core paradox of Vibe Coding: AI produces code that runs, passes tests, and looks right. The catch is that “working” and “secure” are two different things.

Recent studies show that a significant share of automatically generated code contains classic vulnerabilities: SQL injection, broken authentication, compromised dependencies, and inadvertent credential exposure. Not because the AI is incompetent, but because it has no awareness of risk context, system architecture, or regulatory obligations.

In practice, AI is an external code supplier. And it should be treated as one.

What’s at Stake in 2025

For the many organizations covered by NIS2, across sectors like banking, healthcare, critical infrastructure, and digital service providers, Vibe Coding without governance isn't just a technical risk. It's a compliance risk.

The directive is clear on its principles: Secure-by-Design, proportional risk management, and auditable evidence of controls. None of these requirements are compatible with a "the code works, so let's ship it" culture, as reinforced by EU guidance and ENISA implementation frameworks.

9 Requirements for Using AI in Development Safely

This isn’t about banning Vibe Coding. It’s about putting a framework around it.

  1. Treat AI-generated code as untrusted code – Like any external dependency, AI-generated code should enter the development cycle subject to controls. Final responsibility always rests with a human.
  2. Formal governance and usage policy – Who can use AI? In what contexts? With what restrictions? Without documented answers to these questions, the risk isn’t technical – it’s organizational.
  3. Protection of sensitive information in prompts – Secrets, tokens, personal data, and confidential code should never be sent in prompts to external models. The risk isn’t in training – it’s inference leakage.
  4. Secure prompting as a systematic practice – If you don’t explicitly ask for security, the AI won’t deliver it. Input validation, authentication, secure defaults – all of this starts in the prompt, before the code even exists.
  5. Mandatory human review – No AI-generated code goes to production without review. In an audit, the absence of this evidence is indefensible for SSDLC accountability.
  6. Automated AppSec in the pipeline – SAST, dependency analysis, secret scanning, security gates in CI/CD. The speed of AI is only sustainable when matched by security automation.
  7. Validation of AI-suggested dependencies – AI suggests packages that may not exist, or that were created specifically to deceive (slopsquatting). Every new dependency requires explicit validation.
  8. Proportionality by use case – Documentation and tests have a different risk profile than authentication and cryptography. Controls should be risk-based and proportional, not uniform.
  9. Traceability and auditable evidence – What can’t be demonstrated doesn’t exist – especially during regulatory inspections or after an incident.
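Requirement 3 can be enforced mechanically before a prompt ever leaves the developer's machine. A minimal sketch in Python, assuming a handful of illustrative regex patterns (the pattern list and the `scrub_prompt` helper are assumptions for this example, not any real tool's API; a production setup would rely on a maintained secret scanner's ruleset):

```python
import re

# Illustrative patterns for common credential formats; a real deployment
# would use a maintained scanner's ruleset, not this short list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                       # GitHub personal access token
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),  # key=value secrets
]

def scrub_prompt(prompt: str) -> tuple[str, bool]:
    """Redact likely secrets from a prompt; return (clean_text, found_any)."""
    found = False
    for pattern in SECRET_PATTERNS:
        prompt, count = pattern.subn("[REDACTED]", prompt)
        found = found or count > 0
    return prompt, found
```

A pre-send hook can then either block the prompt outright or log the redaction event, which also feeds the auditable-evidence trail of requirement 9.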
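Requirement 7 lends itself to a simple pre-merge gate. The sketch below flags AI-suggested package names that closely resemble, but do not match, a vetted allowlist, which is one signal of the slopsquatting pattern described above; the allowlist contents and the `check_dependency` helper are illustrative assumptions:

```python
import difflib

# Hypothetical allowlist of packages the organization has already vetted.
APPROVED_PACKAGES = {"requests", "numpy", "cryptography", "sqlalchemy"}

def check_dependency(name: str) -> str:
    """Classify an AI-suggested dependency before it enters the build."""
    if name in APPROVED_PACKAGES:
        return "approved"
    # A near-miss of a vetted name is a classic typosquatting signal.
    near = difflib.get_close_matches(name, APPROVED_PACKAGES, n=1, cutoff=0.85)
    if near:
        return f"suspicious: resembles '{near[0]}'"
    return "unknown: requires manual review"
```

Anything classified as suspicious or unknown goes to the human review step of requirement 5 rather than straight into the lockfile.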

Vibe Coding isn’t the problem. The informality with which it’s being adopted is. AI will continue to accelerate software development. But acceleration without governance isn’t efficiency – it’s the accumulation of technical and regulatory debt at the same time.

The organizations that pull ahead won’t be the ones using the most AI. They’ll be the ones that know how to use it in a controlled, auditable, and secure way.

That’s the next level of Vibe Coding.

Written by Sérgio Sá.