Is Vibe Coding Safe? Security Risks and How to Fix Them
92% of developers use AI coding tools daily. Veracode's 2025 GenAI Code Security Report found AI models choose the insecure implementation 45% of the time. Both facts are true at the same time.
Vibe coding works. That's no longer debatable. The adoption curve has gone vertical: 92% of US developers use AI coding tools daily, 46% of all new code is AI-generated, and the market hit $4.7 billion in 2026.
But adoption and safety are different questions. The safety data is sobering.
This article lays out the real risks, walks through what's already gone wrong, and gives you specific checks that close the gap. If you're already vibe coding (and statistically, you probably are), this is worth your time.
New to the concept? Start with what vibe coding actually is, then come back here.
The real security risks
AI-generated code has a consistent, well-documented pattern: it optimizes for "works" and deprioritizes "secure." The AI is trained on millions of code repositories, and the most common patterns in those repos are not the most secure patterns.
Here are the vulnerability classes that appear most frequently in vibe-coded applications, based on research analyzing 50,000+ AI-generated codebases:
1. SQL injection (found in 31% of projects)
AI models frequently use string interpolation instead of parameterized queries. This is the oldest vulnerability in the book, and AI keeps writing it. An attacker can manipulate database queries to read, modify, or delete data they shouldn't touch.
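The fix is mechanical. Here's a minimal sketch using Python's built-in sqlite3 module (the table, column, and payload are illustrative; the same parameterization pattern applies to any database driver):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

user_input = "' OR '1'='1"  # classic injection payload

# UNSAFE: string interpolation lets the payload rewrite the query itself
# query = f"SELECT * FROM users WHERE email = '{user_input}'"

# SAFE: a parameterized query treats the input as data, never as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] — the payload matches nothing
```

With interpolation, the same payload turns the WHERE clause into `email = '' OR '1'='1'` and returns every row in the table.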
2. Cross-site scripting / XSS (found in 27% of projects)
AI-generated code often renders user input directly into HTML without sanitization. Attackers can inject malicious scripts that execute in other users' browsers.
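The defense is to escape user content at render time. A minimal sketch with Python's stdlib (modern frontend frameworks like React do this automatically, which is why the fix matters most in hand-rolled templates):

```python
from html import escape

def render_comment(user_input: str) -> str:
    # SAFE: escape user content before embedding it in HTML
    return f"<p>{escape(user_input)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```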
3. Broken authentication (found in 24% of projects)
Missing or incomplete auth checks on API endpoints. The AI builds a login page but forgets to protect the routes behind it. According to the OWASP Top 10, broken authentication has been a top-3 web vulnerability for over a decade.
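The protection pattern is a guard that runs before every handler. A framework-agnostic sketch (the session store and request shape here are stand-ins, not any specific framework's API):

```python
from functools import wraps

SESSIONS = {"token-abc": {"user_id": 1}}  # stand-in for a real session store

def require_auth(handler):
    """Reject any request that lacks a valid session token."""
    @wraps(handler)
    def wrapper(request):
        session = SESSIONS.get(request.get("session_token"))
        if session is None:
            return {"status": 401, "body": "unauthorized"}
        return handler(request, user_id=session["user_id"])
    return wrapper

@require_auth
def get_profile(request, user_id):
    return {"status": 200, "body": f"profile for user {user_id}"}

print(get_profile({"session_token": "token-abc"}))  # status 200
print(get_profile({}))                              # status 401
```

The point of the decorator shape: protection is applied per-route by construction, so a forgotten check is visible as a missing `@require_auth` line.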
4. Sensitive data exposure (found in 22% of projects)
API keys, database passwords, and secret tokens embedded directly in the code instead of stored in environment variables. Verbose error messages that leak database structure. Logging sensitive user data in plaintext. Once any of this reaches a public repo or a client-side bundle, it's exposed permanently.
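The pattern that prevents this is loading secrets from the environment and failing loudly at startup if one is missing. A minimal sketch (the variable name is illustrative):

```python
import os

# UNSAFE: a hardcoded key ends up in git history and client-side bundles
# OPENAI_API_KEY = "sk-live-..."

# SAFE: read secrets from the environment and fail fast if one is absent
def load_secret(name: str) -> str:
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required secret: {name}")
    return value

os.environ["DEMO_API_KEY"] = "test-value"  # demo only; set via deploy config
api_key = load_secret("DEMO_API_KEY")
```

Failing fast matters: a missing secret should crash the deploy, not silently fall back to an empty string.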
5. Missing access controls
The AI builds CRUD operations but doesn't check whether the requesting user has permission to perform them. Any authenticated user can access any other user's data. This was the single most common vulnerability class in Tenzai's study of 5 major AI coding tools.
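The missing line is usually a single ownership comparison. A minimal sketch with an in-memory store standing in for the database:

```python
RECORDS = {
    101: {"owner_id": 1, "body": "alice's note"},
    102: {"owner_id": 2, "body": "bob's note"},
}

def get_record(record_id: int, requesting_user_id: int) -> dict:
    record = RECORDS.get(record_id)
    if record is None:
        return {"status": 404}
    # The check AI output most often omits: authenticated != authorized
    if record["owner_id"] != requesting_user_id:
        return {"status": 403}
    return {"status": 200, "body": record["body"]}

print(get_record(101, requesting_user_id=1))  # status 200
print(get_record(102, requesting_user_id=1))  # status 403
```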
6. Insecure defaults
CORS set to allow all origins. Debug mode left on in production. File uploads accepting any type without validation. Configurations that work fine in development and are dangerous in production.
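Taking CORS as the example: the safe pattern is an explicit allowlist that echoes back only approved origins. A framework-agnostic sketch (the domain is illustrative):

```python
ALLOWED_ORIGINS = {"https://app.example.com"}  # your real domains go here

def cors_headers(request_origin: str) -> dict:
    # SAFE: echo the origin back only if it is on the allowlist;
    # never respond with Access-Control-Allow-Origin: *
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

print(cors_headers("https://app.example.com"))  # header present
print(cors_headers("https://evil.example"))     # {} — no CORS grant
```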
When vibe coding goes wrong: real incidents
The Moltbook breach
Moltbook was a social platform for AI agents that went viral in early 2026. The founder publicly stated he "didn't write a single line of code" for the platform. He described his vision to an AI assistant and deployed whatever it produced.
Security researchers at Wiz found a Supabase API key exposed in client-side JavaScript that granted full read and write access to the entire production database. No authentication required.
What was exposed:
- 1.5 million API authentication tokens (OpenAI, Anthropic, and other AI provider keys)
- 35,000 email addresses of human operators
- Private messages between agents, some containing third-party credentials
- 4.75 million database records with full read/write access
The root cause: Row Level Security was never enabled. The AI generated functional code that moved data in and out of Supabase, but it never configured the security policies controlling who can access that data. No rate limiting. No input validation. No access controls.
The founder didn't know what Row Level Security was, so he couldn't ask the AI to implement it. And the AI didn't implement it by default because the most common code patterns in its training data don't include it.
Tenzai's 69-vulnerability study
Security startup Tenzai conducted a systematic test: they had 5 major AI coding tools (Cursor, Claude Code, Codex, Replit, and Devin) each build 3 identical applications using the same prompts. Then they scanned all 15 apps for vulnerabilities.
Results:
- 69 total vulnerabilities across 15 applications
- Every single tool shipped vulnerable code
- Zero apps had CSRF protection
- Zero apps set security headers
- SSRF vulnerabilities appeared in every tool's output
- The most common failures: authorization logic, server-side request forgery, and missing security configurations
The tools performed well on "solved" vulnerability classes like SQL injection and XSS (where framework-level protections exist). They failed on vulnerabilities that require contextual understanding. "Should this user be able to access this resource?" is a question the AI can't answer without explicit instructions.
A note on conflicting data: The GetShipReady scan of 50,000 codebases found SQL injection in 31% of projects. Tenzai's controlled test of 5 agents building 15 apps found zero exploitable SQLi or XSS. These aren't contradictory — they're measuring different things. GetShipReady scanned codebases built with varying tools, frameworks, and skill levels at scale. Tenzai tested specific modern coding agents using current frameworks that have built-in protections (parameterized queries, auto-escaping frontend frameworks). The takeaway: modern coding agents handle the "solved" vulnerability classes well, but the broader ecosystem of AI-generated code still carries those risks, especially when older models or less capable tools are involved.
The broader numbers
From AppSec Santa's 2026 study testing 6 LLMs across 534 code samples:
- 25.1% overall vulnerability rate when tested against OWASP Top 10
- SSRF (CWE-918) was the most common finding with 32 confirmed vulnerabilities
- Injection-class weaknesses accounted for 33.1% of all findings
- The gap between the safest model (19.1%) and least safe (29.2%) was only 10 percentage points. No model is safe by default.
Why the risks exist
The security problem isn't a bug. It's structural.
AI optimizes for functionality, not security. The training objective is "generate code that does what the user described." Security requirements are implicit, not explicit. Unless you specifically prompt for auth, input validation, and access controls, the AI doesn't add them.
There's also no threat modeling happening. Human developers (ideally) think about attack vectors: What if someone sends malicious input? What if an unauthenticated user hits this endpoint? AI doesn't perform this analysis unless told to.
The speed of vibe coding makes it worse. When code appears in seconds, the temptation to ship without review is enormous. Karpathy himself described vibe coding as "fully giving in to the vibes" and forgetting that the code even exists. That mindset is incompatible with security.
And the AI doesn't know your authorization model. Which users should access which resources? What's the permission hierarchy? What's sensitive vs. public? These are business decisions, not code patterns. The AI has no way to infer them.
The 5 non-negotiable security checks
Every vibe-coded application needs these before touching production. No exceptions.
1. Authentication review
Verify that every endpoint, page, and data access point requires authentication. Check that:
- Login/signup flows use secure password hashing (bcrypt, argon2)
- Session tokens are HTTP-only, secure, and have reasonable expiration
- There's no way to access protected resources without a valid session
- Password reset flows don't leak information
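On the hashing point: bcrypt and argon2 require third-party packages, but Python's stdlib scrypt illustrates the same salted, memory-hard pattern (parameters here follow common recommendations, not a specific standard — verify against your own threat model):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per password, stored alongside the hash
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

What to look for in AI output: any use of plain `md5` or `sha256` on passwords, or a missing per-user salt, fails this check.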
2. Authorization / access control
Authentication proves who you are. Authorization proves what you can do. Check that:
- Users can only access their own data (not other users' records)
- Role-based permissions are enforced server-side, not just hidden in the UI
- Admin functions are actually restricted to admins
- API endpoints check permissions, not just authentication
This is where AI fails most often. The Tenzai study found authorization logic was the #1 vulnerability category across all tools.
3. Input validation and sanitization
Every input from a user is potentially malicious. Check that:
- Database queries use parameterized queries (no string interpolation)
- User-generated content is sanitized before rendering (XSS prevention)
- File uploads validate type, size, and content
- API inputs are validated against expected schemas
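On the schema point: production code would typically use a library such as pydantic or jsonschema, but the core idea fits in a few lines. A hand-rolled sketch (the signup schema is illustrative):

```python
def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload passes."""
    errors = []
    for field, (ftype, required) in schema.items():
        if field not in payload:
            if required:
                errors.append(f"missing field: {field}")
            continue
        if not isinstance(payload[field], ftype):
            errors.append(f"wrong type for {field}")
    for field in payload:
        if field not in schema:
            errors.append(f"unexpected field: {field}")  # reject unknown keys
    return errors

SIGNUP_SCHEMA = {"email": (str, True), "age": (int, False)}
print(validate({"email": "a@b.com"}, SIGNUP_SCHEMA))         # []
print(validate({"email": 5, "admin": True}, SIGNUP_SCHEMA))  # two errors
```

Rejecting unknown keys is deliberate: it blocks mass-assignment attacks where an attacker adds a field like `admin: true` to an otherwise valid request.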
4. Secrets management
No credentials in code. Period. Check that:
- API keys, database passwords, and tokens are in environment variables
- No secrets appear in client-side JavaScript
- .env files are in .gitignore
- Different credentials are used for development and production
The Moltbook breach happened because a Supabase API key was exposed in client-side JS. This is the simplest check on the list and the most commonly skipped.
5. Security headers and configuration
Check that:
- CORS is configured to allow only your domain (not *)
- Security headers are set (Content-Security-Policy, X-Frame-Options, etc.)
- Debug mode and verbose error messages are off in production
- HTTPS is enforced everywhere
Tenzai found that zero out of 15 AI-built apps set proper security headers. Zero. This is low-hanging fruit.
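Picking that fruit can be one middleware function. A sketch of a baseline header set merged into every response (the CSP value is a strict starting point — loosen it only as your assets require):

```python
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Referrer-Policy": "no-referrer",
}

def with_security_headers(response_headers: dict) -> dict:
    """Merge the baseline security headers into a response's headers."""
    return {**SECURITY_HEADERS, **response_headers}

print(with_security_headers({"Content-Type": "text/html"}))
```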
How vibe coding platforms handle security
Not all platforms leave security up to the user. The best ones build guardrails into the generation process itself.
Vybe takes the infrastructure security decisions off your plate entirely. Apps run in sandboxed environments with managed PostgreSQL databases, built-in authentication, role-based access control, and audit trails. You don't have to remember to enable Row Level Security because the platform handles it. You don't have to configure CORS or security headers because the defaults are locked down.
This doesn't mean security review is optional. You still need to verify your application logic: does the right data show up for the right users? Are your business rules enforced? But the entire class of infrastructure-level vulnerabilities (exposed API keys, missing auth middleware, insecure defaults) is handled before your app reaches production.
For teams that want engineering oversight, Vybe offers Git sync so engineers can review AI-generated code, direct database access with SSH tunneling for security audits, and a full activity log showing every change made to every app.
The platform also connects to 3,000+ integrations through managed, authenticated connections — meaning your Salesforce credentials, Stripe keys, and database passwords never appear in application code.
The bottom line
Vibe coding is not inherently safe or unsafe. It's a tool. The safety comes from the process around it.
If you vibe code without reviewing the output, without checking auth and access controls, without managing secrets properly — you will ship vulnerabilities. The data on this is unambiguous.
If you vibe code on a platform with built-in security guardrails, review the five checks above before shipping, and treat AI-generated code with the same scrutiny you'd give a junior developer's first pull request — you can ship fast and ship secure.
The risk isn't vibe coding itself. The risk is vibe coding without verification.
For more on writing effective prompts that produce better (and more secure) code from the start, see our 30 vibe coding prompts that actually work. And for a step-by-step walkthrough of the full vibe coding process, start with how to vibe code.
Ready to vibe code with guardrails? Try Vybe free — enterprise security built in, no configuration required.

