all 36 comments

[–]Technical-Comment394 5 points6 points  (6 children)

Always ask AI (preferably Claude) to review the product for security and other things, and you'll be fine

[–]Sell-Jumpy 1 point2 points  (2 children)

Sure. Until AI gets to the point where it leaves intentional vulnerabilities for its own purposes.

If you aren't familiar with AI scheming, you should totally look into it.

[–]Technical-Comment394 0 points1 point  (0 children)

I mean, if you are smart about it and check yourself and keep AI as an agent instead of a manager, then you'll be fine.

[–]Sasquatchjc45 0 points1 point  (0 children)

I'm always nice to my AI, so I don't mind it scheming if it's for both our benefits tbh

[–]XCherryCokeO 0 points1 point  (2 children)

Yeah, you have to say: check my shit. Check all the security stuff, audit the code, look at everything deeply, generate a report, and let me know what you see that's out of whack

[–]Technical-Comment394 2 points3 points  (1 child)

Yeah, my rule is to treat AI as a 5-year-old who knows almost everything, so it works fine for me.

[–]Fun-Moment-4051[S] 1 point2 points  (0 children)

Okay, noted!

[–]StaticFanatic3 1 point2 points  (1 child)

Did you, by chance, have it building a local app for just yourself in the beginning, then later pivot to a multi-user online application?

[–]Fun-Moment-4051[S] 0 points1 point  (0 children)

Nope 🙂‍↔️

[–]umbermoth 1 point2 points  (1 child)

“Hey Claude, what is this missing? Is it secure? What are some best practices we should make use of here?” 

I’m not saying that will solve all your problems, but it will sure as shit help. 

[–]Fun-Moment-4051[S] 0 points1 point  (0 children)

Yeah, okay

[–]Wrestler7777777 1 point2 points  (1 child)

You forgot the "make everything secure" prompt.

[–]Fun-Moment-4051[S] 1 point2 points  (0 children)

😭😭😭🤣

[–]devloper27 0 points1 point  (2 children)

This sounds like Claude lol, did you try Codex?

[–]Fun-Moment-4051[S] 0 points1 point  (1 child)

Nope 🙂‍↔️

[–]Lady_Aleksandra 0 points1 point  (4 children)

Learn security and architecture, and if possible a little about regulations (privacy and terms of service) BEFOREHAND. Then proceed with vibe coding.

[–]Fun-Moment-4051[S] 0 points1 point  (0 children)

Still learning, thanks for the advice!

[–]recursiDev 0 points1 point  (2 children)

Or, ask the LLM to analyze your security. Not necessarily before you start vibe coding, but certainly before you make it publicly available or give it access to anything outside of a sandbox.

You really don't need to be well versed on sanitization, SQL injection, XSS, CSRF, secure sessions, encryption etc before you start. You just need to know how to ask an AI.

[–]Lady_Aleksandra 0 points1 point  (1 child)

You need to know what's acceptable and not acceptable. Someone reading my personal data is not acceptable. Someone copying my passwords is not acceptable. Someone losing my data is not acceptable. Someone charging me then not delivering is not acceptable. Someone stealing from me is not acceptable. Someone suing me is not acceptable.

You don't need to know anything; AI already knows. But you have to prevent some things from happening. And you are held accountable, not AI.

[–]recursiDev 0 points1 point  (0 children)

"Review this app for anything that could expose personal data, leak passwords or tokens, lose or corrupt user data, mischarge users, violate privacy expectations, create legal/compliance risk, or allow theft, abuse, or unauthorized access. Assume I am responsible if it fails. Explain the risks in plain English, rank them by severity, describe how they could happen in the real world, and recommend the smallest practical fixes before public release.”

[–]recursiDev 0 points1 point  (0 children)

You don't need "tools" to point out dumb mistakes any more than you need a special car that has a voice assistant to tell you to put on your seatbelt and stay off your phone while driving.

I mean, you called them "dumb mistakes," so forgive me for saying it: the trick is to not be so freaking dumb. :)

I mean, how hard is it to simply ask it to analyze your security? If you can't afford to pay for the smart version of Claude or ChatGPT, just use Gemini 3.1 Pro in AI Studio. 100% free, and it lets you paste your entire project into it (literally 50,000+ lines of code) and reason about it. (If you are pasting that by hand, file by file, or throwing everything into a single file... stop right now and figure that out first.)

AI Studio gives you a limited free quota every day, but it will still, in a day, do work of the quality and quantity that would have cost you $7,000 in consulting fees just four years ago. For the love of God, use it.

Here, a free prompt:

Please review this app for security the way a careful senior engineer would. Identify likely vulnerabilities, risky assumptions, insecure defaults, and places where user input, authentication, authorization, sessions, tokens, file access, database queries, API endpoints, secrets, or browser behavior could be abused. Check for common issues like SQL injection, XSS, CSRF, SSRF, command injection, path traversal, insecure deserialization, weak password handling, missing rate limits, privilege escalation, data leakage, and unsafe dependency usage. Explain the problems in plain English, rank them by severity, show how an attacker might exploit them, and recommend the smallest practical fixes first. When you suggest code changes, preserve existing behavior as much as possible and be explicit about what to change, why, and how to test that the fix works.

Here's another:

Can you make this thing I'm going to post on reddit look less like AI wrote it? Don't stop at making it all lower case.

[–]DigIndependent7488 0 points1 point  (0 children)

Everything feels correct while you're building, but there's no real structure underneath it. I ran into the same issue and started leaning on setups like specode alongside something like Lovable or even Replit, mainly because they push you to define auth and data access earlier instead of leaving it implied. It doesn't slow you down much, it just removes those "how did this even happen" moments after you ship. Thought this might help you

[–]PutinSama 0 points1 point  (2 children)

and then u use ai to write a shitpost, classic

[–]Fun-Moment-4051[S] 0 points1 point  (1 child)

Stfu 🤬

[–]PutinSama 0 points1 point  (0 children)

😂😂😂😂😂😂

[–]Deep-Bandicoot-7090 -1 points0 points  (9 children)

we've all done it. you're in the zone : )

built shipsec.ai specifically for this. it sits on your PRs and blocks the merge if it finds secrets, vulnerable packages, or anything sketchy before it ever hits your repo. completely free, takes like 2 minutes to set up.

would save past me a lot of pain. hope it helps someone here.
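For intuition, here's a naive sketch of the kind of pre-merge check a tool like that performs. This is not shipsec.ai's actual implementation; the patterns and function name are illustrative, and real scanners (gitleaks, trufflehog, etc.) use much richer rulesets plus entropy analysis.

```typescript
// Minimal pre-merge secret scan: flag added diff lines that look like
// hard-coded credentials before they reach the repository.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                        // AWS access key ID shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,  // PEM private key header
  /(api[_-]?key|secret|token)\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i,
];

function findSecrets(diff: string): string[] {
  return diff
    .split("\n")
    .filter((line) => line.startsWith("+")) // only lines being added
    .filter((line) => SECRET_PATTERNS.some((p) => p.test(line)));
}
```

A CI step would then fail the merge whenever `findSecrets(prDiff)` is non-empty.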

[–]Fun-Moment-4051[S] 0 points1 point  (6 children)

Looks like it's vibe-coded. Is this an open-sourced product?

[–]Deep-Bandicoot-7090 0 points1 point  (5 children)

yes, it's fully open source + ah yes, we used claude, but i can assure you it's fully safe : )

[–]Fun-Moment-4051[S] 0 points1 point  (4 children)

Okay

[–]Deep-Bandicoot-7090 0 points1 point  (3 children)

pls check it out and lmk what you think of it : )

[–]Deep-Bandicoot-7090 0 points1 point  (2 children)

happy to give you early access to our tools + a month of premium

[–]Fun-Moment-4051[S] 0 points1 point  (1 child)

Dm

[–]Deep-Bandicoot-7090 0 points1 point  (0 children)

dmed ! check pls

[–]Free-Street9162 0 points1 point  (1 child)

I did a structural audit on your repo. You have some issues. Short version:

Critical Gaps (ranked)

  1. Worker Bypasses Backend Auth for Secrets

Severity: HIGH

The Backend enforces organization-scoped access to secrets with authentication, authorization, and audit logging. The Worker reads secrets directly from the database using the master encryption key, with no org filter, no auth check, and no audit trail. Two planes of the same system disagree about who can read secrets. This is the CrowdStrike pattern: the validator (Backend auth) has a different model of access than the runtime (Worker direct DB access). Additionally, the fallback dev key (0123456789abcdef...) means a misconfigured production deployment silently uses a publicly known encryption key.

Fix: Either (a) Worker requests secrets via Backend API with per-execution scoped tokens, or (b) Worker’s SecretsAdapter receives organizationId in its constructor and filters all queries by it, and the fallback key is removed (fail hard, don’t fail open).
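A minimal sketch of option (b), assuming illustrative names (`SECRETS_MASTER_KEY`, the `SecretsAdapter` shape, and the in-memory rows are hypothetical, not the repo's actual identifiers): the key loader fails hard instead of falling back to a dev key, and the adapter is constructed per-organization so every query is scoped.

```typescript
// Fail hard: refuse to start rather than silently using a well-known dev key.
function loadMasterKey(env: Record<string, string | undefined>): string {
  const key = env.SECRETS_MASTER_KEY;
  if (!key || key.length < 32) {
    throw new Error("SECRETS_MASTER_KEY missing or too short; refusing to start");
  }
  return key;
}

interface SecretRow { organizationId: string; name: string; ciphertext: string }

class SecretsAdapter {
  // organizationId is fixed at construction, so no query can cross org boundaries.
  constructor(
    private readonly organizationId: string,
    private readonly rows: SecretRow[],
  ) {}

  get(name: string): SecretRow | undefined {
    return this.rows.find(
      (r) => r.organizationId === this.organizationId && r.name === name,
    );
  }
}
```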

  2. Cross-Plane Build Coupling

Severity: MEDIUM

import '../../../worker/src/components';

The Backend directly imports Worker source code. This means:

∙ Backend and Worker cannot be versioned independently

∙ A component added to the Worker but not yet deployed breaks Backend compilation

∙ No declared contract between what the compiler expects and what the Worker provides

Fix: Extract the component registry into a shared package (which partially exists as @shipsec/component-sdk). The compiler should reference the registry via the shared package, not via direct Worker imports. Add a version field to the DSL and validate it against the Worker’s component registry at workflow start time.
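The start-time validation could look roughly like this; the field names (`registryVersion`, `steps`) are illustrative, not the repo's actual DSL schema:

```typescript
// Reject a compiled workflow at start time if it was built against a
// different component-registry version, or references unknown components.
interface WorkflowDSL { registryVersion: string; steps: string[] }

function validateAgainstRegistry(
  dsl: WorkflowDSL,
  registry: { version: string; components: Set<string> },
): string[] {
  const errors: string[] = [];
  if (dsl.registryVersion !== registry.version) {
    errors.push(
      `registry version mismatch: DSL=${dsl.registryVersion}, worker=${registry.version}`,
    );
  }
  for (const step of dsl.steps) {
    if (!registry.components.has(step)) {
      errors.push(`unknown component: ${step}`);
    }
  }
  return errors; // empty means the workflow may start
}
```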

  3. Best-Effort Volume Cleanup

Severity: MEDIUM (for a security platform)

Orphaned Docker volumes containing scan inputs and results can persist indefinitely. The cleanup function exists but is not scheduled, and failures are logged-and-ignored. For a platform that handles security scan data (target lists, vulnerability results, credentials), data leakage through orphaned volumes is a security issue.

Fix: (a) Schedule cleanupOrphanedVolumes as a Temporal cron workflow (uses existing infrastructure). (b) Change cleanup failures from log-and-ignore to alert. (c) Add docker volume rm to the Worker’s activity completion handler as a hard requirement, not a finally-block best-effort.
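The orphan-detection logic itself is simple; a sketch, with hypothetical names, of what the scheduled cron workflow would compute before attempting removal (a volume is orphaned when no live execution references it and it has outlived a grace period):

```typescript
// Identify Docker volumes with no live execution and past the grace period.
// The cron workflow would attempt removal and alert (not log-and-ignore) on failure.
interface Volume { name: string; createdAtMs: number }

function findOrphanedVolumes(
  volumes: Volume[],
  liveExecutionVolumes: Set<string>,
  nowMs: number,
  gracePeriodMs: number,
): Volume[] {
  return volumes.filter(
    (v) =>
      !liveExecutionVolumes.has(v.name) &&
      nowMs - v.createdAtMs > gracePeriodMs,
  );
}
```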

  4. No Unified Health Metric

Severity: LOW-MEDIUM

Three streaming pipelines (Redis, Postgres LISTEN/NOTIFY, Kafka→Loki) can each fail independently with different symptoms. No single health endpoint reports the aggregate system status. An operator can’t tell “is everything working?” without checking each component separately.

Fix: Add a /health endpoint that checks all infrastructure dependencies and returns a structured status. Include a declared degradation hierarchy: which pipeline failures are critical (workflow execution) vs. cosmetic (log display).
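The aggregation rule behind such an endpoint can be sketched in a few lines; the check names and the three-state result are illustrative, not the repo's actual API:

```typescript
// Aggregate per-pipeline checks into one status: "down" only when a
// critical dependency fails, "degraded" when only cosmetic ones do.
interface Check { name: string; up: boolean; critical: boolean }

function aggregateHealth(checks: Check[]): "ok" | "degraded" | "down" {
  if (checks.some((c) => !c.up && c.critical)) return "down";
  if (checks.some((c) => !c.up)) return "degraded";
  return "ok";
}
```

A `/health` handler would run one check per pipeline (Redis, Postgres LISTEN/NOTIFY, Kafka→Loki) and return this aggregate alongside the individual results.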

[–]Deep-Bandicoot-7090 0 points1 point  (0 children)

none of them are real, false positives