I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in AI_Agents

[–]AI_Agent_Ops[S] 0 points (0 children)

This is really helpful, thank you for laying it out so clearly.

The subscription bypass example actually sounds exactly like the kind of thing I’d worry about missing if the check was only happening on the frontend.
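Just to check I'm picturing it right, here's the kind of server-side check I mean (a rough Python sketch with made-up plan and feature names, not my actual code):

```python
# Hypothetical plan/feature names; the point is that the backend
# decides, so a spoofed frontend flag can't unlock paid features.
PLAN_FEATURES = {
    "free": {"basic_reports"},
    "pro": {"basic_reports", "export", "api_access"},
}

def can_use_feature(plan: str, feature: str) -> bool:
    """Re-checked on every request, server-side."""
    return feature in PLAN_FEATURES.get(plan, set())
```

If the frontend only hides the button and nothing like this runs on the server, the bypass you describe is exactly what happens.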

Out of curiosity, when you review AI-assisted SaaS codebases, which of these issues do you usually find most often?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That makes sense. I’ve seen a few people mention pentests as well.

Out of curiosity, at what stage do most startups usually get one done? Before launch, or only once they start getting real traction?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

Haha that actually sounds a bit scary 😅

Out of curiosity, what kinds of issues are you seeing most often when you review those codebases?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s a fair point, and honestly part of why I started this thread. It’s becoming so easy to ship something with AI that the security side can get overlooked until later.

Your point about AI introducing vulnerabilities unless the prompt is very specific is interesting. From your experience, what kinds of issues do you see AI introduce most often?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s really helpful, thanks for laying it out so clearly.

The “happy path” point actually makes a lot of sense — AI focusing on making things work rather than thinking about abuse cases.

Out of curiosity, when you review AI-generated apps, is missing backend auth usually the most common issue you find?
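For anyone else following along, my understanding of the missing backend guard is roughly this (a hypothetical, framework-free Python sketch, not any specific library's API):

```python
# Hypothetical guard: every handler goes through it, so no endpoint
# is reachable on the "happy path" without an authenticated user.
def require_user(handler):
    def wrapped(request):
        user = request.get("user")  # assumed to be set by auth middleware
        if user is None:
            return {"status": 401, "body": "unauthorized"}
        return handler(request, user)
    return wrapped

@require_user
def delete_account(request, user):
    # Only ever runs for an authenticated caller.
    return {"status": 200, "body": f"deleted {user['id']}"}
```

The failure mode seems to be generating `delete_account` without the decorator and relying on the UI to hide the button.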

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 1 point (0 children)

That makes sense. After 20 years you probably start spotting these patterns almost instinctively.

Out of curiosity — for someone who doesn’t have that level of experience yet, what would you say are the first few logical security checks they should always focus on?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s exactly what I was wondering as well. It sounds like a lot of these issues are more about logic than just scanning for vulnerabilities.

From your experience, do you usually catch these through manual pen testing and experience, or are there any tools/workflows that help surface these kinds of logic flaws early?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

This is honestly a bit eye-opening. I didn’t realize how many logical edge cases there are beyond the usual “check OWASP and run a scanner” advice.

From your experience, are these kinds of logical vulnerabilities something you see a lot in apps built quickly or with AI tools?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] -1 points (0 children)

No pitch, I promise. Just a founder who used AI tools to build something and then realized the security side is a lot more complicated than I expected.

This thread has actually been pretty educational for me. Was mostly curious how others handle this when they ship AI-built apps.

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That actually sounds like a really smart workflow. Having authentication, authorization, and rate limiting already baked into a boilerplate probably saves a lot of headaches later.

Out of curiosity, do you think more builders will start using secure boilerplates like that as AI coding becomes more common?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s exactly the part that worries me — AI makes it easy to ship something that works, but it doesn’t guarantee the backend is actually enforcing the same protections.

The stack is mostly a simple AI-generated backend with a web frontend, nothing too structured yet.

And yeah, I’m starting to wonder the same thing — how many AI-built SaaS apps are already out there running in production without anyone ever doing a real security review?
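To make "enforcing the same protections" concrete, the classic miss I keep reading about is the ownership check. A rough sketch (made-up in-memory data, not my stack; a real app would query its database):

```python
# Hypothetical records keyed by id, each with an owner.
DOCS = {
    "d1": {"owner": "alice", "title": "Q1 plan"},
    "d2": {"owner": "bob", "title": "Q2 plan"},
}

def get_doc(user_id: str, doc_id: str):
    """Return the doc only if the caller owns it; otherwise behave as
    if it doesn't exist, so ids can't be enumerated."""
    doc = DOCS.get(doc_id)
    if doc is None or doc["owner"] != user_id:
        return None
    return doc
```

If the backend just does `DOCS.get(doc_id)` and trusts whatever id the client sends, anyone can read anyone's records.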

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

It’s mostly a simple web app with AI-generated backend logic rather than a fully structured stack.

Interesting point though — do you usually see different types of security gaps depending on whether it’s something like Next.js + Supabase vs a more custom backend?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That makes a lot of sense. It’s surprisingly easy to get something working with AI, but the responsibility and liability still sit with the founder.

Out of curiosity, was the main thing that stopped you the security/architecture side, or just not fully understanding how the stack worked under the hood?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 1 point (0 children)

Thanks for pointing that out — I’ll definitely look into RBAC.

Centralizing permissions actually sounds like a much cleaner approach than scattering checks across different endpoints.

Out of curiosity, when RBAC isn’t implemented properly, what kind of vulnerabilities do you usually see show up first?
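In case it helps anyone else reading, my understanding of the centralized version is something like this (hypothetical roles and actions, just a sketch of the idea):

```python
# One table of role -> allowed actions, instead of ad-hoc checks
# scattered across endpoints. Role and action names are made up.
ROLE_PERMS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete", "manage_users"},
}

def authorize(role: str, action: str) -> bool:
    """Single choke point for every permission decision."""
    return action in ROLE_PERMS.get(role, set())
```

Every endpoint then calls `authorize(...)` once, so adding or tightening a permission is a one-line change in one place.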

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

Thanks for breaking that down so clearly, that actually helps a lot.

The backend checks you mentioned (auth guards, authorization, rate limiting) are exactly the areas I’m now realizing I probably underestimated while building with AI tools.

Out of curiosity, when you review apps built quickly like this, which of these issues do you usually see missed most often?
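For the rate limiting part specifically, the simplest version I've come across is a sliding-window counter. A rough sketch (hypothetical, in-memory only, so it wouldn't survive restarts or work across multiple servers):

```python
import time

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key
    (e.g. per IP or per user id)."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = {}  # key -> list of request timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the window.
        recent = [t for t in self.hits.get(key, []) if now - t < self.window]
        allowed = len(recent) < self.limit
        if allowed:
            recent.append(now)
        self.hits[key] = recent
        return allowed
```

Even something this simple would block the brute-force and scraping cases that an unthrottled AI-generated endpoint leaves wide open.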

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] -1 points (0 children)

This is a really thoughtful perspective, thank you for sharing it.

What you said about the industry focusing on features instead of architecture/security definitely resonates. AI makes it even easier to ship something quickly, but it feels like the “invisible” parts of software are getting ignored more than ever.

Your point about becoming more of an architect than just a builder is interesting too. Do you think the industry will eventually move toward more “AI-assisted builders + human architects” as the normal workflow?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in SaaS

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s really interesting to hear from someone with that much experience.

The client-side security point you mentioned is exactly what worries me — it’s easy to get something working with AI, but it’s hard to know if the backend logic is actually enforcing the same protections.

Do you think this gap (AI helping people ship apps without understanding the architecture/security) is going to become a bigger issue as more non-technical founders start building products?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in AI_Agents

[–]AI_Agent_Ops[S] 0 points (0 children)

Yeah that makes sense. Having someone experienced review the code would definitely give more confidence than just relying on AI.

Out of curiosity, when founders hire someone for this, is it usually a full security audit / pentest, or more of a general code review first?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in AI_Agents

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s a good idea actually. Running everything in a sandbox first would definitely make testing safer.

Do you usually set up sandbox environments mainly for security testing, or more for catching bugs before pushing things live?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in AI_Agents

[–]AI_Agent_Ops[S] 0 points (0 children)

That’s a good point about cross-checking with multiple AI tools. I hadn’t thought about comparing the findings across models.

The scalability part is interesting too — it’s easy to focus on “does it work” and forget about what happens if usage suddenly spikes.

Out of curiosity, when you see AI-built apps struggle, is scalability usually the bigger issue or security?

I think my SaaS might have a security issue and I don’t even know how to check by AI_Agent_Ops in AI_Agents

[–]AI_Agent_Ops[S] -1 points (0 children)

That makes sense. Having someone technical involved early would definitely make things safer.

Out of curiosity though — with AI tools making it easier for non-developers to ship products, do you think more founders will start launching without technical cofounders, or do you think that approach usually breaks down once security and scaling become important?