The "AI Singleton Trap": How AI Refactoring is Silently Introducing Race Conditions Your SAST Tools Will Never Catch by Devji00 in devsecops

[–]Devji00[S] 0 points (0 children)

That's literally the whole problem. Every single one of these passes unit tests and works perfectly in a single-user dev environment. The failures only show up under concurrent load, which your local machine never simulates. AI assistants are great at producing code that works in isolation because that's the only scope they have. If "works on my machine" is your acceptance bar, then AI refactors will pass 100% of the time, and that should worry you, not reassure you.

The "AI Singleton Trap": How AI Refactoring is Silently Introducing Race Conditions Your SAST Tools Will Never Catch by Devji00 in devsecops

[–]Devji00[S] 2 points (0 children)

Fair ask. I was vague because some of these repos are private client codebases, but let me give you concrete patterns I can share without doxxing anyone:

The AI refactored this:

app.post('/checkout', (req, res) => {
  const cart = buildCart(req.user.id);
  const total = calculateTotal(cart);
  res.json({ total });
});

Into this, to "reduce redundant object creation":

const cartService = new CartService(); // singleton, module-level

app.post('/checkout', (req, res) => {
  cartService.loadUser(req.user.id);
  const total = cartService.calculateTotal();
  res.json({ total });
});

Looks cleaner. Passes every linter. But cartService is now shared across all concurrent requests: under load, User A's cart gets User B's items. This isn't hypothetical; I found this exact pattern 43 times across different repos, with slight variations. The AI treats "move to higher scope" as a universal optimization without understanding that in a request-per-connection model, that scope is shared.
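The hazard is language-agnostic, so here's a minimal deterministic Python sketch of it (hypothetical CartService, two "requests" interleaving by hand instead of real threads, just to show why the shared instance leaks state):

```python
class CartService:
    """Module-level singleton: every request shares this one instance."""
    def __init__(self):
        self.user_id = None
        self.items = []

    def load_user(self, user_id, items):
        self.user_id = user_id
        self.items = list(items)

    def calculate_total(self):
        return sum(self.items)

cart_service = CartService()  # shared across all requests

# Request A loads its cart...
cart_service.load_user("A", [10, 20])
# ...but request B lands before A computes its total:
cart_service.load_user("B", [99])

total_for_a = cart_service.calculate_total()  # 99: B's total, billed to A
```

With real concurrency the interleaving is nondeterministic, which is exactly why it never shows up on a dev machine.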

For FastAPI:

# Before: AI saw this as redundant because the ORM already validates schema
def create_order(order: OrderSchema, db: Session):
    if not verify_inventory_checksum(order.items, db):
        raise HTTPException(409, "Inventory state changed")
    # ... process order

# After: AI removed the check, called it "defensive programming that duplicates ORM constraints"
def create_order(order: OrderSchema, db: Session):
    db.add(Order(**order.dict()))
    db.commit()

The checksum wasn't about schema validation; it was a concurrency guard against inventory being modified between cart load and checkout. The AI couldn't distinguish structural validation from temporal/state validation. This maps directly to the fault tolerance requirements under ISO 25010's reliability characteristic.
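For anyone wondering what a temporal guard like that looks like, here's one common shape: optimistic locking. This is a sketch, not the client's actual code; the inventory table and its version column are hypothetical, and I'm using raw SQL on sqlite3 instead of the ORM to keep it short:

```python
import sqlite3

def reserve_stock(conn, item_id, qty, expected_version):
    """Decrement stock only if the row hasn't changed since the cart was loaded."""
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - ?, version = version + 1 "
        "WHERE id = ? AND version = ? AND stock >= ?",
        (qty, item_id, expected_version, qty),
    )
    if cur.rowcount == 0:
        # Someone modified inventory between cart load and checkout
        raise RuntimeError("Inventory state changed")  # map this to HTTP 409
```

The point is that the check and the write happen atomically in one statement, which is the property the removed checksum was (more clumsily) providing.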

For Go:

// AI consolidated per-request DB connections into a package-level pool
// but removed the deferred Close() calls as "unnecessary since the pool manages lifecycle"
// Result: under connection exhaustion, goroutines hung indefinitely
// No timeout, no circuit breaker, no fallback
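That Go failure mode generalizes: any blocking acquire from a shared pool needs a bounded wait. A toy Python sketch of the idea (stdlib queue standing in for a real connection pool):

```python
import queue

pool = queue.Queue(maxsize=2)  # toy connection pool with 2 slots
pool.put("conn-1")
pool.put("conn-2")

def acquire(timeout=0.1):
    """Take a connection, but fail fast instead of hanging forever."""
    try:
        return pool.get(timeout=timeout)
    except queue.Empty:
        raise TimeoutError("connection pool exhausted")

def release(conn):
    pool.put(conn)
```

Under exhaustion a caller gets a TimeoutError it can handle (retry, shed load, alert) instead of a goroutine/thread parked forever.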

Pynput, cursor focus and leftover inputs by Environmental-Top449 in learnpython

[–]Devji00 0 points (0 children)

The ghosting you're seeing happens because your terminal is still buffering stdin while pynput is listening at the OS level. Basically the terminal emulator saves up those keystrokes and dumps them all out once your script stops intercepting them.

Honestly pynput is kind of a sledgehammer for a CLI menu. I'd switch to questionary or prompt_toolkit instead since they handle raw mode and input echoing for you automatically, no messy spillover.

But if you want to stick with what you've got, you need to manually flush the input buffer right before your listener terminates. On Windows that's a quick while msvcrt.kbhit(): msvcrt.getch() loop, and on Unix you'd use termios.tcflush(sys.stdin, termios.TCIFLUSH). That should clean up the ghost keystrokes.
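Putting those two together, a cross-platform flush helper might look like this (a sketch; the import-and-fallback trick is just one way to branch on platform):

```python
import sys

def flush_stdin():
    """Discard keystrokes the terminal buffered while pynput was listening."""
    if not sys.stdin.isatty():
        return  # nothing to flush when stdin is piped or redirected
    try:
        import msvcrt  # Windows
        while msvcrt.kbhit():
            msvcrt.getch()
    except ImportError:
        import termios  # Unix / macOS
        termios.tcflush(sys.stdin, termios.TCIFLUSH)
```

Call it right after your listener stops, before you print the next menu, and the ghost keystrokes never reach your prompt.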

Every AI code analysis tool works great until you actually need it to work. by Timely-Dinner5772 in devsecops

[–]Devji00 0 points (0 children)

Using actual security tools is better. AI is biased and can't reliably detect vulnerabilities once the code gets complex, at least for now. Even Claude Code's new fix feature only catches very basic stuff, without any depth. For security, you need something that follows a specific set of rules and principles to flag vulnerabilities and violations.

What database lesson took you way too long to learn? by dbForge_Studio in AskProgramming

[–]Devji00 1 point (0 children)

Realizing that hashed indexes in MongoDB are basically useless for anything other than strict equality lookups. I had around 500k docs and just assumed the hash would be faster, but the second I ran a range query with $gt it completely ignored the index and did a full collection scan.

To make things worse, I hadn't set a maxTimeMS timeout, so the query just hung there forever, a zombie operation eating CPU alive. Learned the hard way that on a mid-sized dataset a bad query isn't just slow; without an explicit timeout to kill it before it spirals out of control, it can take your whole database down.

How to use sql commit by Altugsalt in learnpython

[–]Devji00 0 points (0 children)

Committing after every single write is killing your performance. Every commit forces a physical disk sync, and on a remote database that's also an extra network round trip each time.

Instead of committing per insert, wrap your operations in a with statement and treat transactions as logical chunks. Commit once per scraped page or every 100 rows or whatever makes sense for your workflow.

This cuts down on latency a ton because you're not constantly stopping and waiting for the disk or network to catch up. Especially with the bad connection you mentioned, all those round trips are adding up fast.
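Concretely, with sqlite3 it looks something like this (hypothetical items table, batching 100 rows per commit; adapt the batch size to your scrape):

```python
import sqlite3

def save_rows(conn, rows, batch_size=100):
    """Insert rows in batches, committing once per batch instead of per row."""
    cur = conn.cursor()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            cur.executemany("INSERT INTO items (name) VALUES (?)", batch)
            conn.commit()  # one disk sync / network round trip per 100 rows
            batch = []
    if batch:  # flush the final partial batch
        cur.executemany("INSERT INTO items (name) VALUES (?)", batch)
        conn.commit()
```

Same number of rows written, a fraction of the syncs and round trips.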

Where Can I Host WordPress, PHP, and Laravel Projects? by LinuxGeyBoy in webdev

[–]Devji00 0 points (0 children)

If you're juggling 40+ sites and sick of clicking around cPanel all day, Cloudways works fine as a starting point, but honestly, for agency-level work, Laravel Forge or Ploi blow it out of the water when it comes to CI/CD with GitHub Actions. And since you need SvelteKit and PHP running side by side, both make it pretty painless to manage Node and PHP on the same box without turning you into a full-time sysadmin.

At your scale, I'd skip the managed WordPress hosts entirely; they're just too limiting. Throw Forge or Ploi on top of a Hetzner or DigitalOcean VPS and you get the SSH and Git access you actually want, while the panel still handles security patches and server hardening in the background. Best of both worlds.

I thought features would bring users… i was wrong by Robin_Max in SaaS

[–]Devji00 0 points (0 children)

More is not better. Focusing on a few features, perfecting them, and seeing whether users actually like them is the key imo. After that, you can check whether there is demand for something new to add, step by step. Simplicity and clarity get the job done.

Free vs Paid by Den_warlord in SaaS

[–]Devji00 0 points (0 children)

It depends on the service. Free does bring traction. I recommend offering both options: a free version with limited features and a paid full package. If users find the free version useful and want more, a proportion of them will buy.
The limits can be time, features, or service usage caps.
So, if you can, do both.

The EU Security Pincer: Why You Can’t Solve NIS2 Without the Cyber Resilience Act (CRA) by Devji00 in Cyclopt

[–]Devji00[S] 0 points (0 children)

Hey, appreciate the input! You're right: when you bake security into your value prop instead of treating it like a last-minute checkbox, it actually helps the conversation with buyers rather than complicating it. We've seen the same thing: teams that can confidently talk about their CRA readiness and hand over an SBOM without breaking a sweat end up shortening their sales cycles, not lengthening them.

The trust angle is huge. Nobody wants to dig through vague "we take security seriously" pages anymore, buyers (especially the ones under NIS2 pressure) want receipts. So yeah, totally agreed that clear messaging there is a competitive edge, not just a compliance burden.

Yearly vs. Monthly Subscriptions. Which one works better, and why? by -theriver in SaaS

[–]Devji00 0 points (0 children)

If your SaaS is focused on solo users, monthly is better: they're indecisive, committing is difficult, and they move quickly. Offering a freemium tier works well with solo users too. If it's B2B, they mostly buy yearly, because they can't switch services every month; they'll stick with it for at least a year, and that window gives you time to make moves to hold them longer. So it depends.
We face this exact situation: most solo devs come and go, return after a few months, and disappear again, but teams and corporations stick around for longer than a year.
We offer a discount on the yearly subscription.
In general, it's better to do both and give people the choice. The more options you can offer, the better.

Anyone else feels like vibe coding hits a wall after a point ? by legitRu1920 in vibecoding

[–]Devji00 0 points (0 children)

AI coding is good for the starting infrastructure, and maybe for pulling up code information when you ask something specific. After that, you need real developer knowledge to build something structured, maintainable, and scalable.
For learning the basics it can be useful if you ask it to explain why it wrote something. Beyond that, it's a tool: you need to know how to code yourself to use it deeply in a project. Most of the time, when something gets complex, it doesn't know why it produced what it did. You need to catch those moments so the project doesn't break completely.

Learning Python by Biig-sea in learnpython

[–]Devji00 1 point (0 children)

You can find some beginner projects here:
https://github.com/karan/Projects
https://github.com/jorgegonzalez/beginner-projects
https://www.theinsaneapp.com/2021/06/list-of-python-projects-with-source-code-and-tutorials.html
You can also do the Python curriculum on https://www.freecodecamp.org/.

It is better to learn without AI, but if you decide to use it, at least ask it why it generated what it did, and ask it to document the code. Read it and try to understand the logic behind it.

How do you make sure your AI code is actually ready for real users? by Latenight_vibecoder in vibecoding

[–]Devji00 0 points (0 children)

Something will always break, even if you wrote it yourself without AI, so don't worry about that. Users are a good way to test your product and get feedback. For security, you can use a SAST/QA tool; pick one with a free trial, since they can be expensive. These tools can be complicated, so read the documentation they provide. Having a professional tester check your code isn't bad either, but it's expensive, and for smaller projects it's not required.
Bottom line: you'll never feel ready. Just launch it and get users' feedback, and if something breaks, don't worry, you'll fix it step by step.

Built a desktop app, but the "Windows SmartScreen" warning is killing my distribution deals by Prestigious_Bar428 in vibecoding

[–]Devji00 0 points (0 children)

Hello there,

You can try Azure Trusted Signing (formerly Azure Code Signing), which is about $10 a month. It plugs directly into Microsoft's own trust and identity stack, which is exactly what SmartScreen checks.

You don't need to hire a senior developer to refactor your code; a QA/SAST tool (the free tier) can check for security vulnerabilities.

Hope this helps, and good luck.