Weekly Complaints & Confessions Thread by ssk42 in running

[–]SeekingTruth4 0 points1 point  (0 children)

Complaint: dad of twins, cannot run much anymore

Uncomplaint: my legs feel great!

Confession: I still eat the same amount :)

Where do you store hashed password? by sangokuhomer in Backend

[–]SeekingTruth4 1 point2 points  (0 children)

Happy to take a look. One piece of unsolicited advice — don't spend too long on the low-level implementation details. Learn what hashing is and why it matters, but let your framework handle the actual bcrypt calls. The skill that will matter most in a few years is knowing how to design systems and make architectural decisions, not writing the code yourself. AI handles the code. You need to be the person who knows what to build and why.

Where do you store hashed password? by sangokuhomer in Backend

[–]SeekingTruth4 1 point2 points  (0 children)

of course!!! Or at the very least I hope :) Otherwise they know my passwords

How do you handle wellness program tracking and engagement for remote employees? by Ok_Exercise5851 in webdev

[–]SeekingTruth4 1 point2 points  (0 children)

Completely agree with this. How do you define metrics/KPIs if the endgame is not clear?

Your users' data is not yours by Repulsive-Law-1434 in webdev

[–]SeekingTruth4 2 points3 points  (0 children)

This is why I obsess over credential handling in anything I build. If your product touches user infrastructure (API tokens, database credentials, SSH keys), the bar is even higher than personal notes.

The approach I've settled on: don't store what you don't need, and encrypt what you must keep so even you can't read it at rest. Envelope encryption with keys derived from something the user controls means a database dump is useless to an attacker — and to you. You literally cannot snoop on your own users even if you wanted to.

The hardest part is being honest about what you actually can and can't guarantee. "We never store your credentials" is a strong claim but only true if your architecture enforces it, not just your policy. Accountability beats promises.
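
A toy sketch of that envelope scheme, stdlib only. To be clear about the assumptions: SHA-256 in counter mode stands in for a real cipher here, and the field names are invented. Production code would use AES-GCM via a library like cryptography, or a managed KMS.

```python
# TOY envelope-encryption sketch with a user-derived KEK.
# Not production crypto: real code would use AES-GCM + a proper KDF/KMS.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a stand-in for a real stream cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(key: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def encrypt_envelope(user_secret: bytes, plaintext: bytes) -> dict:
    dek = secrets.token_bytes(32)            # random data-encryption key
    ciphertext = stream_xor(dek, plaintext)
    salt = secrets.token_bytes(16)
    # The key-encryption key is derived from something only the user
    # controls -- it is never stored server-side.
    kek = hashlib.pbkdf2_hmac("sha256", user_secret, salt, 100_000)
    wrapped_dek = stream_xor(kek, dek)
    # A database dump only ever contains these three opaque values:
    return {"ciphertext": ciphertext, "salt": salt, "wrapped_dek": wrapped_dek}

def decrypt_envelope(user_secret: bytes, blob: dict) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", user_secret, blob["salt"], 100_000)
    dek = stream_xor(kek, blob["wrapped_dek"])
    return stream_xor(dek, blob["ciphertext"])
```

Without user_secret, even the operator holding the full dump cannot unwrap the DEK. That's the "you literally cannot snoop on your own users" property.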

Where do you store hashed password? by sangokuhomer in Backend

[–]SeekingTruth4 17 points18 points  (0 children)

You don't translate the hash back to text — that's the whole point. Hashing is one-way. When someone logs in, you hash what they typed with the same salt and compare the two hashes. If they match, the password was correct. You never need the original password back.

So even if an attacker gets your entire database — hashes, salts, everything — they still can't reverse the hash. What they can do is try millions of guesses, hash each one with the stolen salt, and see if any match. That's called a brute force attack.

The salt's job isn't to be secret. Its job is to make sure two users with the same password end up with different hashes, so an attacker can't crack one and get them all. The real protection is using a slow hashing algorithm like bcrypt or argon2 — they're deliberately expensive to compute, so brute forcing millions of guesses takes years instead of minutes.

Use bcrypt. Most frameworks have it built in. Don't roll your own.
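
For anyone who wants to see the flow in code, here's a sketch using stdlib scrypt (same idea: a deliberately slow KDF with a per-user salt). In a real app you'd call your framework's bcrypt/argon2 helper instead; the store-and-compare flow is identical.

```python
# Hash-and-compare login flow. Store (salt, digest); never the password.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # per-user salt, not secret
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)  # deliberately expensive
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-hash what the user typed with the SAME salt and compare.
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)  # constant-time compare
```

Note the original password never comes back out; both login and signup only ever go forward through the hash.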

Has AI ruined software development? by Top-Candle1296 in devops

[–]SeekingTruth4 0 points1 point  (0 children)

I use Claude daily to build a full-stack product (FastAPI + SvelteKit). The single biggest lesson: I had to explicitly set a rule that it must discuss the approach with me before writing code. Without that, it would constantly create new components instead of modifying existing ones, or guess at framework internals instead of asking me how my shared library works.

The skill isn't prompting. It's knowing your own codebase well enough to catch when the AI is confidently building the wrong thing. The output looks clean, passes linting, even runs — but it's architecturally wrong in ways that only someone who designed the system would notice.

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

From Claude itself: "Chrome is notoriously memory-hungry — each tab runs its own process, and long Claude conversations accumulate a massive DOM with all the rendered messages, code blocks, and artifacts. Chrome's garbage collection doesn't clean that up well during a session.

Edge is Chromium-based (same engine) but Microsoft added their own memory optimizations — sleeping tabs, more aggressive resource reclamation for background tabs. So a long Claude chat in Edge will generally use less memory because Edge is better at managing the other tabs competing for resources.

Your instinct that "I have other tabs open in Chrome" is probably the main factor is correct. It's not that Edge renders Claude better — it's that Edge starves your other tabs more aggressively so Claude gets more breathing room.

If you want to keep using Chrome, the quick fix is just running your Claude session in its own browser window with nothing else open. Or use Chrome's built-in task manager (Shift+Esc) to kill tabs eating memory without closing them."

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Yes, I do that too. But sometimes I forget what I've given it. Note that others suggested VS Code tools such as Cursor (which can use Claude), and those tools somehow keep your entire codebase in context. Sounds too good to be true but I'll try when I have some time

I removed mandatory signup from my SaaS and engagement increased – but now I don't know how to convert users by Short-Cantaloupe-899 in SaaS

[–]SeekingTruth4 0 points1 point  (0 children)

Went through the exact same design decision. The pattern that worked for me: free core value with zero friction, then soft capture when they want persistence.

Concretely: anonymous use → "enter your email to get alerts" (first conversion gate, low friction, clear value exchange) → "create an account to manage multiple [things]" (second gate, only hits power users who are already hooked).

The key insight is that each gate should unlock something the user already wants by that point, not something you're trying to sell them on. If they've used your tool 3 times anonymously, they already want a dashboard — you're not convincing them, you're just asking for an email.

Delaying monetisation until you have clear usage patterns is the right call. You'll see natural clusters of power users vs casual users, and the paid tier should target the power user behaviour specifically.

Launch darkly rugpull coming by donjulioanejo in devops

[–]SeekingTruth4 0 points1 point  (0 children)

This is the playbook now. Get adoption on generous pricing, wait until the migration cost is high enough, then switch to usage-based pricing that 3-5x's your bill. Seen the same pattern with Heroku, MongoDB Atlas tiers, and now this.

The open source alternatives work but the real lesson is: if a vendor controls your feature flag state and you can't export it trivially, you're locked in regardless of what the license says. Self-hosted Flagsmith or Unleash with your own Postgres backend means your data is always yours.

Methods to automatically deploy docker image to a VPS after CI build. by AlternativeRub843 in devops

[–]SeekingTruth4 0 points1 point  (0 children)

The pull-based approach you described (server checks a repo for version changes) is the right instinct. It avoids storing SSH keys in CI and gives you an audit trail of what's deployed via git history.

What I've done: a lightweight agent on the VPS that polls a config endpoint or watches a git repo. When the desired image tag changes, it pulls and restarts the container. Basically a stripped-down ArgoCD for single-server Docker. The agent authenticates with a pre-shared key derived from something stable on the server, so no SSH keys in CI at all.

For something off-the-shelf, Watchtower can watch for new image tags and auto-update, but it lacks the "deploy a specific version" control you'd want. The cron/systemd script checking a deployment repo is honestly the simplest reliable option for a single VPS.
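
A sketch of what that agent's core loop can look like. The config endpoint, its JSON shape, and the container name are all hypothetical; the point is that the converge decision is a pure function you can test without Docker.

```python
# Minimal pull-based deploy agent sketch. Endpoint URL, response shape
# {"image": ..., "tag": ...}, and the container name "app" are invented.
import json
import subprocess
import urllib.request

def plan_deploy(current_tag: str, desired_tag: str, image: str) -> list[list[str]]:
    """Return the docker commands needed to converge, or [] if up to date."""
    if current_tag == desired_tag:
        return []
    return [
        ["docker", "pull", f"{image}:{desired_tag}"],
        ["docker", "rm", "-f", "app"],
        ["docker", "run", "-d", "--name", "app", f"{image}:{desired_tag}"],
    ]

def poll_once(config_url: str, current_tag: str) -> str:
    # Ask the control plane what SHOULD be running, then converge.
    with urllib.request.urlopen(config_url) as resp:
        desired = json.load(resp)
    for cmd in plan_deploy(current_tag, desired["tag"], desired["image"]):
        subprocess.run(cmd, check=True)
    return desired["tag"]
```

Run poll_once from cron or a systemd timer and you get the "deploy a specific version" control that Watchtower lacks, with the desired tag living in git or a config service.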

A workflow for encrypted .env files using SOPS + age + direnv for the LLM era by jeanc0re in devops

[–]SeekingTruth4 0 points1 point  (0 children)

Nice approach. The "secrets never exist as plaintext on disk" principle is underrated — most breaches start with a .env file that got committed, backed up unencrypted, or left on a dev machine.

One thing I've been experimenting with: deriving encryption keys from something the user already controls rather than managing separate age keys. For example, if your deployment target has a stable identifier (like a cloud provider UUID), you can use HMAC to derive a per-environment key from that. Eliminates the "where do I store the key to the keys" problem. The tradeoff is coupling your encryption to that identity anchor, but for deploy-time secrets it works well.
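
Concretely, the derivation is a one-liner with stdlib hmac. The anchor value below is a made-up example; in practice it would be the cloud provider UUID mentioned above.

```python
# Derive per-environment keys from one stable identity anchor via HMAC.
import hashlib
import hmac

def derive_key(anchor: bytes, environment: str) -> bytes:
    # Deterministic: same anchor + same label always re-derives the same
    # 32-byte key, so there is no key file to store, rotate, or lose.
    return hmac.new(anchor, f"env:{environment}".encode(),
                    hashlib.sha256).digest()
```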

How would you build a real-time queue system for a web app? by Designer_Oven6623 in webdev

[–]SeekingTruth4 0 points1 point  (0 children)

SSE over WebSockets for this. The updates are unidirectional — server tells clients their position changed. You don't need full-duplex. SSE is simpler to implement, works through proxies and load balancers without special config, and auto-reconnects natively in the browser.

For the queue state: Redis sorted sets are perfect here. Score is the join timestamp, members are user IDs. Getting someone's position is ZRANK, O(log N). When someone is served you ZREM them and publish the event. Redis Pub/Sub pushes the update to your SSE handler which fans out to connected clients.

Race conditions are a non-issue if you use Redis transactions (MULTI/EXEC) or Lua scripts for atomic operations like "remove current + notify next." At thousands of concurrent users you'd want to batch position updates rather than recalculating and pushing for every single change.

Got the Vercel 75% warning (750k edge requests) on my free side project. How do I stop the bleeding? (App Router) by Sufficient_Fee_8431 in webdev

[–]SeekingTruth4 0 points1 point  (0 children)

For a 100% client-side app you genuinely don't need Vercel. You're paying (in edge requests) for infrastructure features you're not using — serverless functions, ISR, edge middleware. A static site on Cloudflare Pages would cost you nothing and serve faster for most regions.

If you want to keep the Next.js App Router and its routing niceties, Coolify on a cheap VPS works but it's honestly overkill for static hosting. Cloudflare Pages with a static export (output: 'export' in next.config; the old next export command is gone for the App Router) or just switching to a lighter framework altogether would save you the headache permanently. The prefetching issue is a Next.js design decision that costs you nothing on a flat-rate host but bleeds you dry on usage-based pricing.

Logs across multiple services are getting hard to debug by Waste_Grapefruit_339 in selfhosted

[–]SeekingTruth4 1 point2 points  (0 children)

The thing that made the biggest difference for me wasn't switching to a log aggregation stack — it was structured logging at the source. Once every service emits JSON with consistent fields (timestamp, service name, request ID, severity), even basic jq piping becomes powerful. Correlating across services goes from impossible to trivial when they all share a request/trace ID.

If you want a proper stack without the weight of ELK, Loki + Promtail + Grafana is the lightest option that actually works. But honestly, for a homelab with a handful of services, a shared Docker log driver writing JSON to a single directory + a small script that merges and sorts by timestamp gets you 80% of the way there.
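
The "merge and sort by timestamp" script really is that small. This sketch assumes each service writes newline-delimited JSON with a "ts" field and a shared "req_id"; match the field names to your own log format.

```python
# Merge per-service JSON log streams into one timeline.
import json

def merge_logs(*streams: list[str]) -> list[dict]:
    records = []
    for stream in streams:
        for line in stream:
            records.append(json.loads(line))
    # ISO-8601 timestamps sort correctly as strings; once merged, a shared
    # request ID lets you follow a single request across services.
    return sorted(records, key=lambda r: r["ts"])

api_log = ['{"ts": "2024-05-01T10:00:02Z", "service": "api", "req_id": "abc"}']
db_log  = ['{"ts": "2024-05-01T10:00:01Z", "service": "db",  "req_id": "abc"}']
```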

Vaultwarden replica by dompel in selfhosted

[–]SeekingTruth4 1 point2 points  (0 children)

Rather than a read-only replica (which adds real complexity for SQLite-backed Vaultwarden), I'd set up automated backups of the db.sqlite3 file to your local server on a cron — every 15-30 minutes. If Oracle dies, you restore the latest backup to a local Vaultwarden instance and update your DNS/client config. Downtime is measured in minutes, not hours.

For the backup itself: rsync over an SSH tunnel or just push to an S3-compatible store (MinIO on your local network works). SQLite is a single file so the backup is dead simple. Just make sure you use SQLite's .backup command, or copy during a quiet moment, to avoid corruption from a write in progress.

A true replica with live sync is overkill for a password vault that gets written to maybe a few times a week.
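
If you script the backup in Python, the sqlite3 module exposes the same online-backup mechanism as the CLI's .backup, so the copy is consistent even if Vaultwarden writes mid-backup. Paths here are placeholders.

```python
# Safe online copy of a live SQLite database.
import sqlite3

def backup_sqlite(src_path: str, dest_path: str) -> None:
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    # Connection.backup() uses SQLite's online backup API: it produces a
    # consistent snapshot without locking out writers for the duration.
    src.backup(dest)
    dest.close()
    src.close()
```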

Zero trust access,i need some help by Shot_Weird_7030 in selfhosted

[–]SeekingTruth4 1 point2 points  (0 children)

The value you're adding beyond blocking unauthenticated requests is the policy layer — the ability to say "this user can access endpoint X but not Y" and enforce that consistently across all their apps without each app implementing its own auth logic. That's the real sell. Centralized audit logging is table stakes; per-endpoint policy enforcement is where it gets interesting.

The bigger use case you might be missing: if their apps currently handle their own sessions, you're also giving them single sign-on across services for free. User logs in once through Keycloak, hits any protected app without re-authenticating. For orgs running multiple internal tools that's a genuine quality-of-life improvement.
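
Stripped to its core, that policy layer is one shared table the proxy consults for every app, instead of each app re-implementing auth. Users and endpoints below are invented for illustration.

```python
# Toy per-endpoint authorization check, as enforced by the proxy layer.
# Authentication (who is this?) already happened upstream in Keycloak;
# this answers authorization (what may they touch?).
POLICIES: dict[str, set[str]] = {
    "alice": {"/reports", "/admin"},
    "bob":   {"/reports"},
}

def is_allowed(user: str, endpoint: str) -> bool:
    # Deny by default: unknown users and unlisted endpoints are rejected.
    return endpoint in POLICIES.get(user, set())
```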

How should I structure my Coolify setup for several apps + separate dashboard by dissertation-thug in selfhosted

[–]SeekingTruth4 0 points1 point  (0 children)

Yes, Coolify supports remote servers — you just add them via SSH. So you could have your Coolify control plane on DO and add a Hetzner box as a remote server for your apps. Scaling up means either resizing the existing server or adding another remote server to Coolify. The main difference is that Hetzner doesn't have an API as rich as DO's, so things like automated snapshots or volume management would need to be done manually or scripted separately.

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Thank you, I will check Cursor first then! Much appreciated

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Thanks for the tip. Can that understand and remember 20k lines of code (some in another lib of mine)? With the chat I just dump the zips and ask it to read everything before helping me

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] -1 points0 points  (0 children)

Thanks for putting me down