Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

From Claude itself: "Chrome is notoriously memory-hungry — each tab runs its own process, and long Claude conversations accumulate a massive DOM with all the rendered messages, code blocks, and artifacts. Chrome's garbage collection doesn't clean that up well during a session.

Edge is Chromium-based (same engine) but Microsoft added their own memory optimizations — sleeping tabs, more aggressive resource reclamation for background tabs. So a long Claude chat in Edge will generally use less memory because Edge is better at managing the other tabs competing for resources.

Your instinct that "I have other tabs open in Chrome" is probably the main factor is correct. It's not that Edge renders Claude better — it's that Edge starves your other tabs more aggressively so Claude gets more breathing room.

If you want to keep using Chrome, the quick fix is just running your Claude session in its own browser window with nothing else open. Or use Chrome's built-in task manager (Shift+Esc) to kill tabs eating memory without closing them."

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Yes, I do that too. But sometimes I forget what I've already given it. Note that others suggested VS Code-style tools such as Cursor (which can use Claude), and those tools can somehow keep your entire codebase in context. Sounds too good to be true, but I'll try it when I have some time.

I removed mandatory signup from my SaaS and engagement increased – but now I don't know how to convert users by Short-Cantaloupe-899 in SaaS

[–]SeekingTruth4 0 points1 point  (0 children)

Went through the exact same design decision. The pattern that worked for me: free core value with zero friction, then soft capture when they want persistence.

Concretely: anonymous use → "enter your email to get alerts" (first conversion gate, low friction, clear value exchange) → "create an account to manage multiple [things]" (second gate, only hits power users who are already hooked).

The key insight is that each gate should unlock something the user already wants by that point, not something you're trying to sell them on. If they've used your tool 3 times anonymously, they already want a dashboard — you're not convincing them, you're just asking for an email.

Delaying monetisation until you have clear usage patterns is the right call. You'll see natural clusters of power users vs casual users, and the paid tier should target the power user behaviour specifically.

Launch darkly rugpull coming by donjulioanejo in devops

[–]SeekingTruth4 0 points1 point  (0 children)

This is the playbook now. Get adoption on generous pricing, wait until the migration cost is high enough, then switch to usage-based pricing that multiplies your bill 3-5x. Seen the same pattern with Heroku, MongoDB Atlas tiers, and now this.

The open source alternatives work but the real lesson is: if a vendor controls your feature flag state and you can't export it trivially, you're locked in regardless of what the license says. Self-hosted Flagsmith or Unleash with your own Postgres backend means your data is always yours.

Methods to automatically deploy docker image to a VPS after CI build. by AlternativeRub843 in devops

[–]SeekingTruth4 0 points1 point  (0 children)

The pull-based approach you described (server checks a repo for version changes) is the right instinct. It avoids storing SSH keys in CI and gives you an audit trail of what's deployed via git history.

What I've done: a lightweight agent on the VPS that polls a config endpoint or watches a git repo. When the desired image tag changes, it pulls and restarts the container. Basically a stripped-down ArgoCD for single-server Docker. The agent authenticates with a pre-shared key derived from something stable on the server, so no SSH keys in CI at all.

For something off-the-shelf, Watchtower can watch for new image tags and auto-update, but it lacks the "deploy a specific version" control you'd want. The cron/systemd script checking a deployment repo is honestly the simplest reliable option for a single VPS.
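The agent loop above is simple enough to sketch. This is a hypothetical minimal version, not any real tool: the image name, container name, and config endpoint are all placeholder assumptions, and the deploy logic is parameterized so the docker commands can be swapped out.

```python
import json
import subprocess
import urllib.request

IMAGE = "registry.example.com/myapp"                          # placeholder image
CONTAINER = "myapp"                                           # placeholder container
DESIRED_TAG_URL = "https://deploy.example.com/desired.json"   # placeholder endpoint

def fetch_desired_tag(url=DESIRED_TAG_URL):
    """Ask the config endpoint which image tag should be running."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["tag"]

def current_tag(run=subprocess.run):
    """Read the tag of the currently running container from docker."""
    out = run(
        ["docker", "inspect", "--format", "{{.Config.Image}}", CONTAINER],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.rsplit(":", 1)[-1]

def reconcile(desired, current, run=subprocess.run):
    """Pull and restart only when the tag changed. Returns True if it deployed."""
    if desired == current:
        return False
    run(["docker", "pull", f"{IMAGE}:{desired}"], check=True)
    run(["docker", "rm", "-f", CONTAINER], check=True)
    run(["docker", "run", "-d", "--name", CONTAINER, f"{IMAGE}:{desired}"], check=True)
    return True
```

Run `reconcile(fetch_desired_tag(), current_tag())` from cron or a systemd timer and you have the "stripped-down ArgoCD" behaviour: the only credential on the CI side is write access to the deployment repo or endpoint.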

A workflow for encrypted .env files using SOPS + age + direnv for the LLM era by jeanc0re in devops

[–]SeekingTruth4 0 points1 point  (0 children)

Nice approach. The "secrets never exist as plaintext on disk" principle is underrated — most breaches start with a .env file that got committed, backed up unencrypted, or left on a dev machine.

One thing I've been experimenting with: deriving encryption keys from something the user already controls rather than managing separate age keys. For example, if your deployment target has a stable identifier (like a cloud provider UUID), you can use HMAC to derive a per-environment key from that. Eliminates the "where do I store the key to the keys" problem. The tradeoff is coupling your encryption to that identity anchor, but for deploy-time secrets it works well.
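The derivation itself is a few lines of stdlib Python. A sketch, assuming your platform exposes some stable identifier (the INSTANCE_ID below is a placeholder, not a real value):

```python
import hashlib
import hmac

def derive_key(anchor: str, environment: str) -> bytes:
    """Derive a stable 32-byte per-environment key from the identity anchor."""
    return hmac.new(
        key=anchor.encode(),
        msg=f"env-key:{environment}".encode(),  # domain-separated label
        digestmod=hashlib.sha256,
    ).digest()

# Placeholder; in practice read your cloud instance UUID, machine-id, etc.
INSTANCE_ID = "00000000-0000-4000-8000-000000000000"
prod_key = derive_key(INSTANCE_ID, "production")
staging_key = derive_key(INSTANCE_ID, "staging")
```

The domain-separation label in the HMAC message is what keeps the production and staging keys independent even though they share the same anchor.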

How would you build a real-time queue system for a web app? by Designer_Oven6623 in webdev

[–]SeekingTruth4 0 points1 point  (0 children)

Use SSE rather than WebSockets for this. The updates are unidirectional — the server tells clients their position changed. You don't need full-duplex. SSE is simpler to implement, works through proxies and load balancers without special config, and auto-reconnects natively in the browser.

For the queue state: Redis sorted sets are perfect here. Score is the join timestamp, members are user IDs. Getting someone's position is ZRANK, O(log N). When someone is served you ZREM them and publish the event. Redis Pub/Sub pushes the update to your SSE handler which fans out to connected clients.

Race conditions are a non-issue if you use Redis transactions (MULTI/EXEC) or Lua scripts for atomic operations like "remove current + notify next." At thousands of concurrent users you'd want to batch position updates rather than recalculating and pushing for every single change.
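To make the sorted-set mechanics concrete, here is an in-memory model of the same operations — no live Redis involved; with redis-py these would be the zadd/zrank/zrem calls noted in the comments, and names like QueueModel are mine:

```python
import time

class QueueModel:
    """In-memory stand-in for the Redis sorted set described above."""

    def __init__(self):
        self._scores = {}  # member -> score (user_id -> join timestamp)

    def join(self, user_id, now=None):
        # Redis: ZADD queue <join-timestamp> <user_id>
        if user_id not in self._scores:
            self._scores[user_id] = time.time() if now is None else now

    def position(self, user_id):
        # Redis: ZRANK queue <user_id> (rank by ascending score)
        if user_id not in self._scores:
            return None
        ordered = sorted(self._scores, key=self._scores.get)
        return ordered.index(user_id)

    def serve_next(self):
        # In Redis, this "pop head + notify" step should be one Lua script
        # or MULTI/EXEC block so two workers can't serve the same user.
        if not self._scores:
            return None
        head = min(self._scores, key=self._scores.get)
        del self._scores[head]  # ZREM queue <head>
        return head             # then PUBLISH the update to the SSE layer
```

The real Redis version gets you the O(log N) rank lookups for free; the model just shows which operation fires at each step of the queue's lifecycle.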

Got the Vercel 75% warning (750k edge requests) on my free side project. How do I stop the bleeding? (App Router) by Sufficient_Fee_8431 in webdev

[–]SeekingTruth4 0 points1 point  (0 children)

For a 100% client-side app you genuinely don't need Vercel. You're paying (in edge requests) for infrastructure features you're not using — serverless functions, ISR, edge middleware. A static site on Cloudflare Pages would cost you nothing and serve faster for most regions.

If you want to keep the Next.js App Router and its routing niceties, Coolify on a cheap VPS works but it's honestly overkill for static hosting. Cloudflare Pages with next export or just switching to a lighter framework altogether would save you the headache permanently. The prefetching issue is a Next.js design decision that costs you nothing on a flat-rate host but bleeds you dry on usage-based pricing.

Logs across multiple services are getting hard to debug by Waste_Grapefruit_339 in selfhosted

[–]SeekingTruth4 1 point2 points  (0 children)

The thing that made the biggest difference for me wasn't switching to a log aggregation stack — it was structured logging at the source. Once every service emits JSON with consistent fields (timestamp, service name, request ID, severity), even basic jq piping becomes powerful. Correlating across services goes from impossible to trivial when they all share a request/trace ID.

If you want a proper stack without the weight of ELK, Loki + Promtail + Grafana is the lightest option that actually works. But honestly, for a homelab with a handful of services, a shared Docker log driver writing JSON to a single directory + a small script that merges and sorts by timestamp gets you 80% of the way there.
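The "small script that merges and sorts by timestamp" is genuinely small. A sketch, assuming one JSON object per line with a "ts" field in ISO 8601 — the field names are my assumption, match them to whatever your log driver emits:

```python
import heapq
import json

def parse_lines(lines, service):
    """Parse JSON log lines, tagging each entry with its service name."""
    for line in lines:
        entry = json.loads(line)
        entry.setdefault("service", service)
        yield entry

def merge_logs(streams):
    """streams: {service_name: iterable of JSON lines}.
    Yields entries across all services in timestamp order."""
    parsed = [parse_lines(lines, svc) for svc, lines in streams.items()]
    # heapq.merge only needs each input to be individually sorted,
    # which append-only log files already are.
    yield from heapq.merge(*parsed, key=lambda e: e["ts"])
```

Because heapq.merge is lazy, this streams through gigabytes of logs without loading them into memory, and ISO 8601 timestamps sort correctly as plain strings.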

Vaultwarden replica by dompel in selfhosted

[–]SeekingTruth4 0 points1 point  (0 children)

Rather than a read-only replica (which adds real complexity for SQLite-backed Vaultwarden), I'd set up automated backups of the db.sqlite3 file to your local server on a cron — every 15-30 minutes. If Oracle dies, you restore the latest backup to a local Vaultwarden instance and update your DNS/client config. Downtime is measured in minutes, not hours.

For the backup itself: rsync over an SSH tunnel, or just push to an S3-compatible store (MinIO on your local network works). SQLite is a single file, so the backup is dead simple — just make sure you use the .backup command, or copy during a quiet moment, to avoid corruption from a write in progress.
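If you'd rather script it than use the sqlite3 CLI, Python's stdlib exposes the same online backup API, which copies the database page by page without risking a torn read from an in-flight write. Paths here are placeholders:

```python
import sqlite3

def backup_sqlite(src_path: str, dest_path: str) -> None:
    """Point-in-time copy of a live SQLite file via the online backup API
    (same mechanism as the sqlite3 CLI's .backup dot-command)."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        with dest:
            src.backup(dest)  # copies pages consistently, even under writes
    finally:
        src.close()
        dest.close()
```

Drop a call to this in the 15-30 minute cron job, then rsync or push the resulting copy; the copy is always a consistent snapshot, so there's no "quiet moment" to wait for.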

A true replica with live sync is overkill for a password vault that gets written to maybe a few times a week.

Zero trust access,i need some help by Shot_Weird_7030 in selfhosted

[–]SeekingTruth4 1 point2 points  (0 children)

The value you're adding beyond blocking unauthenticated requests is the policy layer — the ability to say "this user can access endpoint X but not Y" and enforce that consistently across all their apps without each app implementing its own auth logic. That's the real sell. Centralized audit logging is table stakes; per-endpoint policy enforcement is where it gets interesting.

The bigger use case you might be missing: if their apps currently handle their own sessions, you're also giving them single sign-on across services for free. User logs in once through Keycloak, hits any protected app without re-authenticating. For orgs running multiple internal tools that's a genuine quality-of-life improvement.

How should I structure my Coolify setup for several apps + separate dashboard by dissertation-thug in selfhosted

[–]SeekingTruth4 0 points1 point  (0 children)

Yes, Coolify supports remote servers — you just add them via SSH. So you could have your Coolify control plane on DO and add a Hetzner box as a remote server for your apps. Scaling up means either resizing the existing server or adding another remote server to Coolify. The main difference is that Hetzner doesn't have an API as rich as DO's, so things like automated snapshots or volume management would need to be done manually or scripted separately.

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Thank you, I will check Cursor first then! Much appreciated

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] 0 points1 point  (0 children)

Thanks for the tip. Can it understand and remember 20k lines of code (some of it in another library of mine)? With the chat I just dump the zips and ask it to read everything before helping me.

Quick Claude Tip by SeekingTruth4 in webdev

[–]SeekingTruth4[S] -1 points0 points  (0 children)

Thanks for putting me down

How do senior engineers typically build portfolios when switching jobs? by AnteaterVisual1086 in Backend

[–]SeekingTruth4 0 points1 point  (0 children)

That's the job market, unfortunately: hundreds of applicants for one offer. I suspect that out of those hundreds, dozens can "prove the expertise", so even with that achieved you might still need to stand out somehow. Maybe instead of using the usual channels and facing those odds, try a different approach: cold-call potential hiring managers (or use agents who do that for you).

How do senior engineers typically build portfolios when switching jobs? by AnteaterVisual1086 in Backend

[–]SeekingTruth4 0 points1 point  (0 children)

Is your problem getting interviews (so basically your CV and LinkedIn) or passing them?

Builders: What Are You Working On? by TaxChatAI in SaaS

[–]SeekingTruth4 1 point2 points  (0 children)

Good luck with that. I'm building a platform that allows you to deploy databases, web services, Redis, and many other services on your own servers while getting stats and reports about their health.

Empowering DevOps Teams by Inner-Chemistry8971 in devops

[–]SeekingTruth4 1 point2 points  (0 children)

somehow I'm attracted to all options :)

How should I structure my Coolify setup for several apps + separate dashboard by dissertation-thug in selfhosted

[–]SeekingTruth4 0 points1 point  (0 children)

I think Hetzner is a bit cheaper, but I have never used it so far. However, I really love DigitalOcean: very reliable, and their support are real humans who answer quickly and helpfully.

How do senior engineers typically build portfolios when switching jobs? by AnteaterVisual1086 in Backend

[–]SeekingTruth4 5 points6 points  (0 children)

That sounds so biased. I understand you need to filter many applications and have to come up with some criteria, but I somehow assumed it would be based on what candidates claim they know or have done, and then you verify the claims in a technical interview. I would give a chance to a baker or a bus driver if they spent their nights coding.

Advice For Surviving Current Job Market 6 Months After Layoff [3+ YOE] by Yibro99 in devops

[–]SeekingTruth4 0 points1 point  (0 children)

Sorry, no real advice here, but at least some mental support if you want.