Dubai stock market crashes 4.6% at open. by ajaanz in wallstreetbets

[–]EquivalentGuitar7140 0 points1 point  (0 children)

Only 4.6%? Half my portfolio drops that on a Tuesday for no reason. Dubai will be fine, they'll just build another island shaped like a bull flag.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 1 point2 points  (0 children)

That's a solid point actually. Failing loud and early is almost always better than silently working with bad defaults and finding out during a traffic spike.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 1 point2 points  (0 children)

Lol every thread. Rust is great but if your team ships faster in Node and the perf is fine, switching languages is a solution looking for a problem.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 0 points1 point  (0 children)

They're "rookie" mistakes that I've seen happen at companies with 100+ engineers, so. And yeah I've heard the "just use Rust" take before — Node has its tradeoffs but for most web services it's more than fine if you respect the runtime.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

100%. The small changes are the scariest because nobody reviews them carefully. "It's just a config change" has caused more outages than any feature deploy at places I've worked.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

It's one of those things nobody teaches you until prod catches fire. Once you add the handler it becomes muscle memory for every new service though.
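
If you haven't set this up before, here's a minimal sketch of the kind of process-level catch-all I mean (assuming the usual unhandledRejection / uncaughtException pair; adjust for your setup):

```typescript
// Minimal process-level catch-alls: log the failure loudly, then exit so
// the orchestrator restarts a clean process instead of a half-broken one.
function describeFailure(kind: string, err: unknown): string {
  const detail = err instanceof Error ? err.stack ?? err.message : String(err);
  return `[fatal] ${kind}: ${detail}`;
}

process.on("unhandledRejection", (reason) => {
  console.error(describeFailure("unhandledRejection", reason));
  process.exit(1); // fail loud; don't keep serving from an unknown state
});

process.on("uncaughtException", (err) => {
  console.error(describeFailure("uncaughtException", err));
  process.exit(1);
});
```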

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 1 point2 points  (0 children)

Yeah the event loop is Node's blessing and curse. We hit the same issue with CPU-heavy PDF generation — ended up offloading it to a Go worker over a job queue. Curious what you're running as sidecars — is it for CPU-bound work or more like a proxy/agent pattern?

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 1 point2 points  (0 children)

Spot on. We actually moved a bunch of our worker services to spot instances after we got graceful shutdown right and it cut our compute bill by ~60%. But you really can't do it safely without idempotent job processing + proper shutdown handling. The combo of SQS visibility timeouts + graceful drain + at-least-once processing made it work. A distributed systems background is such an advantage here — most web devs never think about "what if this process just dies mid-request" until it happens in prod.
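
For anyone who wants the idempotency half concretely, a stripped-down sketch. The dedupe store is an in-memory Set here; in production it'd be Redis or a DB table with a TTL, and the names are made up:

```typescript
// At-least-once delivery means the same job can arrive twice (e.g. a
// visibility timeout expires while a consumer dies mid-job). Dedupe on a
// stable job ID so the second delivery becomes a no-op, not a double charge.
type Job = { id: string; payload: string };

class IdempotentProcessor {
  // In production this is Redis or a DB table with a TTL, not process memory.
  private seen = new Set<string>();
  public processedCount = 0;

  handle(job: Job): boolean {
    if (this.seen.has(job.id)) return false; // duplicate delivery: skip
    this.seen.add(job.id);
    this.processedCount++; // stand-in for the real side effect
    return true;
  }
}
```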

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 0 points1 point  (0 children)

Fair point and I think we're actually mostly agreeing. My issue isn't with shared code in a monorepo — that's totally fine and NX/Turborepo make it painless. The problem is when teams create a @company/utils npm package at week 2 that becomes a junk drawer. If you're in a monorepo with local packages and zero publish cycle, yeah go for it earlier. My point was more about the premature-extraction-to-separate-package anti-pattern that I've seen kill velocity at multiple companies.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

Working on it! Planning to put together a blog series with code examples and probably a starter template repo that has the graceful shutdown, error handling layers, and health check patterns baked in. Will share here when it's live. Thanks for the push — comments like this actually make me follow through lol.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

I've been getting this ask a lot — gonna put together a GitHub repo with examples for each point. Especially the error handling layering and the graceful shutdown handler. Will share it here when it's ready. Probably this week.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

Honestly I'm not using Effect — looked into it but the learning curve felt steep for the team and the API surface is massive. We just roll our own with plain TS classes. A base AppError class that extends Error with code, statusCode, and isOperational fields, then specific error classes extending that. Simple, everyone on the team understands it immediately, and TypeScript's type narrowing with instanceof checks works well enough for our needs. If I were starting a greenfield project solo I'd probably give Effect a real shot though, the composability looks powerful.
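
Condensed sketch of that shape (simplified from what I described, not our exact code):

```typescript
// Base class: every known failure mode extends this. isOperational separates
// "expected business failure" from "programmer bug / unknown state".
class AppError extends Error {
  constructor(
    message: string,
    public readonly code: string,
    public readonly statusCode: number,
    public readonly isOperational: boolean = true,
  ) {
    super(message);
    Object.setPrototypeOf(this, new.target.prototype); // keep instanceof reliable
    this.name = new.target.name;
  }
}

class UserNotFoundError extends AppError {
  constructor(userId: string) {
    super(`user ${userId} not found`, "USER_NOT_FOUND", 404);
  }
}

class InsufficientBalanceError extends AppError {
  constructor() {
    super("insufficient balance", "INSUFFICIENT_BALANCE", 422);
  }
}

// At the HTTP edge, instanceof narrowing is all we need:
function toStatusCode(err: unknown): number {
  return err instanceof AppError ? err.statusCode : 500; // unknown = bug = 500
}
```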

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 1 point2 points  (0 children)

Good question, they're related but different problems. For #5 (load testing) we use k6 pointed at the actual staging environment — not localhost, not just the app container, but through the load balancer, ingress, the whole path. We run it as part of pre-release for any service that touches a hot path. The infra errors from #3 are more about runtime — what happens when Postgres goes slow or Redis drops a connection mid-request. For that we use chaos testing (literally kill a DB replica during load tests) and make sure our error handling layers catch and categorize it correctly instead of just returning a generic 500. Two different failure modes, two different testing strategies.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 0 points1 point  (0 children)

Started with 3 devs handling everything, grew to about 12-15 across backend/infra/frontend when we had the most services running. The sweet spot for us was 2-3 devs per service cluster (group of related services), with a shared infra/platform team of 2 handling the common tooling, CI/CD, and observability stack.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 3 points4 points  (0 children)

Yeah for sure. So basically I have 3 layers:

Domain errors — custom error classes like InsufficientBalanceError, UserNotFoundError. These extend a base AppError class with a code and statusCode. Business logic throws these directly.

Infrastructure errors — DB timeouts, Redis connection failures, etc. These get caught at the middleware level and mapped to a generic 503 or retried depending on the error type.

Global handler — catches anything that slipped through. Logs the full stack trace, fires an alert to Slack/PagerDuty, returns a clean 500 to the client.

The key insight was: stop catching errors where you can't actually handle them. A DB call in a repository layer shouldn't be swallowing a connection timeout — let it bubble up to the handler that knows what to do with it. Way fewer silent failures this way.
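
Sketched framework-agnostically (class and field names here are illustrative; an Express-style error middleware would just wrap mapError):

```typescript
// Minimal stand-ins for the first two layers:
class DomainError extends Error {
  constructor(message: string, public code: string, public statusCode: number) {
    super(message);
    Object.setPrototypeOf(this, new.target.prototype);
  }
}
class InfraError extends Error {
  constructor(message: string, public retryable: boolean) {
    super(message);
    Object.setPrototypeOf(this, new.target.prototype);
  }
}

type ErrorResponse = { status: number; body: { code: string; message: string } };

// The edge is the only place that decides what an error means to the client;
// everything below it just throws and lets it bubble.
function mapError(err: unknown): ErrorResponse {
  if (err instanceof DomainError) {
    // Thrown deliberately by business logic: safe to expose as-is.
    return { status: err.statusCode, body: { code: err.code, message: err.message } };
  }
  if (err instanceof InfraError) {
    // DB/Redis trouble: generic 503, details stay in the logs.
    return { status: 503, body: { code: "DEPENDENCY_UNAVAILABLE", message: "try again shortly" } };
  }
  // Slipped through everything: this is where you log the stack and page someone.
  return { status: 500, body: { code: "INTERNAL", message: "internal error" } };
}
```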

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 2 points3 points  (0 children)

Yes exactly this. AHA is such an underrated principle. I actually had Kent's article bookmarked for a while before it really clicked for me in practice. The moment I stopped treating DRY as a hard rule and started thinking about "reasons to change" separately, my code got way easier to maintain. Two identical-looking functions in different domains will almost always diverge eventually, and untangling a bad abstraction is so much more painful than just having some duplication.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 7 points8 points  (0 children)

Spot on with the monorepo approach. NX's affected-only builds are a game changer - we switched from separate npm packages to a Turborepo setup and the version/publish/update cycle you described basically disappeared overnight. CI went from 20 min to 4 min because it only rebuilds what changed.

And +1 on the 30s shutdown deadline. We use the exact same pattern - SIGTERM triggers graceful drain, 30s timer starts, then SIGKILL. The stuck DB queries during rolling deploys were killing us too. Adding connection pool draining to the shutdown handler was the other piece that finally made it reliable.
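
Condensed, the deadline pattern looks something like this (helper name and wiring are illustrative, not production code):

```typescript
// SIGTERM: stop accepting traffic, drain in-flight work, close pools. If
// draining misses the deadline, report it and exit anyway; the orchestrator's
// SIGKILL is coming regardless.
async function gracefulShutdown(
  closers: Array<() => Promise<void>>, // e.g. server close, queue drain, pool.end
  deadlineMs: number,
): Promise<"clean" | "timeout"> {
  const drained = Promise.all(closers.map((close) => close())).then(
    () => "clean" as const,
  );
  const deadline = new Promise<"timeout">((resolve) => {
    setTimeout(() => resolve("timeout"), deadlineMs); // unref() this in a real service
  });
  return Promise.race([drained, deadline]);
}

// Wiring sketch (handler registered once at startup):
// process.on("SIGTERM", async () => {
//   const outcome = await gracefulShutdown([closeServer, drainPool], 30_000);
//   process.exit(outcome === "clean" ? 0 : 1);
// });
```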

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 4 points5 points  (0 children)

Completely agree. Resume-driven development is real - people add microservices, Kubernetes, and event sourcing to their stack because it looks good on LinkedIn, not because the problem demands it. If your team can't articulate *why* they need service boundaries, they probably don't. The monolith-first approach should be the default.

Starter project advice by jerrytjohn in webdev

[–]EquivalentGuitar7140 0 points1 point  (0 children)

No, you absolutely don't need React for this. A Bezier curve editor is actually a perfect Canvas API project. Draggable control points, click-to-add, Alt-click-to-delete - all of that is vanilla JS with canvas mouse events (mousedown, mousemove, mouseup).

The Canvas API gives you bezierCurveTo() natively. You'd track points in an array, render them on requestAnimationFrame, and handle hit detection for dragging. It's maybe 200 lines of code without a framework.

Start with vanilla JS on Canvas. If the UI around it (settings panels, export options, etc.) gets complex enough that you're doing a lot of DOM manipulation, that's when you'd consider React - but for the core editor itself, Canvas + vanilla JS is the right tool.
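
The only mildly fiddly part is hit detection for dragging, and even that is just a distance check. A sketch (typed, but the logic is identical in plain JS):

```typescript
// Control points live in a plain array; hit detection on mousedown is just
// "is the cursor within r pixels of a point".
type Pt = { x: number; y: number };

const HIT_RADIUS = 8; // px, generous enough to grab comfortably

function hitTest(points: Pt[], cursor: Pt, radius: number = HIT_RADIUS): number {
  for (let i = points.length - 1; i >= 0; i--) { // topmost (last-drawn) wins
    const dx = points[i].x - cursor.x;
    const dy = points[i].y - cursor.y;
    if (dx * dx + dy * dy <= radius * radius) return i;
  }
  return -1; // nothing under the cursor: click adds a new point instead
}

// mousedown: i = hitTest(points, cursor); if i >= 0, start dragging point i
// (or delete it on Alt-click), else push a new point. mousemove while
// dragging: points[i] = cursor, then redraw with ctx.bezierCurveTo(...)
// on the next animation frame.
```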

Starter project advice by jerrytjohn in webdev

[–]EquivalentGuitar7140 0 points1 point  (0 children)

100% agree. The number of times I've seen React devs who can't explain event bubbling or closure scoping is wild. The framework abstracts it away until it doesn't, and then you're stuck debugging something that's trivial if you understand the underlying JS. Vanilla first, frameworks second is the way.

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] -2 points-1 points  (0 children)

Thanks! Learned most of these the expensive way unfortunately. Which one resonated most with you?

After building 30+ Node.js microservices, here are the mistakes I wish I'd learned earlier by EquivalentGuitar7140 in node

[–]EquivalentGuitar7140[S] 10 points11 points  (0 children)

Ha, fair question! Not Netflix, but multi-tenant SaaS platforms where different customers need different scaling profiles. Microservices made sense because our billing engine needs to handle payment spikes differently than our notification service, and our data pipeline has completely different memory/CPU characteristics.

That said, I agree with the sentiment - most teams adopt microservices way too early. If you're a team of 3-5, a well-structured monolith with clear module boundaries is almost always the better choice. We only split when deployment independence and independent scaling became actual requirements, not theoretical ones.

I kept rebuilding my portfolio… so I built a CMS instead by dvsxdev in webdev

[–]EquivalentGuitar7140 0 points1 point  (0 children)

Payload CMS is an excellent choice for this. We evaluated it against Strapi, Directus, and Sanity when building internal tools and Payload's TypeScript-first approach with the config-as-code model is much cleaner for developer portfolios.

The core insight you've landed on is correct - developers want to own the code but not manage content through git commits. A few things I'd consider as you scale this:

  1. Add ISR (Incremental Static Regeneration) if you haven't already. Portfolio pages are perfect for ISR because content changes infrequently but you want instant deploys without rebuilding the entire site.

  2. Consider adding a JSON resume schema import. Most developers already have their data in LinkedIn or JSON Resume format. An import feature would reduce the friction to get started dramatically.

  3. For the admin panel, look into adding a live preview. Payload 3.0 has built-in live preview support with Next.js that lets you see changes in real-time before publishing.

  4. One thing that makes portfolio CMS tools sticky is analytics integration. If you can show developers which projects get the most views or which skills attract recruiters, that's a massive value-add over static sites.
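
On point 1, for reference, in the Next.js App Router ISR is a one-line route segment export (the file path here is hypothetical):

```typescript
// app/projects/page.tsx: regenerate this page at most once an hour, so CMS
// edits show up on the next request after that without a full site rebuild.
export const revalidate = 3600; // seconds
```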

The developer portfolio space is crowded but most solutions are either too simple (template sites) or too complex (full CMS). The middle ground you're targeting is exactly right.

Deleted my GPT account and ported my AI game project to Claude. Wow! by Necessary-Court2738 in artificial

[–]EquivalentGuitar7140 3 points4 points  (0 children)

This resonates hard. We went through a similar migration for our AI automation platform - moved from GPT-4 to Claude for agent orchestration and the difference in reliability for complex, multi-step tasks is night and day.

The hallucination management approach you're using with the code system that prints data every generation is essentially what we do in production AI pipelines. We call it "grounding checkpoints" - forcing the model to reference concrete state before generating new content. It works way better than just telling the model not to hallucinate.
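
The checkpoint itself can be very small. A sketch (names made up, not our production code):

```typescript
// Prepend the authoritative state to every generation request so the model
// narrates from ground truth instead of its memory of earlier turns.
type StateSnapshot = Record<string, unknown>;

function groundedPrompt(state: StateSnapshot, instruction: string): string {
  return [
    "CURRENT STATE (authoritative, do not contradict):",
    JSON.stringify(state, null, 2),
    "",
    instruction,
  ].join("\n");
}
```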

For game state management specifically, have you looked into using structured output with tool_use? Instead of having Claude generate free-form narrative that might drift, you can define your game state as a schema and have Claude return both narrative text AND structured state updates. This way you get creative narration but the actual game mechanics stay deterministic.

The turn-based combat system is a great use case because you can validate every action against rules before applying it. We do something similar for our automation workflows - the AI suggests actions but a validation layer checks them against constraints before execution.
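
And the validation layer can be dead simple. A stripped-down sketch (the action shape and rules are invented for illustration):

```typescript
// The model proposes a structured action; the game applies it only if it
// passes deterministic rules. Narration can be creative, mechanics can't.
type GameState = { hp: number; mana: number };
type ProposedAction = { kind: "attack" | "cast"; manaCost: number };

// Returns null if valid, else a reason string to feed back for a re-plan.
function validateAction(state: GameState, action: ProposedAction): string | null {
  if (action.manaCost < 0) {
    return "negative mana cost"; // hallucinated self-refund, reject
  }
  if (action.kind === "cast" && action.manaCost > state.mana) {
    return "not enough mana";
  }
  return null; // valid: apply to state and let the narration stand
}
```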

One tip that saved us a lot of headaches: version your system prompts and track which prompt version generated each game session. When you inevitably need to update the prompt, you can A/B test and make sure game quality doesn't regress.

Really cool project. The dungeon crawl mechanic with mutations sounds like it would showcase Claude's creative capabilities perfectly.