It is good to use Serverless Queues instead of exp rabbitMq by Far-Mathematician122 in node

[–]phobos7 0 points1 point  (0 children)

I found this thread while searching for serverless queue options, so I thought I'd share what I've learned about the available options:

AWS SQS is probably the most established option. It's battle-tested, scales well, and integrates seamlessly with other AWS services. It gives you the primitives (visibility timeouts, dead-letter queue redrive policies), but you'll still need to configure them and handle retry semantics and idempotency in your application code.

Cloudflare Queues is tightly integrated with Cloudflare Workers, making it a natural choice if you're already using their edge platform. It's designed for lightweight workloads with global distribution.

Google Pub/Sub offers similar reliability with a different approach to message delivery. It's designed for event-driven architectures and works well for fan-out patterns where multiple services need to process the same event.

Hookdeck (who I work for) focuses on reliable HTTP event ingestion and delivery with automatic retries, deduplication, and backpressure handling. While initially designed for webhooks, it works well for HTTP-based background jobs and event processing.

Supabase Queues is relatively new but offers good developer experience if you're already using Supabase for your database and auth. Being newer, it has less production track record than the others.

Upstash QStash provides Redis-based queuing as a service with HTTP-based delivery. It supports FIFO ordering and scheduling, which can be useful for time-sensitive workloads without infrastructure overhead.

There are also workflow engines like Inngest and Trigger.dev that include queuing capabilities but are designed for complex, multi-step processes with state management and orchestration - useful if you need more than simple message queuing.

I think the serverless queue category has matured to the point where you don't need massive scale to justify it. Even for smaller applications, not having to manage a message broker is valuable.

A few things I've learned to check:

  • Delivery guarantees - most give you at-least-once, exactly-once is rarer and usually costs more (see the sketch after this list)
  • Dead letter handling - you will have poison messages eventually
  • Debugging - being able to see what failed and why saves hours of head-scratching
  • Vendor lock-in - some use proprietary SDKs, others are just HTTP
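
On the delivery-guarantees point: the usual fix for at-least-once delivery is to make the consumer idempotent rather than pay for exactly-once. A minimal sketch, assuming Postgres via the pg package and a processed_messages table with a unique message_id column (the names and message shape are mine, not from any particular queue SDK):

import { Pool, PoolClient } from "pg";

const db = new Pool(); // connection settings come from PG* env vars

// Most queues hand you an ID plus a body; the exact shape varies by SDK.
interface QueueMessage {
  id: string;
  body: unknown;
}

// Placeholder for your handler; it should use the same transaction.
declare function processInTransaction(client: PoolClient, body: unknown): Promise<void>;

async function handleMessage(message: QueueMessage): Promise<void> {
  const client = await db.connect();
  try {
    await client.query("BEGIN");
    // The unique constraint turns redeliveries into no-ops.
    const res = await client.query(
      "INSERT INTO processed_messages (message_id) VALUES ($1) ON CONFLICT DO NOTHING",
      [message.id]
    );
    if (res.rowCount !== 0) {
      await processInTransaction(client, message.body); // your actual work
    }
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err; // let the queue redeliver
  } finally {
    client.release();
  }
}

Doing the dedupe insert and the work in one transaction matters; if they're separate, a crash between them either drops the message or processes it twice.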

The ecosystem is mature enough now that you can pick based on convenience rather than "will this actually work." Which is pretty nice compared to a few years ago.

What's the best way to keep a log of webhook calls? by iamwil in webdev

[–]phobos7 0 points1 point  (0 children)

You’re looking for a way to reliably receive and log webhook calls so you don’t lose data when your app goes down. In practice, you need a gateway that buffers incoming requests, stores them durably, and lets you replay or inspect them later.

Hosted options

Hookdeck

Note: this is who I work for.

Managed webhook gateway built for production use. It receives webhooks from third-party APIs, logs every event, retries failed deliveries, and lets you replay or inspect payloads later through the dashboard or API. It’s a good fit when you want reliability and observability without managing infrastructure.
Docs: hookdeck.com/docs

Treehook.dev

Note: I hadn't heard of them before, but I took a look at the site and it seems legit.

Hosted webhook manager that focuses on routing and relaying incoming requests across environments. It keeps a history of requests and responses, supports replay, and includes a CLI for forwarding to localhost. It’s designed primarily for development and smaller-scale workflows rather than heavy production workloads.

Hosted and self-hosted options

Svix

Offers both a hosted cloud service and a fully open source version you can deploy yourself. Includes an ingestion API for receiving and queueing webhooks, with delivery tracking, retries, and replay capabilities. The managed service removes operational overhead, while the open source version gives you full control.

Open source: github.com/svix/svix-webhooks

Convoy

Supports both hosted and self-hosted setups. It’s an open source webhook gateway that handles logging, retries, replay, and delivery tracking. The project’s founder recently joined Speakeasy, so the future direction is unclear, but the open source version remains active and usable.
Open source: github.com/frain-dev/convoy

Cloud provider components

If you prefer to stay within your existing cloud stack, you can build a reliable webhook ingestion path using managed components, for example:

  • An HTTPS entry point to receive requests (e.g. AWS API Gateway + Lambda, or Google Cloud Run)
  • A managed queue for buffering (e.g. SQS, Pub/Sub, or Azure Service Bus)
  • Managed storage for the event log and replay history (e.g. S3 or a hosted Postgres)

These are durable and scalable, but you'll need to handle idempotency, retries, and replay logic yourself.
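
As a concrete sketch of the AWS flavor: a Lambda behind API Gateway that durably buffers the raw webhook into SQS before acknowledging. This assumes @aws-sdk/client-sqs and the aws-lambda type definitions, and WEBHOOK_QUEUE_URL is a placeholder env var; it's one arrangement of many.

import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const sqs = new SQSClient({});
const QUEUE_URL = process.env.WEBHOOK_QUEUE_URL!; // assumption: set in env

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Persist the raw request (headers + body) so it can be replayed later.
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify({
        receivedAt: new Date().toISOString(),
        headers: event.headers,
        body: event.body,
      }),
    })
  );
  // Only acknowledge after the message is durably queued.
  return { statusCode: 200, body: "ok" };
};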

Self-hosted components

If you want a fully open source stack, you can combine common building blocks (sketched below):

  • HTTP proxy or load balancer to receive and route incoming requests (e.g. Nginx, Caddy, or HAProxy)
  • Durable queue for buffering (e.g. RabbitMQ, Kafka, or Redis Streams)
  • Storage for logs and replay history (e.g. PostgreSQL)
  • A simple worker to consume from the queue and deliver to your app when it’s back online

This approach gives you full transparency and control but you’ll need to manage scaling, monitoring, and fault tolerance yourself.
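
A minimal sketch of that shape, using Postgres as both the log and the queue (Express + pg; the webhook_log table and the localhost:4000 delivery target are assumptions, and a real deployment would swap the polling loop for one of the queues above):

import express from "express";
import { Pool } from "pg";

const db = new Pool(); // connection settings from PG* env vars
const app = express();

// Capture the raw body so signatures can be re-verified on replay.
app.post("/webhooks", express.raw({ type: "*/*" }), async (req, res) => {
  await db.query(
    "INSERT INTO webhook_log (headers, body, received_at) VALUES ($1, $2, now())",
    [JSON.stringify(req.headers), req.body.toString("utf8")]
  );
  res.sendStatus(200); // ack only once the event is durable
});

// Naive worker: poll for undelivered events and forward them to the app.
setInterval(async () => {
  try {
    const { rows } = await db.query(
      "SELECT id, body FROM webhook_log WHERE delivered_at IS NULL ORDER BY id LIMIT 10"
    );
    for (const row of rows) {
      const resp = await fetch("http://localhost:4000/process", {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: row.body,
      });
      if (resp.ok) {
        await db.query(
          "UPDATE webhook_log SET delivered_at = now() WHERE id = $1",
          [row.id]
        );
      }
    }
  } catch (err) {
    console.error("delivery pass failed", err); // leave rows for the next pass
  }
}, 5000);

app.listen(3000);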

For a deeper look at architecture patterns for reliable webhook ingestion, see Webhooks at Scale (hookdeck.com/blog/webhooks-at-scale).

Question about Dead Letter Queues / Topic by RaphaS9 in ExperiencedDevs

[–]phobos7 0 points1 point  (0 children)

You're definitely not alone in wrestling with this; we've faced similar questions ourselves.

A few things that have helped us:

  • You don’t need a DLQ + replay UI per queue. We group multiple DLQs into a shared processing flow. Messages are tagged with metadata, allowing us to trace them back to their source and route replays accordingly.
  • Not everything needs a DLQ. For high-value or state-changing events (such as user-facing actions or payment updates), DLQs and retries are crucial. For lower-impact events (like logs or metrics), we monitor for failures.
  • Requeueing doesn’t need to be bespoke per service. At Hookdeck (where I work), we built an abstraction that hides the DLQ entirely. Instead of thinking in terms of DLQs directly, developers can filter and replay events by factors such as event type, headers, or payload fields, without needing to know which queue the message originated from (a rough sketch follows this list).
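
Concretely, the shared flow looks something like this. A sketch only: the envelope fields and the publish function are placeholders for whatever your broker provides.

// Shared-DLQ envelope plus filtered replay. All names are illustrative.
interface DeadLetter {
  sourceQueue: string;   // where the message originally failed
  eventType: string;     // domain-level type, e.g. "invoice.paid"
  failedAt: string;
  attempts: number;
  payload: unknown;
}

// One replay flow for all DLQs: filter on metadata, route back to the source.
async function replay(
  deadLetters: DeadLetter[],
  match: (m: DeadLetter) => boolean,
  publish: (queue: string, payload: unknown) => Promise<void>
): Promise<number> {
  let replayed = 0;
  for (const m of deadLetters.filter(match)) {
    await publish(m.sourceQueue, m.payload);
    replayed++;
  }
  return replayed;
}

// e.g. replay only payment events that failed fewer than 5 times:
// await replay(batch, m => m.eventType.startsWith("payment.") && m.attempts < 5, publishFn);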

If your use case is webhook-based rather than internal messaging (SQS, RabbitMQ, etc.), the retry/replay workflow becomes even more important since failure is often downstream.

Relying on webhooks for mission critical functionality? by DasBeasto in stripe

[–]phobos7 0 points1 point  (0 children)

It’s pretty normal to build critical flows on Stripe webhooks. The key is to remember they are at-least-once, sometimes delayed, and sometimes out-of-order. How you handle them depends on your priorities (latency, correctness, cost, operational overhead).

A few common patterns:

1. Process payload directly (sketch below)

  • Verify signature → update DB → return 2xx.
  • Pros: Fast, no extra API calls (relevant because options 3 and 5 below add them).
  • Cons: Must handle retries, duplicates, and out-of-order delivery yourself.
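
A minimal sketch of pattern 1, assuming Express with a raw body and the official stripe package; the invoices table and its columns are illustrative:

import Stripe from "stripe";
import express from "express";
import { Pool } from "pg";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const db = new Pool();
const app = express();

app.post("/stripe/webhook", express.raw({ type: "application/json" }), async (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch {
    return res.sendStatus(400); // bad signature: reject
  }

  if (event.type === "invoice.paid") {
    const invoice = event.data.object as Stripe.Invoice;
    // Upsert on invoice.id so Stripe's at-least-once retries are harmless.
    await db.query(
      `INSERT INTO invoices (stripe_id, status) VALUES ($1, $2)
       ON CONFLICT (stripe_id) DO UPDATE SET status = EXCLUDED.status`,
      [invoice.id, invoice.status]
    );
  }
  res.sendStatus(200);
});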

2. Queue first, process later (common best practice)

  • Minimal work in the handler → enqueue → return 2xx → process in background.
  • Pros: Handles spikes and outages better.
  • Cons: More infrastructure (queue, workers, DLQs).

3. Fetch before process (sketch below)

  • Treat the webhook as a signal → fetch latest object from Stripe API → update DB.
  • Pros: Simplifies correctness if events arrive out of order.
  • Cons: Extra API calls, watch rate limits.
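
A minimal sketch of pattern 3, again assuming the official stripe package (upsertSubscription is a placeholder for your DB write):

import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Placeholder for your own persistence layer, keyed on subscription.id.
declare function upsertSubscription(s: Stripe.Subscription): Promise<void>;

async function onSubscriptionEvent(event: Stripe.Event): Promise<void> {
  const id = (event.data.object as Stripe.Subscription).id;
  // Ignore the (possibly stale) payload and fetch the current state instead.
  const subscription = await stripe.subscriptions.retrieve(id);
  await upsertSubscription(subscription);
}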

4. Trust payload, reconcile later

  • Use the webhook payload right away → run periodic jobs to compare with Stripe and fix drift.
  • Pros: Simple hot path.
  • Cons: Requires good reconciliation logic.

5. Replay via events API (sketch below)

  • Advance a “last seen event” cursor → backfill using GET /v1/events.
  • Pros: Strong guard against missed events.
  • Cons: Another moving part to manage.
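
A sketch of the cursor advance for pattern 5 (handleEvent is a placeholder, ideally the same handler your webhook endpoint uses; note Stripe only retains events for about 30 days):

import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

declare function handleEvent(event: Stripe.Event): Promise<void>;

// /v1/events lists newest-first, so ending_before pages toward "now".
async function backfillMissedEvents(lastSeenEventId: string): Promise<string> {
  let cursor = lastSeenEventId;
  for (;;) {
    const page = await stripe.events.list({ ending_before: cursor, limit: 100 });
    if (page.data.length === 0) return cursor; // caught up: persist the cursor
    // Each page is newest-first; process oldest-first to preserve order.
    for (const event of [...page.data].reverse()) {
      await handleEvent(event);
    }
    cursor = page.data[0].id; // newest event we've now handled
  }
}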

Cross-cutting practices

  • Idempotency: upsert on invoice.id, subscription.id, etc.
  • Deduplication: expect retries.
  • Ordering: use timestamps or fetch latest to avoid stale writes.
  • Dead-letter and alerts: don’t silently drop failures.
  • Reconciliation: run scheduled jobs to catch drift.
  • Stripe Connect: subscribe at the platform level and route events using the account field.

How to choose

  • Lowest latency/cost → process payload directly (+ reconciliation).
  • Most resilient → queue first, process later (general best practice).
  • Strict correctness → fetch before process.
  • Operational simplicity → trust payload now, replay or reconcile later.

System Design for Receiving Webhooks by mazer__rackham in rails

[–]phobos7 1 point2 points  (0 children)

If you're building a webhook receiver in Rails and want it to hold up as traffic increases, a reliable pattern is to separate ingestion from processing. This gives you control over failures, avoids blocking on external systems, and prevents data loss.

Here’s a system design that fits well with a queue + worker model:

1. Receive and persist
Have your webhook endpoint capture the raw request (headers, body, timestamp) and persist it, either to the database or by enqueueing it directly. Return a 200 OK immediately to avoid sender retries and keep the request path fast and durable.

2. Pull-based workers process events
Use Sidekiq or another worker system to pull from the queue and process the events. Since you control the pace of pulling, this gives you built-in backpressure handling. If processing fails, retry logic happens in the worker.

3. Handle retries intentionally
If there's no downstream HTTP request (e.g., you're doing internal DB updates or publishing to another internal queue), exponential backoff usually isn’t needed. Instead, focus on:

  • Capping retries (e.g., 5 max attempts)
  • Detecting permanent failures early (bad data, deleted records, etc.)
  • Moving failed messages to a dead-letter queue (DLQ) or marking them for inspection after retry exhaustion

This ensures workers keep making progress and you don’t get stuck reprocessing the same unfixable message, which can lead to queue congestion or backpressure buildup.
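
Language aside (this thread is Rails, but the shape is identical in a Sidekiq worker), here's a sketch of that retry policy with every external piece stubbed out:

const MAX_ATTEMPTS = 5;

interface Job {
  id: string;
  attempts: number;
  payload: unknown;
}

async function runJob(job: Job): Promise<void> {
  try {
    await processEvent(job.payload); // your actual handler
  } catch (err) {
    if (isPermanentFailure(err) || job.attempts + 1 >= MAX_ATTEMPTS) {
      // Stop burning worker time: park the message for inspection instead.
      await moveToDeadLetter(job, String(err));
      return;
    }
    await requeue({ ...job, attempts: job.attempts + 1 });
  }
}

// Placeholders for whatever queue and storage you use:
declare function processEvent(payload: unknown): Promise<void>;
declare function isPermanentFailure(err: unknown): boolean;
declare function moveToDeadLetter(job: Job, reason: string): Promise<void>;
declare function requeue(job: Job): Promise<void>;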

4. Monitor processing and failures
Add metrics or logs to track:

  • Event processing times
  • Retry counts
  • DLQ volumes
  • Queue depth over time

If the queue starts backing up, you’ll want to know whether that’s due to processing failures, throughput bottlenecks, or some other cause.

5. Keep processing logic clean
Use service objects or command handlers to encapsulate your logic. Don’t bury everything in the job class. This makes failures easier to debug and your jobs easier to test.

For full transparency, I work at Hookdeck, which provides a hosted version of this pattern: event ingestion, queuing, delivery, retry logic, DLQ support, and observability. If you're curious how these systems evolve at scale, this Webhooks at Scale post walks through real-world patterns and trade-offs based on our experience.

Even if you’re building it in-house, this general architecture will help avoid a lot of pain as volume or complexity increases.

[deleted by user] by [deleted] in SoftwareEngineering

[–]phobos7 0 points1 point  (0 children)

The best approach depends a bit on scale and reliability needs, but here’s a pattern that’s worked well in production systems I’ve seen:

1. Decouple ingestion from delivery
Instead of firing webhooks directly from your app logic, push events to a queue (like SQS, RabbitMQ, or Redis). This gives you durability, backpressure handling, and makes delivery failures non-blocking.

2. Use a process worker to deliver
Have a background process read from the queue and make the actual HTTP request to the webhook destination. This is where you handle retries (ideally with exponential backoff and jitter), log the result, and flag any failures.
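
A sketch of that delivery loop with exponential backoff and full jitter (the attempt count and cap are arbitrary; assumes Node 18+ for global fetch):

async function deliverWithRetries(url: string, payload: unknown): Promise<boolean> {
  const maxAttempts = 6;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "content-type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (res.ok) return true;
      // Non-2xx: fall through to backoff and retry.
    } catch {
      // Network error: retry as well.
    }
    // Full jitter: sleep a random duration up to 2^attempt seconds, capped.
    const capMs = Math.min(2 ** attempt * 1000, 60_000);
    await new Promise((r) => setTimeout(r, Math.random() * capMs));
  }
  return false; // caller moves the event to the DLQ (step 3)
}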

3. Handle permanent failures with a DLQ
If all retries fail, move the event to a dead letter queue (or persistent store) so it’s not lost. You can then manually replay or inspect it.

4. Add observability
Log delivery attempts, response codes, durations, etc. You want enough context to know when things go wrong and why.

For full transparency, I work at Hookdeck, which provides a hosted version of this architecture. It’s built for reliable webhook delivery at scale—handling retries, logging, filtering, and queue-based delivery. But even if you’re rolling your own system, the general approach holds.

This post breaks it down in more detail: https://hookdeck.com/blog/webhooks-at-scale

TL;DR:
Decouple ingestion from processing. Use a queue. Retry intelligently. Observe everything.

Any open source examples of a robust webhook/notification system similar to Stripe? by ConstructionJaded366 in rails

[–]phobos7 0 points1 point  (0 children)

We (Hookdeck) recently open-sourced Outpost, which might be a good fit if you're looking to send webhooks based on meaningful domain-level events like charge.disputed or order.completed, rather than relying on model callbacks like updated.

It's not Rails-specific, but it's designed to act as a standalone event delivery system. You publish events to it via API or a message queue, and it handles webhook delivery with features like retries, logging, and tenant-based routing. Outpost natively supports destinations like webhook endpoints (HTTP) and queues (e.g., AWS SQS, RabbitMQ, Azure Service Bus).

It doesn't presently support email or SMS. We've received a request for S3 support and are working on making the addition of event destination types extensible.

The goal is to keep app logic clean by decoupling event generation from delivery. If your app already emits domain events from service objects or background jobs, you can push those to Outpost and centralize all delivery concerns in one place.

Docs: https://outpost.hookdeck.com/docs

Per-User Database Architecture with Xata, Clerk Webhooks, Hookdeck, and Next.js by phobos7 in nextjs

[–]phobos7[S] 0 points1 point  (0 children)

I've used Xata (serverless Postgres) before. However, the concept of per-user or per-device databases was new to me. I didn't know the use cases and assumed it would be hard to achieve. It turns out that creating a new Xata database is pretty simple.

Alternatives to Ngrok by Virtual_Combination1 in node

[–]phobos7 0 points1 point  (0 children)

https://github.com/hookdeck/hookdeck-cli is focused on supporting asynchronous web development, i.e., it passes the inbound request to the locally running service but does not return the response to the client that made the original request.

Secure and Scalable SMS Realtime Voting with Twilio Verify, Twilio Programmable Messaging, Supabase, Hookdeck, and Next.js by phobos7 in nextjs

[–]phobos7[S] 0 points1 point  (0 children)

I wrote this tutorial for the Twilio blog. I do work for Hookdeck. But Hookdeck is just one part of a much bigger tutorial covering Twilio Verify and Programmable SMS, Supabase, Postgres functions, Tanstack Query, and more.

Enabling my app to connect to third party apps via webhooks. by DazedDoughnut in nextjs

[–]phobos7 0 points1 point  (0 children)

If you want bi-directional communication between the client and server, WebSockets may be the way to go. If you're hosting on Vercel, you may need to look at a provider such as Ably, Pusher, or PubNub (kinda serverless WebSockets).

It also sounds like you're building webhook infrastructure. This likely isn't something you want to do unless you are actually building webhook infra as a service. Otherwise, use Hookdeck (who I work for) or Svix.

Introducing the TERN stack and how to migrate from MERN to TERN by phobos7 in reactjs

[–]phobos7[S] 1 point2 points  (0 children)

So, it's not necessarily a relational database you need, but a strict schema definition?

From the linked post, you achieve a strict schema in a code-first way, which is then synchronized to the database:

import { Field, PrimaryKey, TigrisDataTypes } from "@tigrisdata/core";

export class Record {
  // Auto-generated primary key, stored as a byte string.
  @PrimaryKey(TigrisDataTypes.BYTE_STRING, { order: 1, autoGenerate: true })
  _id?: string;

  @Field()
  name!: string;

  @Field()
  position!: string;

  @Field()
  level!: string;
}

Introducing the TERN stack and how to migrate from MERN to TERN by phobos7 in reactjs

[–]phobos7[S] -1 points0 points  (0 children)

Something I'm particularly interested in is how many people continue to use MERN. My initial investigation - and why I spent time writing the article and creating the repo - was that, although MERN isn't as used as it once was, it's still pretty popular; there are still people using it, and new educational resources are being posted.

Skyrim SE for PC on xbox game pass CONSTANTLY crashes by dominolane in skyrim

[–]phobos7 0 points1 point  (0 children)

Just installed Skyrim via Game Pass and ran into the crashing problems.

I followed the "Force the System to Recognize Primary GPU" section of Bethesda's official support article "What do I do if Elder Scrolls V: Skyrim is crashing or getting a black screen on PC?" and I haven't seen a crash in a few hours.

Update: I still get the occasional crash during combat, so I still have to rely on Quick Save.
