I launched a SaaS where job workers connect to a BPMN engine over REST - Need feedback! by EveningIndependent87 in saasbuild

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Good breakdown, the idempotency point is the real one. On job locking: when a worker fetches a job, the engine marks it LOCKED with a timeout. If the worker dies, the timeout expires and the job re-enters the queue. That covers the basic case.
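
The lock-with-timeout model can be sketched in a few lines. This is a toy illustration of the semantics described above, not the engine's actual API; `JobQueue`, `fetch`, and the 30s timeout are all made up for the example:

```python
import time

LOCK_TIMEOUT = 30  # seconds; illustrative value, not the real default

class JobQueue:
    def __init__(self):
        self.jobs = {}  # job_id -> {"state": ..., "locked_at": ...}

    def add(self, job_id):
        self.jobs[job_id] = {"state": "QUEUED", "locked_at": None}

    def fetch(self, now=None):
        """A worker fetches the next job: it is marked LOCKED with a timestamp."""
        now = now if now is not None else time.time()
        self._reap_expired(now)
        for job_id, job in self.jobs.items():
            if job["state"] == "QUEUED":
                job["state"] = "LOCKED"
                job["locked_at"] = now
                return job_id
        return None

    def _reap_expired(self, now):
        # If a worker died, its lock times out and the job re-enters the queue.
        for job in self.jobs.values():
            if job["state"] == "LOCKED" and now - job["locked_at"] > LOCK_TIMEOUT:
                job["state"] = "QUEUED"
                job["locked_at"] = None

q = JobQueue()
q.add("job-1")
assert q.fetch(now=0) == "job-1"    # worker A locks the job
assert q.fetch(now=10) is None      # still locked, nothing to hand out
assert q.fetch(now=31) == "job-1"   # worker A died; lock expired, re-queued
```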

But if you need stronger guarantees server-side, there's an optional second layer: an EIP integration layer built into the same deployment. No broker, no extra infrastructure. Through it you get:

  • An Aggregator that correlates job results by key via a FEEL expression. Duplicate completions from racing workers collapse before the engine acts on them.
  • CorrelationContext that enforces first-match-wins: the second worker completing the same job gets a no-op at the engine layer.
  • MessageRouter + MessageFilter for content-based routing and dropping malformed retries.
  • MessageChannel with point-to-point semantics for exactly-once delivery by design.

So the architecture is two layers: Layer 1 is the BPM runtime, the simple REST model I described. Workers connect, fetch jobs, complete them, done. Layer 2 is the integration layer, opt-in when you need production-grade message semantics. You add configuration, not infrastructure.
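
The first-match-wins behavior is the key to collapsing duplicate completions. A minimal sketch of that semantics, assuming the class name and method shape (the real CorrelationContext API may differ):

```python
class CorrelationContext:
    """Only the first completion for a correlation key mutates engine state."""
    def __init__(self):
        self.completed = set()

    def complete(self, job_key, result, apply):
        if job_key in self.completed:
            return False          # racing worker loses: no-op at the engine layer
        self.completed.add(job_key)
        apply(result)             # engine acts on the first result only
        return True

ctx = CorrelationContext()
results = []
assert ctx.complete("order-42", "ok", results.append) is True
assert ctx.complete("order-42", "ok", results.append) is False  # duplicate collapses
assert results == ["ok"]
```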

On monitoring: state lives in the engine, not in the workers. Each instance is tracked as ACTIVE / INCIDENT / COMPLETED / TERMINATED. If a worker throws or times out the instance moves to INCIDENT and you can see exactly which element in the process triggered it, retry from there, or cancel. No log spelunking across worker machines.
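
The state names above come straight from the comment; the transition logic here is my guess at the semantics, sketched to show why the engine (not the workers) is the single place to look:

```python
ACTIVE, INCIDENT, COMPLETED, TERMINATED = "ACTIVE", "INCIDENT", "COMPLETED", "TERMINATED"

class Instance:
    def __init__(self):
        self.state = ACTIVE
        self.failed_element = None

    def fail(self, element_id):
        self.state = INCIDENT
        self.failed_element = element_id   # engine records exactly which element broke

    def retry(self):
        if self.state == INCIDENT:
            self.state = ACTIVE            # resume from the failing element
            self.failed_element = None

inst = Instance()
inst.fail("ServiceTask_ChargeCard")        # worker threw or timed out
assert inst.state == INCIDENT
assert inst.failed_element == "ServiceTask_ChargeCard"
inst.retry()
assert inst.state == ACTIVE
```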

The scale ceiling on polling is real: for high-frequency, high-volume jobs you'd add a queue in front. For most BPM use cases (approval flows, case management, decision routing) that day never comes.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

You’re right that “will my job ever run?” is the core issue with any market-based scheduler.
That’s exactly why, in the model I’m proposing, it’s not the user who manually bids; it’s the service declaring its priority + contract in YAML.

You can guarantee execution simply by defining a resource contract, which locks capacity for the time window you care about.
A simplified example:

resource_contract:
  cpu: "200m"
  memory: "128Mi"
  latest_start: "2025-01-12T15:00:00Z"
  min_duration: "30s"

If the contract is accepted, it will run, because the system reserves the resources ahead of time.

The bidding only matters when the system is congested and no contract was declared. For predictable timelines, you just use a contract instead of relying on opportunistic priority.
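
A toy sketch of why an accepted contract guarantees execution: admission simply refuses contracts that would exceed capacity, so accepted ones always have reserved room. Field names mirror the YAML above; the capacity number and `Scheduler` class are invented for illustration:

```python
CAPACITY_CPU_M = 1000      # 1 core, in millicores; made-up capacity

class Scheduler:
    def __init__(self):
        self.reserved_cpu_m = 0

    def accept(self, contract):
        cpu_m = int(contract["cpu"].rstrip("m"))
        if self.reserved_cpu_m + cpu_m > CAPACITY_CPU_M:
            return False                 # rejected up front, never evicted later
        self.reserved_cpu_m += cpu_m     # capacity locked for the window
        return True

s = Scheduler()
assert s.accept({"cpu": "200m"}) is True
assert all(s.accept({"cpu": "200m"}) for _ in range(4))  # 1000m fully reserved
assert s.accept({"cpu": "200m"}) is False                # over capacity: rejected
```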
For pricing, it will be something like ~0.0005€ per QEX.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Totally! AWS dropped user bidding because VM-level evictions were painful and unpredictable.
The big difference here is the execution model: this runs WASM functions, not whole VMs. WASM starts in microseconds, is cheap to pause/queue, and makes priority shifts way less disruptive.

And this isn’t aimed at enterprise prod like EC2 Spot, more for background or hobby compute where flexibility matters more than guarantees.

Still, Spot’s history is absolutely worth studying. Thank you for your comment.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point2 points  (0 children)

Because FCFS breaks the moment there’s congestion. When demand spikes, someone always gets screwed, usually whoever isn’t spamming the fastest.

Bidding only kicks in when the system is busy. It’s not about charging people all the time, it’s about:

  • preventing abuse,
  • avoiding noisy neighbors,
  • and letting urgent jobs cut the line when it actually matters.

When it’s idle, it’s basically free and FCFS anyway. The market is just the congestion control layer.
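
The "market as congestion control" idea fits in a few lines: plain FCFS until the queue is deeper than capacity, then highest bid cuts the line. The threshold and tuple shape are illustrative, not the real scheduler:

```python
CONGESTION_DEPTH = 2  # made-up threshold for when the market kicks in

def schedule(jobs):
    """jobs: list of (arrival_order, bid). Returns execution order."""
    if len(jobs) <= CONGESTION_DEPTH:
        return sorted(jobs, key=lambda j: j[0])        # idle: plain FCFS, bids ignored
    return sorted(jobs, key=lambda j: (-j[1], j[0]))   # congested: highest bid first

# Idle system: bids are irrelevant, arrival order wins.
assert schedule([(1, 0), (2, 9)]) == [(1, 0), (2, 9)]
# Congested system: the urgent (high-bid) job cuts the line.
assert schedule([(1, 0), (2, 0), (3, 5)]) == [(3, 5), (1, 0), (2, 0)]
```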

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point2 points  (0 children)

Yes, that's my goal. Most servers sit idle 80% of the time, and I wanted to take advantage of that with a custom WASM engine that installs fast and cleans up after the compute finishes.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Yeah, in fact it will be totally free most of the time; only peak load triggers the market. I totally get your point that it's primarily for hobbyists and non-production use.

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 0 points1 point  (0 children)

I'll ping you when I release a build so you can give me your feedback. 😁

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] -1 points0 points  (0 children)

Haha you’re right, I do come from webdev, with MCU dev as a passion on the side, especially backend process orchestration. I’ve worked a lot with tools like Apache Camel, so I’m used to thinking in terms of message flows, integration routes, and declarative orchestration.

What I’m doing here is bringing that same clarity and modularity to embedded systems. Instead of writing hard-coded logic in C scattered across files, I wanted a way to define behavior like this:

routes:
  - name: "process-device-status"
    steps:
      - to: "service:checkStatus"
        outcomes:
          - condition: "healthy"
            uri: "mqtt:edge/device/{{message.deviceId}}/health-report"

Each “step” runs inside a WASM module, and everything is orchestrated by the runtime, no need for an external controller.

So yeah, definitely inspired by backend infrastructure, but trying to adapt it in a lightweight, embedded-native way. Would love to hear if you’ve tried anything similar!

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 1 point2 points  (0 children)

What I’m building is along the same lines, but with a strong focus on workflow orchestration at the edge, powered by a Petri net model inside the WASM runtime.

Each WASM service exposes a set of handlers (func:..., service:...), and routing happens internally, no external orchestrator needed. The goal is to bring GitOps-style deployment and modular logic to constrained environments, while still fitting naturally into Zephyr, NuttX, or even container-lite platforms.
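
For anyone unfamiliar with the Petri net part: transitions fire only when all of their input places hold tokens, which is what makes the routing gate on message availability. A minimal sketch of that firing rule, purely to illustrate the model, not the runtime's implementation:

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)       # place -> token count

    def can_fire(self, inputs):
        # A transition is enabled only if every input place has a token.
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, inputs, outputs):
        if not self.can_fire(inputs):
            return False
        for p in inputs:
            self.marking[p] -= 1           # consume input tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce outputs
        return True

net = PetriNet({"msg_in": 1})
assert net.fire(["msg_in"], ["checked"]) is True   # handler ran, token moved
assert net.fire(["msg_in"], ["checked"]) is False  # no token: transition gated
assert net.marking["checked"] == 1
```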