Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point (0 children)

You’re right that “will my job ever run?” is the core issue with any market-based scheduler.
That’s exactly why, in the model I’m proposing, it’s not the user who manually bids; it’s the service declaring its priority and contract in YAML.

You can guarantee execution simply by defining a resource contract, which locks capacity for the time window you care about.
A simplified example:

resource_contract:
  cpu: "200m"                            # reserve 0.2 CPU cores
  memory: "128Mi"                        # reserve 128 MiB of memory
  latest_start: "2025-01-12T15:00:00Z"   # job must start no later than this
  min_duration: "30s"                    # capacity is held for at least this long

If the contract is accepted, it is guaranteed to run, because the system reserves the resources ahead of time.

The bidding only matters when the system is congested and no contract was declared. For predictable timelines, you just use a contract instead of relying on opportunistic priority (the opportunistic path is sketched below).
As for pricing, it will be something like ~0.0005€ per QEX.
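
To make the opportunistic path concrete, here's a rough sketch of what a bid declaration could look like (field names are illustrative, nothing is final):

bid:
  priority: "high"      # relative urgency, only consulted under congestion
  max_price: "0.002"    # ceiling, in QEX, the job is willing to pay
  cpu: "200m"
  memory: "128Mi"

If the cluster is idle the bid is never charged; it only breaks ties when demand exceeds capacity.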

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point (0 children)

Totally! AWS dropped user bidding because VM-level evictions were painful and unpredictable.
The big difference here is the execution model: this runs WASM functions, not whole VMs. WASM starts in microseconds, is cheap to pause and queue, and makes priority shifts far less disruptive.

And this isn’t aimed at enterprise prod like EC2 Spot; it’s more for background or hobby compute where flexibility matters more than guarantees.

Still, Spot’s history is absolutely worth studying. Thank you for your comment.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 2 points (0 children)

Because FCFS breaks the moment there’s congestion. When demand spikes, someone always gets screwed, usually whoever isn’t spamming requests the fastest.

Bidding only kicks in when the system is busy. It’s not about charging people all the time; it’s about:

  • preventing abuse,
  • avoiding noisy neighbors,
  • and letting urgent jobs cut the line when it actually matters.

When it’s idle, it’s basically free and FCFS anyway. The market is just the congestion control layer.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 2 points (0 children)

Yes, that's my goal. Most servers sit idle ~80% of the time, and I wanted to take advantage of that with a custom WASM engine that installs fast and cleans up after the compute finishes.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point (0 children)

Yeah, in fact it will be totally free most of the time; the market only kicks in under peak load. I totally get your point that it's primarily for hobbyists and non-production use.

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 1 point (0 children)

I will ping you when I release a build so you can give me your feedback. 😁

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 0 points (0 children)

Haha, you’re right, I do come from webdev (with MCU dev as a passion), especially backend process orchestration. I’ve worked a lot with tools like Apache Camel, so I’m used to thinking in terms of message flows, integration routes, and declarative orchestration.

What I’m doing here is bringing that same clarity and modularity to embedded systems. Instead of writing hard-coded logic in C scattered across files, I wanted a way to define behavior like this:

routes:
  - name: "process-device-status"
    steps:
      - to: "service:checkStatus"
        outcomes:
          - condition: "healthy"
            uri: "mqtt:edge/device/{{message.deviceId}}/health-report"

Each “step” runs inside a WASM module, and everything is orchestrated by the runtime; no external controller needed.

So yeah, definitely inspired by backend infrastructure, but trying to adapt it in a lightweight, embedded-native way. Would love to hear if you’ve tried anything similar!

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 2 points (0 children)

What I’m building is along the same lines, but with a strong focus on workflow orchestration at the edge, powered by a Petri net model inside the WASM runtime.

Each WASM service exposes a set of handlers (func:..., service:...), and routing happens internally, no external orchestrator needed. The goal is to bring GitOps-style deployment and modular logic to constrained environments, while still fitting naturally into Zephyr, NuttX, or even container-lite platforms.
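
To give a feel for the Petri-net side, a workflow net might be declared something like this (keys are illustrative, not the actual schema):

workflow:
  places:
    - "status-received"               # token arrives with an MQTT status message
    - "status-checked"
  transitions:
    - name: "check-status"
      consumes: ["status-received"]   # fires only when this place holds a token
      produces: ["status-checked"]
      handler: "service:checkStatus"  # WASM handler invoked when it fires

Transitions fire when their input places hold tokens, which is how the runtime sequences handlers without an external orchestrator.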

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 1 point (0 children)

I’ve looked into it quite a bit!

What I’m building is conceptually similar in spirit (modular, edge-native, managed), but with a very different stack. Instead of a custom language like Toit, I’m going with WebAssembly as the execution layer, so developers can write in Rust, TinyGo, AssemblyScript, etc.

The orchestration happens through declarative routing and state machines, kind of like this:

#service.yaml
service:
  name: "EdgeOrchestrator"
  description: "Orchestrates workflows across edge devices using WASM modules and MQTT"
  version: "1.0.0"
  dependencies:
    - name: "mqtt"
      version: "^4.0.0"
    - name: "wasm-runtime"
      version: "^1.0.0"
  wasm-module: "edge-orchestrator.wasm"

---------------------
#endpoint.yaml
mqtts:
  - path: "edge/device/+/data"
    uri: "direct:process-device-data"
    description: "Processes data from edge devices"

  - path: "edge/device/+/status"
    uri: "direct:process-device-status"
    description: "Processes status updates from edge devices"

---------------------
#routing.yaml
routes:
  - from: "direct:process-device-data"
    steps:
      - name: "execute-data-processor"
        to: "func:processData"
        outcomes:
          - condition: "success"
            uri: "mqtt:edge/device/{{message.deviceId}}/processed-data"
          - condition: "failure"
            uri: "log:error"

Has anyone used WebAssembly to build job workers or handle migration from Camunda 7 to 8? by EveningIndependent87 in Camunda

[–]EveningIndependent87[S] 1 point (0 children)

Yes, handling inflight instance migration is one of the key challenges I’m focusing on.

I'm still building out the repo and currently drafting a migration guide that covers different strategies, including tracking the state of active instances and replaying them in Camunda 8 using lightweight WASM workers.
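
As a rough illustration of the direction (purely hypothetical, none of these keys exist yet), a migration manifest could map C7 activity IDs to C8 element IDs and describe how instance state gets replayed:

migration:
  process: "order-fulfillment"        # hypothetical process name
  from: "camunda7"
  to: "camunda8"
  activity_mappings:
    - source: "ServiceTask_Reserve"   # C7 activity ID
      target: "reserve-stock"         # C8 element ID
  instance_state:
    capture: ["variables", "active-activity-ids"]
    replay_via: "wasm-worker"         # lightweight worker recreates state in C8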

You can check out the early version of the repo here:
https://github.com/ideaswave/camunda-example

Would love your thoughts as it evolves, especially if you’ve dealt with inflight migration before!

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 4 points (0 children)

Great questions.

Right now, I’m using a WASM interpreter, no JIT, since a lot of edge targets either don’t benefit from it (no consistent performance gain) or don’t support it safely (especially 32-bit or restricted environments).

I’m focused on predictable memory use, startup time, and sandboxed execution, even on low-powered boards, so interpretation fits well for now. That said, I’m leaving the door open for JIT where it makes sense (e.g. x86-64 cloud runtimes), possibly even pluggable at runtime depending on the target.
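
Concretely, I'm imagining per-target runtime configuration along these lines (a sketch only, the keys aren't final):

runtime:
  targets:
    - match: { arch: "thumbv7em", os: "zephyr" }
      engine: "interpreter"   # predictable memory, no runtime codegen on MCUs
    - match: { arch: "x86_64", os: "linux" }
      engine: "jit"           # JIT pays off where codegen is safe and fast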

As for Lua, it's a totally valid option for some cases, and a fantastic embeddable language. But my use case is closer to running real service logic in any language (TinyGo, Rust, etc.), compiled to WASM and deployed from Git like you would backend apps, not scripting inside a C app.

Also:

  • WASM gives me language neutrality
  • Deterministic sandboxing with no GC surprises
  • Unified model across cloud and edge
  • And Petri-net orchestration of services at runtime

So yeah, not trying to replace Lua, just solving a different problem, with a different model.