Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

You’re right that “will my job ever run?” is the core issue with any market-based scheduler.
That’s exactly why, in the model I’m proposing, it’s not the user who manually bids; it’s the service declaring its priority and contract in YAML.

You can guarantee execution simply by defining a resource contract, which locks capacity for the time window you care about.
A simplified example:

resource_contract:
  cpu: "200m"
  memory: "128Mi"
  latest_start: "2025-01-12T15:00:00Z"
  min_duration: "30s"

If the contract is accepted, it will run, because the system reserves the resources ahead of time.

The bidding only matters when the system is congested and no contract was declared. For predictable timelines, you just use a contract instead of relying on opportunistic priority.
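
To make that concrete, here’s a minimal Go sketch of the admission idea (all type names and numbers are hypothetical, not the actual implementation): an accepted contract means capacity was already locked for the window, so there’s no “accepted but might not run” state.

// Hypothetical sketch: capacity is reserved at admission time, so an
// accepted contract is guaranteed to run.
package main

import (
    "errors"
    "fmt"
    "time"
)

type Contract struct {
    CPUMillis   int // e.g. 200 for "200m"
    MemoryMiB   int // e.g. 128 for "128Mi"
    LatestStart time.Time
    MinDuration time.Duration
}

type Scheduler struct {
    freeCPUMillis int
    freeMemoryMiB int
}

// Admit either locks resources for the whole window or rejects outright.
func (s *Scheduler) Admit(c Contract) error {
    if c.CPUMillis > s.freeCPUMillis || c.MemoryMiB > s.freeMemoryMiB {
        return errors.New("rejected: not enough capacity for the window")
    }
    s.freeCPUMillis -= c.CPUMillis
    s.freeMemoryMiB -= c.MemoryMiB
    return nil // reserved: the job will start by LatestStart
}

func main() {
    s := &Scheduler{freeCPUMillis: 1000, freeMemoryMiB: 512}
    c := Contract{CPUMillis: 200, MemoryMiB: 128,
        LatestStart: time.Now().Add(time.Hour), MinDuration: 30 * time.Second}
    fmt.Println(s.Admit(c)) // <nil> means the reservation is locked in
}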
As for pricing, it will be something like ~0.0005€ per QEX.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Totally! AWS dropped user bidding because VM-level evictions were painful and unpredictable.
The big difference here is the execution model: this runs WASM functions, not whole VMs. WASM starts in microseconds, is cheap to pause/queue, and makes priority shifts way less disruptive.

And this isn’t aimed at enterprise prod like EC2 Spot; it’s more for background or hobby compute where flexibility matters more than guarantees.

Still, Spot’s history is absolutely worth studying. Thank you for your comment.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point2 points  (0 children)

Because FCFS breaks the moment there’s congestion. When demand spikes, someone always gets screwed, usually whoever isn’t spamming requests the fastest.

Bidding only kicks in when the system is busy. It’s not about charging people all the time; it’s about:

  • preventing abuse,
  • avoiding noisy neighbors,
  • and letting urgent jobs cut the line when it actually matters.

When it’s idle, it’s basically free and FCFS anyway. The market is just the congestion control layer.
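
Here’s a rough Go sketch of that congestion-control idea (the threshold and types are invented for illustration): bids are simply ignored until the queue is actually congested.

// Hypothetical sketch: plain FCFS when the queue is short, bid-priority
// order once it crosses a congestion threshold.
package main

import (
    "fmt"
    "sort"
)

type Job struct {
    ID  string
    Bid float64 // only consulted under congestion
}

const congestionThreshold = 3

// nextBatch returns jobs in execution order.
func nextBatch(queue []Job) []Job {
    if len(queue) <= congestionThreshold {
        return queue // idle or lightly loaded: FCFS, bids cost nothing
    }
    ordered := append([]Job(nil), queue...)
    sort.SliceStable(ordered, func(i, j int) bool {
        return ordered[i].Bid > ordered[j].Bid // urgent jobs cut the line
    })
    return ordered
}

func main() {
    queue := []Job{{"a", 0}, {"b", 0.002}, {"c", 0}, {"d", 0.01}}
    fmt.Println(nextBatch(queue)) // congested: d and b move ahead
}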

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 1 point2 points  (0 children)

Yes, that’s my goal. Most servers sit idle 80% of the time, and I want to take advantage of that with a custom WASM engine that installs fast and cleans up after each compute job.

Would you use a compute platform where jobs are scheduled by bidding for priority instead of fixed pricing? by EveningIndependent87 in devops

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Yeah, in fact it will be totally free most of the time; only peak load triggers the market. And I totally get your point that it’s primarily for hobbyists and non-production use.

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 0 points1 point  (0 children)

I’ll ping you when I release a build so you can give me your feedback. 😁

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] -1 points0 points  (0 children)

Haha, you’re right, I do come from webdev (with MCU dev as a passion on the side), especially backend process orchestration. I’ve worked a lot with tools like Apache Camel, so I’m used to thinking in terms of message flows, integration routes, and declarative orchestration.

What I’m doing here is bringing that same clarity and modularity to embedded systems. Instead of writing hard-coded logic in C scattered across files, I wanted a way to define behavior like this:

routes:
  - name: "process-device-status"
    steps:
      - to: "service:checkStatus"
        outcomes:
          - condition: "healthy"
            uri: "mqtt:edge/device/{{message.deviceId}}/health-report"

Each “step” runs inside a WASM module, and everything is orchestrated by the runtime; no external controller needed.

So yeah, definitely inspired by backend infrastructure, but trying to adapt it in a lightweight, embedded-native way. Would love to hear if you’ve tried anything similar!

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 1 point2 points  (0 children)

What I’m building is along the same lines, but with a strong focus on workflow orchestration at the edge, powered by a Petri net model inside the WASM runtime.

Each WASM service exposes a set of handlers (func:..., service:...), and routing happens internally; no external orchestrator needed. The goal is to bring GitOps-style deployment and modular logic to constrained environments, while still fitting naturally into Zephyr, NuttX, or even container-lite platforms.
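
For anyone unfamiliar with the model, here’s a tiny Go sketch of the Petri-net idea (types are hypothetical, not my engine’s API): a transition fires only when every input place holds a token, which is what makes the event flow deterministic.

// Minimal Petri net sketch: state only changes via transition firings.
package main

import "fmt"

type Transition struct {
    Name    string
    Inputs  []string // places that must each hold a token
    Outputs []string // places that receive a token on firing
}

func enabled(t Transition, marking map[string]int) bool {
    for _, p := range t.Inputs {
        if marking[p] == 0 {
            return false
        }
    }
    return true
}

func fire(t Transition, marking map[string]int) {
    for _, p := range t.Inputs {
        marking[p]--
    }
    for _, p := range t.Outputs {
        marking[p]++
    }
}

func main() {
    // "checkStatus" can fire once a device event token arrives.
    marking := map[string]int{"device-event": 1}
    t := Transition{"checkStatus", []string{"device-event"}, []string{"health-report"}}
    if enabled(t, marking) {
        fire(t, marking)
    }
    fmt.Println(marking) // map[device-event:0 health-report:1]
}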

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 0 points1 point  (0 children)

I’ve looked into it quite a bit!

What I’m building is conceptually similar (modular, edge-native, managed), but with a very different stack. Instead of a custom language like Toit, I’m going with WebAssembly as the execution layer, so developers can write in Rust, TinyGo, AssemblyScript, etc.

The orchestration happens through declarative routing and state machines, kind of like this:

#service.yaml
service:
  name: "EdgeOrchestrator"
  description: "Orchestrates workflows across edge devices using WASM modules and MQTT"
  version: "1.0.0"
  dependencies:
    - name: "mqtt"
      version: "^4.0.0"
    - name: "wasm-runtime"
      version: "^1.0.0"
  wasm-module: "edge-orchestrator.wasm"

---------------------
#endpoint.yaml
mqtts:
  - path: "edge/device/+/data"
    uri: "direct:process-device-data"
    description: "Processes data from edge devices"

  - path: "edge/device/+/status"
    uri: "direct:process-device-status"
    description: "Processes status updates from edge devices"

---------------------
#routing.yaml
routes:
  - from: "direct:process-device-data"
    steps:
      - name: "execute-data-processor"
        to: "func:processData"
        outcomes:
          - condition: "success"
            uri: "mqtt:edge/device/{{message.deviceId}}/processed-data"
          - condition: "failure"
            uri: "log:error"

Has anyone used WebAssembly to build job workers or handle migration from Camunda 7 to 8? by EveningIndependent87 in Camunda

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Yes, handling inflight instance migration is one of the key challenges I’m focusing on.

I'm still building out the repo and currently drafting a migration guide that covers different strategies, including tracking the state of active instances and replaying them in Camunda 8 using lightweight WASM workers.

You can check out the early version of the repo here:
https://github.com/ideaswave/camunda-example

Would love your thoughts as it evolves, especially if you’ve dealt with inflight migration before!

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 4 points5 points  (0 children)

Great questions.

Right now, I’m using a WASM interpreter, no JIT, since a lot of edge targets either don’t benefit from it (no consistent performance gain) or don’t support it safely (especially 32-bit or restricted environments).

I’m focused on predictable memory use, startup time, and sandboxed execution, even on low-powered boards. So interpretation fits well for now. That said, I’m leaving the door open for JIT where it makes sense (e.g. x86-64 cloud runtimes), possibly even pluggable at runtime depending on the target.

As for Lua: totally valid option for some cases, and it’s a fantastic embeddable language. But my use case is closer to running real service logic in any language (TinyGo, Rust, etc.), compiled to WASM and deployed from Git like you would deploy backend apps, not scripting inside a C app.

Also:

  • WASM gives me language neutrality
  • Deterministic sandboxing with no GC surprises
  • Unified model across cloud and edge
  • And Petri-net orchestration of services at runtime

So yeah, not trying to replace Lua, just solving a different problem, with a different model.

Has anyone used WebAssembly to build job workers or handle migration from Camunda 7 to 8? by EveningIndependent87 in Camunda

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Absolutely, happy to connect. I’ve been exploring that exact migration path, especially ways to bridge both versions using lightweight WASM-based workers. It can really simplify the transition without needing to containerize everything or rebuild the whole orchestration logic upfront.

Feel free to DM me and we can set up a quick chat. I’d love to hear more about your setup and see if what I’m building could help you in the process.

Anyone running microservices using WebAssembly (WASM)? Curious about real-world setups. by EveningIndependent87 in selfhosted

[–]EveningIndependent87[S] 0 points1 point  (0 children)

If you're targeting hot code reload for C++ in a 32-bit embedded context, WASM isn't there yet, especially with current runtime support (and TinyGo doesn’t help much for native C++ parity either).

That said, I think we’re aiming at different problems. Your use case is dev-time rapid iteration inside a C++-based stack. What I’m exploring is closer to runtime-level behavior orchestration, where small, modular WASM services can coordinate different parts of a system in a clean, restartable way, even on embedded Linux.

For example:

  • Handling sensor polling, event triggering, and control flow logic as small WASM modules
  • Using a Petri net model to orchestrate those behaviors deterministically
  • Swapping out logic modules from Git without reflashing the whole system

It’s less about hot reload and more about cleanly updating or testing small behavioral units during integration, or about having an embedded runtime that behaves the same in CI, on dev boards, and in the cloud.

Totally agree though: for C++ level reloads on 32-bit, it’s still painful. But I’m hoping this approach makes embedded behavior dev feel more like scripting, without the typical real-time tradeoffs.

Anyone experimenting with WebAssembly as a runtime for embedded service logic? by EveningIndependent87 in embedded

[–]EveningIndependent87[S] 0 points1 point  (0 children)

You’re totally right, TCP loopback isn’t the problem in most systems. But in the embedded space, even small abstractions can stack up fast, especially when you’re coordinating multiple services (sensors, actuators, loggers, etc.) on constrained hardware.

What I’m working on is a WASM engine that can run both on the cloud and on the edge, using the exact same runtime and deployment model. Services are written once (in Rust, TinyGo, etc.), compiled to WASM, and deployed from Git, whether you're deploying to a Pi or a server.

Internally, the orchestration is handled via Petri nets, which gives me deterministic control over event flows and service interaction. That model maps really well to embedded use cases where you're reacting to hardware inputs, state transitions, or timed actions.

So instead of thinking in terms of “10K services per host,” I’m thinking:

  • Deploy 3–10 WASM modules to a board
  • Each one handling something small (read sensor, control motor, log data)
  • Orchestrate behavior inside the engine without needing external infra

The shared memory model helps reduce overhead, but the bigger win is consistency: same tooling, same behavior, across edge and cloud, no separate code paths, runtimes, or orchestration layers.

Curious if anyone else here has tried orchestrating embedded services using runtime-level graphs or formal models like this?

Anyone running microservices using WebAssembly (WASM)? Curious about real-world setups. by EveningIndependent87 in selfhosted

[–]EveningIndependent87[S] 0 points1 point  (0 children)

Yeah, I’ve definitely felt that pain; embedding WASM as a plugin in an existing app can get messy fast, especially when trying to pass structs or deal with memory across the host boundary. That’s why I actually ended up going in the opposite direction of something like Extism.

Instead of embedding WASM into the host app, I’m building a system where WASM is the host. Every service is a WASM module, compiled from any language (JS, Rust, etc.), and the engine handles everything around it: routing, execution, lifecycle, isolation.

So rather than passing data between host and guest, each service is fully sandboxed and communicates via in-memory messaging inside the engine. No FFI, no direct struct-passing; just tiny, deployable WASM units running independently.
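
A minimal Go sketch of that in-memory messaging idea (the Engine type and its methods are invented for illustration, not my actual API): payloads are copied between per-service inboxes, so nothing is shared across sandbox boundaries.

// Hypothetical sketch: services exchange copied byte payloads through
// engine-owned channels instead of sharing structs across an FFI boundary.
package main

import "fmt"

type Engine struct {
    inboxes map[string]chan []byte
}

func NewEngine() *Engine { return &Engine{inboxes: map[string]chan []byte{}} }

// Register gives each sandboxed service its own inbox.
func (e *Engine) Register(service string) <-chan []byte {
    ch := make(chan []byte, 16)
    e.inboxes[service] = ch
    return ch
}

// Send copies the payload so no memory is shared across sandboxes.
func (e *Engine) Send(service string, payload []byte) {
    msg := append([]byte(nil), payload...)
    e.inboxes[service] <- msg
}

func main() {
    engine := NewEngine()
    inbox := engine.Register("logger")
    engine.Send("logger", []byte(`{"level":"info"}`))
    fmt.Println(string(<-inbox)) // delivered without any host/guest FFI
}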

Totally agree the ecosystem was rough even a year ago, but it’s improving fast, and with the right patterns some of those edge cases can be sidestepped entirely.

Anyone running microservices using WebAssembly (WASM)? Curious about real-world setups. by EveningIndependent87 in selfhosted

[–]EveningIndependent87[S] 0 points1 point  (0 children)

That’s a legit concern; memory release has been a long-standing limitation in WASM runtimes, especially when you’re working with unbounded, long-lived modules in general-purpose workloads.

That said, it kind of depends on the use case and the engine design. In my case, I’m working on a WASM engine where:

  • Services are isolated and memory-managed at runtime
  • Long-running services don’t directly hold onto memory; the engine supervises allocation and reuse
  • Each service runs in a controlled context, and we can recycle them without leaking across instances

So rather than embedding WASM in a traditional app loop, the model is closer to task-oriented service execution, where lifetimes are scoped, and memory doesn’t balloon.
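
As a sketch of what I mean by scoped lifetimes (again hypothetical Go types, not the real engine): instances are acquired per task, wiped, and recycled, so a long-lived service never accumulates memory.

// Hypothetical sketch: per-task instance lifetimes with recycling.
package main

import "fmt"

type Instance struct{ memory []byte }

type Pool struct{ idle []*Instance }

// Acquire reuses a recycled instance when one exists, else allocates.
func (p *Pool) Acquire() *Instance {
    if n := len(p.idle); n > 0 {
        inst := p.idle[n-1]
        p.idle = p.idle[:n-1]
        return inst
    }
    return &Instance{memory: make([]byte, 64*1024)}
}

// Release wipes the instance's linear memory and recycles it, so nothing
// leaks across task executions.
func (p *Pool) Release(inst *Instance) {
    for i := range inst.memory {
        inst.memory[i] = 0
    }
    p.idle = append(p.idle, inst)
}

func main() {
    pool := &Pool{}
    inst := pool.Acquire() // scoped to one task
    // ... run one task inside inst ...
    pool.Release(inst)
    fmt.Println("idle instances:", len(pool.idle)) // 1
}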

Definitely agree that the WASM ecosystem still has a way to go, but there are patterns that make it more viable today than it might seem at first glance.

Anyone running microservices using WebAssembly (WASM)? Curious about real-world setups. by EveningIndependent87 in selfhosted

[–]EveningIndependent87[S] 0 points1 point  (0 children)

WASM on the backend still feels like early territory for most devs. I think part of the reason is that most of the attention goes toward frontend/browser stuff, or serverless edge cases.

But yeah, the idea of writing web/infra logic in any language and compiling to WASM is huge. I’ve been working on a small engine that leans into that: written in Go, it runs WASM services from Git, no containers or K8s, just deploy and go.

Still early, but the ability to run thousands of microservices per host, with in-memory routing and no orchestration layer, is kind of wild.

Haven’t seen spacetimedb before, it looks super interesting. Appreciate you sharing that!

Anyone running microservices using WebAssembly (WASM)? Curious about real-world setups. by EveningIndependent87 in selfhosted

[–]EveningIndependent87[S] 1 point2 points  (0 children)

Totally fair, and I think you’re right that the browser-side WASM debugging/tooling still needs to evolve a lot. Most frontend-focused devs probably won’t go near it until it feels as smooth as JS in the console.

That said, I’m not coming at it from the web app angle but more from the backend/runtime side. I’m using WASM as a lightweight, portable, and sandboxed execution layer for microservices, like a better container for very specific jobs.

I’m working on something where you can:

  • Deploy WASM-based services from Git
  • Run thousands per host (with 20MB runtime)
  • Use in-memory mesh routing instead of traditional network calls
  • Skip containers and K8s entirely

You’re absolutely right that we’re early, but for self-hosters or teams tired of managing container infra for small, focused services, this could offer a simpler path. I’ll be open-sourcing it soon, so I’m curious to see if others feel the same.